Lecture Series

AI Colloquium

The AI Colloquium is a series of lectures dedicated to cutting-edge research in the field of machine learning and artificial intelligence, co-organized by the Lamarr Institute for Machine Learning and Artificial Intelligence (Lamarr Institute), the Research Center Trustworthy Data Science and Security (RC Trust), and the Center for Data Science & Simulation at TU Dortmund University (DoDas).

Programme

Distinguished researchers deliver lectures followed by open discussion. Unlike traditional colloquia, the AI Colloquium prioritizes interactive dialogue and fosters international collaboration. Sessions are conducted primarily in English and run for 90 minutes: a 60-minute lecture followed by a 30-minute Q&A. The colloquium meets every Thursday at 10 AM c.t. Whether in person in our lecture room at Fraunhofer Strasse 25 or via Zoom, the hybrid format ensures accessibility for all.

Day: (usually) Thursday
Start and end time: 10 AM c.t. - 12 PM
Duration of presentation: 60 minutes
Location: (usually) Lecture Room 303
3rd floor
Fraunhofer Strasse 25
Dortmund

Upcoming Events

Safe Learning Systems - Artificial Intelligence and Formal Methods

Start:
End:
Location: JvF25/3-303 - Conference Room (Lamarr/RC Trust Dortmund)
Event type:
  • Lamarr
Prof. Nils Jansen (Ruhr University Bochum)

Abstract: Artificial Intelligence (AI) has emerged as a disruptive force in our society. Its growing use in healthcare, transportation, the military, and other fields underscores the critical need for a comprehensive understanding of the robustness of an AI's decision-making process. Neurosymbolic AI seeks to develop robust and safe AI systems by combining neural and symbolic AI techniques. We highlight the role of formal methods in such techniques, serving as a rigorous and structured backbone for symbolic AI methods.
We focus on a specific branch of formal methods, namely formal verification, with a particular emphasis on model checking. The most famous application of model checking in AI is in reinforcement learning (RL). RL carries the promise that autonomous systems can learn to operate in unfamiliar environments with minimal human intervention. Why, then, haven't most autonomous systems adopted RL yet? The answer is simple: there are significant unsolved challenges. One of the most important is obvious: autonomous systems operate in unfamiliar and unknown environments. This lack of knowledge is referred to as uncertainty. Uncertainty, however, presents a problem when one seeks to employ rigorous state-based techniques such as model checking.
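
To make this tension concrete, here is a standard textbook formulation (an illustration added to this page, not material from the talk). Probabilistic model checking operates on a Markov decision process whose transition function is fully known:

\[
  \mathcal{M} = (S, A, P, R), \qquad
  P : S \times A \times S \to [0, 1], \qquad
  \sum_{s' \in S} P(s' \mid s, a) = 1,
\]

and can then verify properties such as "the maximal probability of eventually reaching a goal state is at least \(\lambda\)", written \(\mathbb{P}_{\max}(\lozenge\, \mathit{goal}) \geq \lambda\). An RL agent, by contrast, only samples from \(P\) and never observes it directly; this known-model assumption is exactly what uncertainty breaks.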
 
In this talk, we explore how various aspects of uncertainty can enter a formal system model to achieve trustworthiness, reliability, and safety in RL. The presented results range from robust Markov decision processes, through stochastic games, to multi-environment models. Moreover, we explore the direct connection between deep (neural) reinforcement learning and the (symbolic) model-based analysis and verification of safety-critical systems.
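
For readers unfamiliar with robust Markov decision processes, the following textbook-style sketch (our illustration, not a formula from the talk) captures the core idea: each state-action pair carries an uncertainty set \(\mathcal{U}(s, a)\) of plausible transition functions, and values are computed against the worst case:

\[
  V^{*}(s) = \max_{a \in A} \; \min_{P \in \mathcal{U}(s, a)} \;
  \sum_{s' \in S} P(s' \mid s, a) \bigl[ R(s, a, s') + \gamma \, V^{*}(s') \bigr],
\]

where \(\gamma \in [0, 1)\) is a discount factor. When every \(\mathcal{U}(s, a)\) is a singleton, this reduces to the standard Bellman optimality equation for MDPs.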

About the Speaker

Prof. Nils Jansen

Bio: Nils Jansen is a professor at the Ruhr University Bochum, Germany, and leads the chair of Artificial Intelligence and Formal Methods. He is also an ELLIS fellow and a full professor of Safe and Dependable AI at Radboud University, Nijmegen, The Netherlands. The mission of his research is to increase the trustworthiness of Artificial Intelligence (AI). He was a research associate at the University of Texas at Austin and received his Ph.D. with distinction from RWTH Aachen University, Germany. He holds several grants in academic and industrial settings, including an ERC starting grant titled Data-Driven Verification and Learning Under Uncertainty (DEUCE).

Archive

Past Events
