AI Colloquium
The AI Colloquium is a lecture series dedicated to cutting-edge research in machine learning and artificial intelligence, co-organized by the Lamarr Institute for Machine Learning and Artificial Intelligence (Lamarr Institute), the Research Center Trustworthy Data Science and Security (RC Trust), and the Center for Data Science & Simulation at TU Dortmund University (DoDas).
Programme
Distinguished researchers deliver lectures followed by lively discussions. Unlike traditional colloquia, however, the AI Colloquium prioritizes interactive dialogue and fosters international collaboration. The sessions are conducted primarily in English and run for 90 minutes: a 60-minute lecture followed by a 30-minute Q&A. Join us every Thursday at 10 AM c.t. for a stimulating exploration of cutting-edge topics. Whether in person in our lecture room at Fraunhofer Strasse 25 or via Zoom, our hybrid format ensures accessibility for all.
Day (usually) | Thursday
Start and end time | 10 AM c.t. - 12 PM
Duration of presentation | 60 minutes
Location (usually) | Lecture Room 303, 3rd floor, Fraunhofer Strasse 25, Dortmund
Upcoming Events
Computing and evaluating visual explanations - Simone Schaub-Meyer
- RC Trust
Abstract: Recent developments in deep learning have led to significant advances in many areas of computer vision. However, especially in safety-critical scenarios, we are not only interested in task-specific performance; there is also a critical need to explain the decision process of a deep neural network despite its complexity. Visual explanations can help to demystify the inner workings of these models, providing insights into their decision-making processes. In my talk I will first discuss how we can obtain visual explanations efficiently and effectively in the case of image classification. In the second part I will discuss potential metrics and frameworks for assessing the quality of visual explanations, a challenging task due to the difficulty of obtaining ground-truth explanations for evaluation.
Dr. Simone Schaub-Meyer
Short Bio: Simone Schaub-Meyer is an independent research group leader at the Technical University of Darmstadt and is affiliated with the Hessian Center for Artificial Intelligence. She was recently awarded the renowned Emmy Noether Programme (ENP) grant of the German Research Foundation (DFG), which supports her research on interpretable neural networks for dense image and video analysis. Her research focuses on developing efficient, robust, and understandable methods and algorithms for image and video analysis. Prior to joining TU Darmstadt, she was a postdoctoral researcher at the Media Technology Lab at ETH Zurich, working on augmented reality. She obtained her doctoral degree from ETH Zurich in 2018, where she developed novel methods for motion representation and video frame interpolation in collaboration with Disney Research Zurich.
Past Events