Lecture Series

AI Colloquium

The AI Colloquium is a series of lectures dedicated to cutting-edge research in machine learning and artificial intelligence, co-organized by the Lamarr Institute for Machine Learning and Artificial Intelligence (Lamarr Institute), the Research Center Trustworthy Data Science and Security (RC Trust), and the Center for Data Science & Simulation at TU Dortmund University (DoDas).

Programme

Distinguished researchers deliver lectures followed by discussion. Unlike traditional colloquia, the AI Colloquium prioritizes interactive dialogue and fosters international collaboration. Conducted primarily in English, the 90-minute sessions consist of a 60-minute lecture and a 30-minute Q&A. Join every Thursday at 10 AM c.t. for an exploration of cutting-edge topics, either in person in our lecture room at Fraunhofer Strasse 25 or via Zoom; the hybrid format ensures accessibility for all.

Day: (usually) Thursday
Time: 10 AM c.t. – 12 PM
Duration of presentation: 60 minutes
Location: (usually) Lecture Room 303
3rd floor
Fraunhofer Strasse 25
Dortmund

Upcoming Events

Towards reliable empirical evidence in methodological computational research: recent developments and remaining challenges

Begin: End: Location: Zoom
Event type:
  • RC Trust
Anne-Laure Boulesteix © Anne-Laure Boulesteix

Prof. Dr. Anne-Laure Boulesteix from LMU Munich

Abstract: Statisticians are often keen to analyze the statistical aspects of the so-called “replication crisis”. They condemn fishing expeditions and publication bias across empirical scientific fields applying statistical methods. But what about good practice issues in their own - methodological - research, i.e. research considering statistical (or more generally, computational) methods as research objects? When developing and evaluating new statistical methods and data analysis tools, do statisticians and data scientists adhere to the good practice principles they promote in fields which apply statistics and data science? I argue that methodological researchers should make substantial efforts to address what may be called the replication crisis in the context of methodological research in statistics and data science, in particular by trying to avoid bias in comparison studies based on simulated or real data. I discuss topics such as publication bias, cherry-picking, and the design and necessity of neutral comparison studies, and review recent positive developments towards more reliable empirical evidence in the context of methodological computational research.

About the Speaker

Prof. Dr. Anne-Laure Boulesteix


Bio: Anne-Laure Boulesteix obtained a diploma in engineering from the Ecole Centrale Paris, a diploma in mathematics from the University of Stuttgart (2001) and a PhD in statistics (2005) from the Ludwig Maximilian University (LMU) of Munich. After a postdoc phase in medical statistics, she joined the Medical School of the University of Munich as a junior professor (2009) and professor (2012). She is working at the interface between biostatistics, machine learning and medicine with a particular focus on metascience and evaluation of methods. She is a steering committee member of the STRATOS initiative, founding member of the LMU Open Science Center and president-elect of the German Region of the International Biometric Society.

Archive

Past Events
