Lecture Series

AI Colloquium

The AI Colloquium is a lecture series dedicated to cutting-edge research in machine learning and artificial intelligence, co-organized by the Lamarr Institute for Machine Learning and Artificial Intelligence (Lamarr Institute), the Research Center Trustworthy Data Science and Security (RC Trust), and the Center for Data Science & Simulation at TU Dortmund University (DoDas).

Programme

Distinguished researchers deliver lectures followed by lively discussions. Unlike traditional colloquia, however, the AI Colloquium prioritizes interactive dialogue and fosters international collaboration. Sessions are conducted primarily in English and run 90 minutes, consisting of a 60-minute lecture and a 30-minute Q&A. Join us every Thursday at 10 AM c.t. for a stimulating exploration of cutting-edge topics. Whether in person in our lecture room at Fraunhofer Strasse 25 or via Zoom, our hybrid format ensures accessibility for all.

Day: (usually) Thursday
Time: 10 AM c.t. – 12 PM
Duration of presentation: 60 minutes
Location: (usually) Lecture Room 303, 3rd floor, Fraunhofer Strasse 25, Dortmund

Upcoming Events

Will Embodied AI Become Sentient?

Start: End: Location: HG1/HS6 (August-Schmidt-Str. 4)
Event type:
  • Embodied AI
  • RC Trust
Profile picture of Prof. Edward A. Lee © Rusi Mchedlishvili
Prof. Edward A. Lee, Division of Electrical Engineering (EECS), UC Berkeley

Abstract: Today's large language models have relatively limited interaction with the physical world. They interact with humans through the Internet, but even this interaction is limited for safety reasons. According to psychological theories of embodied cognition, they therefore lack essential capabilities that lead to a cognitive mind. But this is changing. The nascent field of embodied robotics looks at properties that emerge when deep neural networks can sense and act in their physical environment. In this talk, I will examine fundamental changes that occur with the introduction of feedback through the physical world, when robots can not only sense to act, but also act to sense. Processes that require subjective involvement, not just objective observation, become possible. Using theories developed by Turing Award winner Judea Pearl, I will show that subjective involvement enables reasoning about causation, and therefore elevates robots to the point that it may become reasonable to hold them accountable for their actions. Using theories developed by Turing Award winners Shafi Goldwasser and Silvio Micali, I will show that knowledge can be purely subjective, not externally observable. Using theories developed by Turing Award winner Robin Milner, I will show that first-person interaction can gain knowledge that no objective observation can gain. Putting all these together, I conclude that embodied robots may in fact become sentient, but also that we can never know for sure whether this has happened.

Bio: Edward A. Lee has been working on embedded software systems for more than 40 years. After studying and working at Yale, MIT, and Bell Labs, he landed at Berkeley, where he is now Professor of the Graduate School in EECS. He is co-founder of Xronos Inc. and BDTI, Inc. He is a founder of the open-source software projects Lingua Franca and Ptolemy and is a coauthor of textbooks on embedded systems, signals and systems, digital communications, and philosophical and social implications of technology. His current research is focused on the Lingua Franca polyglot coordination language for distributed cyber-physical systems. More details can be found at https://ptolemy.berkeley.edu/~eal/biog.html.

Edward A. Lee @ AISoLA 2024

Appetizer

Watch Edward A. Lee's talk on "Certainty vs. Intelligence" as an appetizer.

Archive

Past Events
