Explainable Intelligent Systems
The project Explainable Intelligent Systems investigates how the explainability of artificially intelligent systems contributes to the fulfillment of important societal desiderata, such as responsible decision-making and the trustworthiness of AI.
Artificially intelligent systems increasingly augment or take over tasks previously performed by humans. This concerns both low-stakes tasks, such as recommending books or movies, and high-stakes tasks, such as suggesting which applicant to hire, which medical treatment to give a patient, or how to navigate autonomous cars through heavy traffic. Such situations raise a variety of moral, legal, and broader societal challenges. To meet these challenges, it is often claimed, we need to ensure that artificially intelligent systems deliver reliable, trustworthy, and understandable explanations for their decisions. But how can this be achieved?
Explainable Intelligent Systems (EIS) is a highly interdisciplinary research project based at Saarland University and TU Dortmund University, Germany. It brings together experts from informatics, law, philosophy, and psychology. Together, we investigate how intelligent systems can and should be designed in order to provide explainable recommendations and thus meet important societal desiderata.
Contact
EIS can be contacted via the chair of Theoretical Philosophy at TU Dortmund University.
Contact Information
Prof. Dr. Eva Schmidt
TU Dortmund University
Emil-Figge-Str. 50, Room 2.211
D-44227 Dortmund
Phone: +49 (0)231 755-2835
Mail: eva.schmidt@tu-dortmund.de