030093 Explainable Artificial Intelligence (Wiese)

Event Timeslots (1)

Wednesday
This course deals with philosophical issues surrounding the transparency and accountability of artificial intelligence (AI) systems. Traditional AI is typically transparent: its algorithms are programmed to follow specific strategies, so their performance is understandable to their programmers. Contemporary AI, by contrast, is often based on machine learning and large datasets and operates in a more opaque manner. Because of the complexity of these systems, programmers who understand how the algorithms work often cannot fully explain how an AI achieves successful outcomes, or predict the conditions under which it might fail. Put differently, there is, at least in many contexts, a trade-off between accuracy and interpretability.

Explainable AI (XAI) aims to alleviate this problem by providing insights into the functioning of current AI systems. This includes understanding the successes and failures of AI systems, which is crucial to assessing their reliability and trustworthiness. However, the concepts of explainable, interpretable, and trustworthy AI are themselves philosophically complex and ambiguous.

This seminar offers an overview of philosophical challenges related to XAI. It provides insights into contemporary approaches to enhancing AI transparency, interpretability, and trustworthiness, fostering a critical understanding of these efforts. As a result, students will be able to critically discuss current approaches in AI development, as well as in AI ethics and governance.
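To make the idea of "providing insights into the functioning" of an opaque model concrete, here is a minimal sketch of one widely used model-agnostic XAI technique, permutation feature importance. The course does not prescribe any particular method; the `black_box_predict` function, the synthetic dataset, and all names below are hypothetical illustrations, not part of the course material.

```python
import random

# Hypothetical "black-box" model: from the outside we only see inputs
# and outputs; here it happens to be a nonlinear function of two features.
def black_box_predict(x1, x2):
    return 1 if (x1 * x1 + 0.1 * x2) > 0.5 else 0

# Tiny synthetic dataset; labels are generated by the model itself,
# so baseline accuracy is perfect and any drop is due to permutation.
random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
labels = [black_box_predict(x1, x2) for x1, x2 in data]

def accuracy(predict, rows, ys):
    return sum(predict(x1, x2) == y for (x1, x2), y in zip(rows, ys)) / len(ys)

# Permutation importance: shuffle one feature column and measure how much
# accuracy drops. A large drop means the model relies on that feature.
def permutation_importance(predict, rows, ys, feature_idx, seed=1):
    rng = random.Random(seed)
    col = [row[feature_idx] for row in rows]
    rng.shuffle(col)
    permuted = [
        (c, x2) if feature_idx == 0 else (x1, c)
        for (x1, x2), c in zip(rows, col)
    ]
    return accuracy(predict, rows, ys) - accuracy(predict, permuted, ys)

base = accuracy(black_box_predict, data, labels)
imp_x1 = permutation_importance(black_box_predict, data, labels, 0)
imp_x2 = permutation_importance(black_box_predict, data, labels, 1)
print(f"baseline accuracy: {base:.2f}")
print(f"importance of x1:  {imp_x1:.2f}")  # x1 dominates the decision
print(f"importance of x2:  {imp_x2:.2f}")  # x2 matters far less
```

The sketch also illustrates the seminar's central tension: the importance scores summarize *which* inputs the model relies on without explaining *why*, so such explanations can inform, but not by themselves settle, judgments of reliability and trustworthiness.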