Philosophy of machine learning at the MCMP

We investigate machine learning and artificial intelligence in the tradition of scientific philosophy embodied by the MCMP.

Our research centers on epistemological questions in machine learning. We are particularly interested in the reliability and interpretability of machine learning systems, as well as their role in scientific reasoning. To address these topics, we draw on the rich philosophical traditions of explanation, induction, and scientific modeling.

We take a formal and interdisciplinary approach. Our work engages with the mathematical foundations of machine learning, incorporates computational methods, and applies tools from formal epistemology to explore foundational questions. Consequently, collaboration is central to our research—we work closely with scientists, especially machine learning researchers. We are actively involved in several research initiatives, including the Munich Center for Machine Learning (MCML), the Konrad Zuse School of Excellence in Reliable AI (relAI), and the AI-HUB@LMU.

We offer a unique range of advanced Master's-level courses that combine philosophical depth with technical rigor. These include seminars on the philosophy of statistics, the philosophy of natural language processing, and the mathematical foundations of machine learning (see below for a list of our regular course offerings). Our goal is to equip students with both the conceptual and technical tools necessary to critically engage with the field.

People

  • Timo Freiesleben is a postdoc in Tom Sterkenburg's Emmy Noether group. His research addresses central topics in the philosophy of machine learning, including explainable AI, robustness, and the epistemic role of machine learning in science. His current work investigates benchmarking as a dominant epistemology in machine learning.
  • Levin Hornischer is an assistant professor at the Chair of Logic and Philosophy of Language. He works on the mathematical and philosophical foundations of artificial intelligence. For example, he has (a) used programming semantics to describe the behavior of neural networks, (b) applied the axiomatic theory of social choice to analyze how neural networks aggregate preferences, and (c) used the language of reasons as an interpretability method for neural networks.
  • Luis Lopez is a postdoctoral fellow at the Chair of Philosophy of Science. He works on philosophical questions concerning machine learning models in scientific research, with a focus on the representational capacities of these models and their implications for scientific understanding.
  • Ignacio Ojea Quintana is an assistant professor at the Chair of Philosophy of Science. His work focuses on modeling social processes using multi-agent reinforcement learning and data analysis. He also uses natural language processing and network science techniques to study online behavior. His current research examines how social media and artificial agents transform social norms.
  • Katia Parshina is a doctoral fellow in Tom Sterkenburg's Emmy Noether group. Her research centers on the epistemology of machine learning, with a focus on different philosophical approaches to bias. Her current project aims to clarify the definition and function of inductive bias in machine learning in light of modern philosophical approaches to bias.
  • Tom Sterkenburg leads the Emmy Noether junior research group "From Bias to Knowledge: The Epistemology of Machine Learning." He works on the application of the mathematical theory of machine learning to philosophical questions about inductive inference. The Emmy Noether project is concerned with clarifying the notion of inductive bias.

Associated members

Visitors

Activities

Reading group

We meet every couple of weeks to discuss a (recent) paper in the philosophy of machine learning, with a focus on epistemological themes. The meetings are in hybrid format. Find more information here.

Events

Teaching

The following are courses we provide on a regular basis. To find out what is on offer in the current or upcoming semester, use the course catalogue (you can search by course title or instructor name).

Selected publications

  • Freiesleben et al. (2024) "Scientific inference with interpretable machine learning: Analyzing models to learn about real-world phenomena." Minds and Machines.
  • Hornischer & Terzopoulou (2025) "Learning how to vote with principles: Axiomatic insights into the collective decisions of neural networks." Journal of Artificial Intelligence Research.
  • Ojea Quintana et al. (2022) "Polarization and trust in the evolution of vaccine discourse on Twitter during COVID-19." PLOS ONE.
  • Sterkenburg & Grünwald (2021) "The no-free-lunch theorems of supervised learning." Synthese.