Title:
Algorithmic Contestability: Overturning Decisions on Epistemic Grounds
Abstract:
Machine learning systems increasingly make life-changing decisions about individuals, such as loan approvals, hiring, and cheating detection, raising a pressing question: how can individuals respond to negative decisions made by these opaque systems? While explainable artificial intelligence (XAI) has largely focused on algorithmic recourse—helping individuals change their features to obtain a desired outcome—the parallel problem of algorithmic contestability—helping individuals review and correct erroneous algorithmic decisions—has received far less attention, despite its central ethical and legal importance. We trace this neglect to the absence of clear formal definitions and of a systematic operationalization of contestability as an algorithmic problem. To address this gap, we conceptualize contestability as the counterpart to recourse: whereas recourse assumes the decision is correct and shows how to change the outcome, contestability begins from the belief that the decision is wrong and seeks evidence to overturn it. We prove that XAI explanations can reveal violations of epistemic norms in a model's reasoning, but that such violations alone do not justify reversing decisions. Going beyond traditional XAI, we identify three types of evidence that warrant reversal according to the decision maker's own ethical standards: predictive multiplicity, incorrect feature values, and neglected overruling evidence. We argue that each of these renders a decision normatively indefensible and thus successfully contestable. Finally, we analyze how existing EU legislation connects to our framework and argue that individuals already hold some legal rights to these forms of evidence.