Making AI Image Recognition Transparent

L3S Best Publication of the Quarter (Q4/2024 – Q1/2025) 
Category: Explainable AI

QPM: Discrete Optimization for Globally Interpretable Image Classification

Authors: Thomas Norrenbrock, Timo Kaiser, Sovan Biswas, Ramesh Manuvinakurike, Bodo Rosenhahn 

Presented at: The Thirteenth International Conference on Learning Representations (ICLR) 2025 

The paper in a nutshell: 

Our work introduces QPM, a new approach that makes AI image recognition more transparent and predictable. QPM highlights the learned differences between image categories, revealing the key features that distinguish them, even down to the eye color of a bird. This enables a deeper understanding of how the system arrives at its classifications and makes its behavior more predictable, all while keeping the model accurate and reliable. 

Which problem do you solve with your research? 

While existing methods can explain individual AI decisions, understanding the overall reasoning of an AI image recognition system and predicting its behavior remains challenging. QPM addresses this by offering global explanations that make the AI’s behavior more predictable, so potential issues can be identified and fixed before deployment, leading to safer applications. 

What is new about your research? 

QPM introduces a new approach to global interpretability in AI image recognition. It goes beyond explaining individual decisions to reveal how different image categories are related and distinguished by the AI, based on a concise set of learned features. 

What is the potential impact of your findings? 

Our findings can increase trust in AI image recognition, enhance its safety, and expand the range of applications where it can be used. Moreover, our work aligns with emerging regulations like the EU AI Act, which may mandate greater transparency in AI. QPM is particularly well-suited for problems involving related categories, a common scenario in many real-world applications. 

Paper link: arxiv.org/abs/2502.20130