TrustKG: Reliable AI for medical decision-making

L3S Best Publication of the Quarter (Q4/2024 – Q1/2025)
Category: Knowledge Graphs and Interpretable Hybrid AI 

Toward Interpretable Hybrid AI: Integrating Knowledge Graphs and Symbolic Reasoning in Medicine 

Authors: Yashrajsinh Chudasama, Hao Huang, Disha Purohit, Maria-Esther Vidal.  

Published in: IEEE Access

The paper in a nutshell: 

Artificial intelligence (AI) is transforming healthcare, but many AI systems operate as ‘black boxes,’ meaning they make decisions without clearly explaining how or why. In medicine, trust and transparency are essential, as users (i.e., clinicians, patients, and health practitioners) need to understand the reasoning behind AI-driven recommendations. Our research introduces TrustKG, a family of neuro-symbolic systems that combine knowledge graphs (KGs), which structure medical knowledge, with symbolic reasoning and machine learning to create more interpretable and reliable AI models. We apply the TrustKG neuro-symbolic systems to lung cancer, helping predict disease progression and treatment effects through link prediction (finding hidden connections in medical data) and counterfactual reasoning (exploring “what-if” treatment scenarios). Our results show that neuro-symbolic approaches can enhance medical decision-making, providing doctors with clearer, evidence-based insights. 
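To make the idea of link prediction over a medical KG concrete, here is a minimal, self-contained sketch. It is not the TrustKG implementation: the patients, biomarkers, and treatments are hypothetical, the embeddings are untrained random vectors (a real system would learn them from the KG), and the TransE-style scorer is just one common way to rank candidate links.

```python
# Illustrative sketch only -- not the TrustKG system or the paper's model.
# A toy knowledge graph of (subject, relation, object) triples, plus a
# TransE-style scorer used to rank candidate objects for a missing link.
import numpy as np

# Hypothetical medical facts; entity and relation names are made up.
triples = [
    ("patient_1", "hasBiomarker", "EGFR_mutation"),
    ("patient_1", "hasStage", "stage_III"),
    ("patient_2", "hasBiomarker", "ALK_fusion"),
    ("patient_2", "receivedTreatment", "crizotinib"),
    ("EGFR_mutation", "respondsTo", "erlotinib"),
]

# Collect entities and relations and assign small random embeddings.
# In practice these vectors are trained so that true triples score highly.
rng = np.random.default_rng(0)
entities = sorted({t[0] for t in triples} | {t[2] for t in triples})
relations = sorted({t[1] for t in triples})
dim = 16
ent_vec = {e: rng.normal(size=dim) for e in entities}
rel_vec = {r: rng.normal(size=dim) for r in relations}

def score(head, rel, tail):
    """TransE-style plausibility: smaller head + rel - tail distance
    means a more plausible triple, so we negate the norm."""
    return -np.linalg.norm(ent_vec[head] + rel_vec[rel] - ent_vec[tail])

# Link prediction: rank candidate treatments for a triple not in the KG.
query = ("patient_1", "receivedTreatment")
candidates = ["erlotinib", "crizotinib"]
ranked = sorted(candidates, key=lambda c: score(*query, c), reverse=True)
print("Predicted treatment ranking for patient_1:", ranked)
```

With trained embeddings, the ranking would reflect patterns learned from the graph (e.g., that EGFR-mutated patients tend to be linked to EGFR-targeted therapies); here it only demonstrates the mechanics of scoring and ranking candidate links.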

Which problem do you solve with your research? 

Our research addresses the problem of AI transparency in medicine. Many AI models used in healthcare today can make accurate predictions but lack explainability, making it difficult for doctors to trust and validate AI-driven recommendations. Our approach, TrustKG, is a neuro-symbolic system that integrates structured medical knowledge, represented as KGs, with data-driven learning and symbolic reasoning. This approach ensures that medical AI systems provide not only accurate predictions but also human-understandable explanations, improving trust and reliability in clinical decision-making. 

What is new about your research? 

Unlike traditional AI models, which only learn from data, our research introduces a neuro-symbolic AI approach that combines knowledge graphs, symbolic reasoning, and machine learning. This enables AI to understand medical relationships, predict hidden links in patient data, and simulate alternative treatment outcomes using counterfactual reasoning. By fusing structured medical knowledge with AI learning, we create AI systems that are both more explainable and more adaptable to complex healthcare challenges. 
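The sketch below illustrates, in miniature, the neuro-symbolic combination described above: a symbolic rule base produces a recommendation together with a human-readable justification, and a counterfactual "what-if" question is answered by editing one fact and re-running the reasoning. The rules, patient record, and the recommend function are hypothetical placeholders, not the paper's actual rules or pipeline.

```python
# Illustrative sketch only -- not the paper's method.
# Symbolic step: hypothetical rules mapping biomarkers to indicated treatments.
RULES = {
    "EGFR_mutation": "erlotinib",
    "ALK_fusion": "crizotinib",
}

def recommend(patient):
    """Derive a treatment and an explanation from the symbolic rules.
    In a full neuro-symbolic system this would be combined with scores
    from a learned model over the knowledge graph."""
    for biomarker in patient["biomarkers"]:
        if biomarker in RULES:
            treatment = RULES[biomarker]
            return treatment, f"{biomarker} indicates {treatment} (rule-based)"
    return None, "no rule fired"

# Hypothetical patient record.
patient = {"id": "patient_1", "biomarkers": ["EGFR_mutation"], "stage": "III"}
factual = recommend(patient)

# Counterfactual reasoning: what if the biomarker had been different?
counterfactual_patient = {**patient, "biomarkers": ["ALK_fusion"]}
counterfactual = recommend(counterfactual_patient)

print("Factual:       ", factual)
print("Counterfactual:", counterfactual)
```

The point of the example is the shape of the output: every recommendation comes with an explicit, inspectable reason, and alternative treatment scenarios can be explored by changing a single fact, which is what makes the combined approach interpretable for clinicians.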

What is the potential impact of your findings? 

Our findings have the potential to improve patient outcomes by helping doctors make better, evidence-based decisions. By providing transparent, interpretable AI recommendations, TrustKG can enhance diagnostic accuracy, treatment planning, and risk prediction in medicine. Additionally, this approach bridges the gap between AI developers and medical professionals, fostering greater trust in AI-driven healthcare solutions. Beyond medicine, neuro-symbolic AI techniques can be applied to other fields where explainability is crucial, such as finance, law, and policy-making, ensuring that AI-driven decisions remain transparent, fair, and accountable. 

Paper link: ieeexplore.ieee.org/document/10839382