XAI

A field of research that aims to make AI algorithms easier to understand.

XAI, or Explainable Artificial Intelligence, refers to a specific field of research within Artificial Intelligence that aims to make AI algorithms, and the ways in which they reason, easier to understand. In other words, XAI tries to make black box algorithms more ‘transparent’ so that humans can understand how the algorithm is making ‘decisions’ (such as a classification or a prediction).

In healthcare, it’s very important that all algorithms are, as far as possible, explainable so that: 

  • Clinicians and patients can understand why a patient has been given a specific diagnosis by an algorithm. 
  • Clinicians can question any diagnoses or treatment recommendations given by an algorithm if they think the algorithm made the ‘decision’ based on flawed reasoning. 
  • If something goes wrong with a patient’s care, the ‘source’ of the error can be identified and fixed. 
  • Problems such as bias can be identified.

For these reasons, and others, ‘explainability’ is key to building clinician and patient trust in AI. It is also a primary focus of the responsible AI community.

XAI researchers have developed a wide range of processes and techniques that can be used to make specific AI algorithms more explainable. These broadly fall into two categories:

Model-dependent techniques: only work for certain types of algorithms. Examples include neural network visualization, which shows how neurons are mapped across the different layers and offers insight into how the network processes data, or XGBoost feature importance scores (for features such as age, gender, visual patterns or timestamps), which indicate how much each feature contributes to the model’s prediction - with higher scores having more impact.
Model-agnostic techniques: work for all types of algorithms. Examples include partial dependence plots, which visualize the relationship between a specific feature and the model’s predictions, or SHAP (SHapley Additive exPlanations) values, which break an overall prediction down into the contributions of each feature, providing insight into which features matter most and why the model leaned toward a particular outcome.
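
To make the distinction concrete, the sketch below contrasts the two categories: a feature importance score that is specific to gradient-boosted trees such as XGBoost, and a partial dependence plot that only needs to query the fitted model. This is a minimal sketch assuming the xgboost, scikit-learn and matplotlib packages are installed; the breast-cancer dataset stands in for real clinical data.

```python
# A minimal sketch, not a production pipeline.
import xgboost
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import PartialDependenceDisplay

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# Model-dependent: importance scores that come built into gradient-boosted trees.
top_features = sorted(zip(model.feature_importances_, X.columns), reverse=True)[:5]
print(top_features)

# Model-agnostic: partial dependence of the prediction on one feature, computed
# purely by querying the fitted model with modified inputs.
PartialDependenceDisplay.from_estimator(model, X, features=["mean radius"])
```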

The ways in which XAI techniques try to make AI algorithms more explainable vary. Some aim to ‘simplify’ the algorithm, while others provide a visual explanation, for example highlighting which features in an image were deemed the most important when the algorithm was deciding how to classify it. There is no agreed ‘best’ technique; the right choice depends entirely on the specific algorithm in question and what it is designed to do.

XAI, or Explainable Artificial Intelligence, refers to a specific field of research within Artificial Intelligence that aims to make AI algorithms, and the ways in which they reason, easier to understand. In other words, XAI is the field of research focused on the principle of explainability in AI.

Explainability is an important principle in responsible AI in general. However, it is especially important in safety-critical fields such as healthcare, where there are both legal (autonomy/justice) justifications linked to the value of informed consent, and medical (beneficence/non-maleficence) justifications linked to the importance of detecting errors (for example, spurious correlation being confused with causality) that might lead to direct harm via misdiagnosis or missed diagnosis, or indirect harm via overdiagnosis.

There are two different ways to ‘operationalise’ this principle – or put it into practice – in the context of machine learning:

  1. Using only white-box algorithms, which are explainable or transparent by design. This is the ‘ideal’ way of operationalising explainability (sometimes referred to as ante-hoc), but it is not always practical because white-box algorithms are limited both in the accuracy they can achieve and in the complexity they can handle (see the sketch after this list). 
  2. Applying post-hoc explanations to black-box algorithms (such as artificial neural networks) to make their ‘decisions’ easier to understand and interpret by people. This is where the focus of XAI researchers lies.
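
As a minimal sketch of the first, ante-hoc approach (assuming scikit-learn is installed; the dataset is illustrative only), a shallow decision tree is a ‘white-box’ model whose learned rules can be read directly, so no separate explanation step is needed.

```python
# A white-box, explainable-by-design model: the learned rules are the explanation.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the human-readable decision rules learned by the tree.
print(export_text(tree, feature_names=list(X.columns)))
```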

The XAI community has developed multiple types of post-hoc explainability techniques, which provide different sorts of explanations for how algorithms work, such as:

  • Model Agnostic Explanations
    Designed to be applied to any machine learning algorithm. 
  • Model Specific Explanations
    Designed to work with only one type or class of algorithm. 
  • Global Explanations
    Designed to provide a general, relatively high-level explanation of a model’s overarching behaviour and reasoning. These techniques are fairly generalisable, but they may not provide the level of specificity required by someone who wants to understand exactly how a model reaches a particular decision, for example which variables it weights most heavily (a sketch contrasting global and local explanations follows this list). 
  • Local Explanations
    Designed to provide the level of specificity that is missing from global explanations. Local explanation techniques aim to make the reasoning behind a single prediction clear enough that it could be used to defend any ‘decision’ made by the model in question.
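
The sketch below illustrates the global/local distinction using SHAP (described in more detail below), assuming the shap, xgboost and matplotlib packages are installed; the dataset and model are illustrative only. A beeswarm plot summarises which features matter across the whole dataset (global), while a waterfall plot breaks down one individual prediction (local).

```python
# A minimal sketch of global versus local explanations with SHAP.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer(X)               # per-feature contributions for every prediction

shap.plots.beeswarm(shap_values)         # global: which features matter across the dataset
shap.plots.waterfall(shap_values[0])     # local: why the model scored this one sample as it did
```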

Of the many available XAI techniques, the most commonly used include:

  • SHapley Additive exPlanations (SHAP)
    This is an XAI technique designed to highlight the contribution of individual model attributes (e.g., biomarkers or clinical characteristics) to a specific outcome.
  • Gradient-weighted Class Activation Mapping (Grad-CAM)
    This is an XAI technique typically used to ‘explain’ image classifications made by convolutional neural networks. It produces a heatmap highlighting the areas of an image that the model considered most important when making its final decision (a minimal sketch follows this list). 
  • Local Interpretable Model-Agnostic Explanations (LIME)
    This is an XAI technique that aims to identify which model features are most relevant to the output by deliberately perturbing the model’s input data and observing how the prediction changes in response (also sketched after this list).
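
A minimal Grad-CAM sketch in PyTorch, assuming torch and torchvision are installed; the pretrained ResNet-18 and the random input tensor stand in for a real diagnostic model and a real image. The idea is to weight the last convolutional feature maps by their pooled gradients and upsample the result into a heatmap.

```python
# Grad-CAM: heatmap of the image regions that most influenced the top predicted class.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
store = {}

# Capture the activations and gradients of the last convolutional block.
model.layer4[-1].register_forward_hook(lambda m, i, o: store.update(act=o.detach()))
model.layer4[-1].register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0].detach()))

image = torch.randn(1, 3, 224, 224)              # placeholder for a preprocessed input image
scores = model(image)
scores[0, scores.argmax()].backward()            # backpropagate the top-class score

weights = store["grad"].mean(dim=(2, 3), keepdim=True)           # pool gradients per channel
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))  # weighted sum of feature maps
heatmap = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
print(heatmap.shape)                             # (1, 1, 224, 224): importance of each region
```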
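
And a minimal LIME sketch for tabular data, assuming the lime and scikit-learn packages are installed; the dataset, classifier and class names are illustrative only. LIME perturbs a single instance, watches how the model’s prediction shifts, and fits a simple local surrogate whose weights serve as the explanation.

```python
# LIME: explain one prediction by perturbing the input and fitting a local surrogate.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X.values,
    feature_names=list(X.columns),
    class_names=["malignant", "benign"],
    mode="classification",
)
explanation = explainer.explain_instance(X.values[0], model.predict_proba, num_features=5)
print(explanation.as_list())                 # top features driving this single prediction
```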

An Owkin example

In a paper published in Hepatology, Owkin scientists used two deep learning algorithms based on digitized whole-slide histopathology images (WSIs) to build models for predicting the survival of patients with hepatocellular carcinoma (HCC). First, each slide was divided into small squares (‘tiles’) and features were extracted from each tile with a pretrained convolutional neural network. Two algorithms were then applied: SCHMOWDER uses an attention mechanism on tumoral areas annotated by a pathologist, whereas CHOWDER looks for predictive features in an unsupervised manner (not requiring human expertise). 
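
For intuition, the sketch below shows a generic attention-based aggregation over tile features (multiple-instance learning) in PyTorch; it is not Owkin’s published implementation, and the feature dimension, tile count and network sizes are illustrative only.

```python
# Generic attention-based multiple-instance learning over tile features.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=128):
        super().__init__()
        # Scores each tile; softmax turns the scores into attention weights.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.head = nn.Linear(feat_dim, 1)        # e.g. a risk score for survival prediction

    def forward(self, tile_features):             # (n_tiles, feat_dim), from a pretrained CNN
        weights = torch.softmax(self.attention(tile_features), dim=0)  # one weight per tile
        slide_embedding = (weights * tile_features).sum(dim=0)         # weighted average of tiles
        return self.head(slide_embedding), weights

tiles = torch.randn(500, 2048)                    # placeholder features for 500 tiles of one slide
risk_score, tile_weights = AttentionMIL()(tiles)
print(risk_score.shape, tile_weights.shape)       # the weights indicate which tiles drove the score
```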

Both algorithms outperformed a composite score and had higher discriminatory power than a score incorporating all baseline variables associated with survival. The results were also explainable: pathological review showed that the tumoral areas most predictive of poor survival were characterized by vascular spaces, the macrotrabecular architectural pattern, and a lack of immune infiltration, all of which could be picked up by AI analysis of the WSIs. 

Further reading
  • Amann, Julia et al. 2020. ‘Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective’. BMC Medical Informatics and Decision Making 20(1): 310.
  • Bienefeld, Nadine et al. 2023. ‘Solving the Explainable AI Conundrum by Bridging Clinicians’ Needs and Developers’ Goals’. npj Digital Medicine 6(1): 94.
  • Loftus, T.J. et al. 2022. ‘Ideal Algorithms in Healthcare: Explainable, Dynamic, Precise, Autonomous, Fair, and Reproducible’. PLOS Digital Health 1(1): e0000006. https://doi.org/10.1371/journal.pdig.0000006
  • Loh, Hui Wen et al. 2022. ‘Application of Explainable Artificial Intelligence for Healthcare: A Systematic Review of the Last Decade (2011–2022)’. Computer Methods and Programs in Biomedicine 226: 107161.
  • Praveen, Sheeba, and Kapil Joshi. 2023. ‘Explainable Artificial Intelligence in Health Care: How XAI Improves User Trust in High-Risk Decisions’. In Explainable Edge AI: A Futuristic Computing Perspective, Studies in Computational Intelligence, eds. Aboul Ella Hassanien, Deepak Gupta, Anuj Kumar Singh, and Ankit Garg. Cham: Springer International Publishing, 89–99. https://link.springer.com/10.1007/978-3-031-18292-1_6 (December 17, 2022).
  • Price, W. Nicholson. 2018. ‘Big Data and Black-Box Medical Algorithms’. Science Translational Medicine 10(471): eaao5333.
  • Rajabi, E., and S. Kafaie. 2022. ‘Knowledge Graphs and Explainable AI in Healthcare’. Information 13(10): 459. https://doi.org/10.3390/info13100459
  • Ehsan, Upol, Koustuv Saha, Munmun De Choudhury, and Mark O. Riedl. 2023. ‘Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI’. Proceedings of the ACM on Human-Computer Interaction 7(CSCW1): Article 34, 32 pages. https://doi.org/10.1145/3579467
  • Watson, David S et al. 2019. ‘Clinical Applications of Machine Learning Algorithms: Beyond the Black Box’. BMJ: l886.