You

How healthcare professionals are essential to the responsible use of AI.

User-centred design – made for you, the user – is a central feature of responsible, or ethical, AI for healthcare. That’s because the best and most successful AI algorithms deployed in clinical practice are those designed with and for healthcare professionals. Healthcare professionals are best positioned to know which clinical problems are useful to ‘solve’ using AI, which features (or variables) are most important to include in clinical algorithms, how to ensure the outputs of algorithms are helpful in a clinical situation, and how to embed clinical algorithms into clinical pathways.

Healthcare professionals should be involved in all stages of design, from data creation (i.e., recording information in an electronic health record that may later be used to train an AI algorithm) and ideation (i.e., coming up with the problem statement), through data selection, model selection, training, validation, and evaluation, to implementation. They should also be heavily involved in user testing. It’s not good enough to have a highly accurate algorithm that no clinician wants to use when caring for their patients because the user interface is too complicated or the data entry process is too cumbersome.
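
To make these stages concrete, here is a minimal sketch in Python of one such lifecycle, with comments marking the points where clinician input typically enters. The dataset, column names, and model choice are hypothetical, included only to illustrate the shape of the process.

```python
# A minimal sketch of the lifecycle stages named above, with comments marking
# where clinician input typically enters. The dataset, column names, and model
# choice are illustrative assumptions, not a recommendation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Data creation: information recorded by clinicians in the EHR (hypothetical extract).
df = pd.read_csv("ehr_extract.csv")

# Data and feature selection: clinicians advise which variables are clinically meaningful.
features = ["age_years", "systolic_bp", "hba1c"]
X, y = df[features], df["readmitted_within_30_days"]

# Training and validation on a held-out split.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluation: a good metric alone is not enough; clinicians judge whether the
# outputs would actually be useful at the point of care.
print("Validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```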

Healthcare professionals also play a crucial role in the use of AI in healthcare by understanding how it is used and being aware of its limitations. For example, healthcare professionals can prevent harm coming to patients from the use of AI algorithms by being aware of ‘automation bias’: the tendency of humans to assume that computers are always right, and therefore to take any information provided by a computer at face value. Clinicians and others engaged with AI should feel comfortable and confident enough to review the outputs of any AI algorithm used in their practice and to question them if they do not look correct or feel ‘off.’

Healthcare professionals can also help refine AI algorithms once they have gone live in clinical systems by providing regular feedback to the development team on what is working well and what is not. In this way, AI algorithms and healthcare professionals should be seen as a team, not as two separate entities competing against each other.

Participation of this nature in the design, development, and implementation of AI algorithms doesn’t require healthcare professionals to become qualified data scientists. However, communication between healthcare professionals and technical professionals will be smoother if the two groups ‘speak the same language.’ It is, therefore, helpful for healthcare professionals to consider training in basic statistics and the fundamentals of data science, and helpful for data scientists to consider training in the basics of epidemiology (the study of disease). 

It is also worth developers of algorithms intended for use in healthcare being aware of the factors that increase healthcare professionals’ willingness to adopt AI.

Healthcare practitioners are willing to adopt AI when the model: 

  • is of clear clinical value 
  • is accompanied by a user-friendly and clinician-centric interface
  • has a decision-making process that is sufficiently transparent/explainable 
  • has been independently validated and both its potential and limitations have been made clear
  • allows for contextualisation and clinical judgement 
  • clearly improves patient outcomes
  • is accompanied by adequate training on how to use it. 

Of course, it is not just the acceptance of AI by healthcare practitioners that matters. It is also important that patients and the public are willing for AI to be used in the planning and execution of their care. Less is known about patient and public attitudes. In general, however, there seems to be a belief that the potential benefits of AI outweigh the risks, at least where AI is seen as capable of improving diagnosis and treatment management.

This is, however, neither a universally held nor an unconditional belief. Some patients, for example, worry about ‘uniqueness neglect’, fearing that AI models may ignore their unique characteristics and circumstances, resulting in poorer-quality care and worse outcomes, while others will only accept the use of AI in a purely supportive function. It is, therefore, also important that healthcare practitioners are aware of these concerns and are willing to take a hands-on role in improving public awareness and understanding of AI in healthcare, as this is likely to increase its overall social acceptability.

An Owkin example

Owkin worked hand in hand with doctors at multiple hospitals in the HealthChain project, culminating in a publication in Nature Medicine in early 2023. Before starting any machine learning, we aligned on the scientific question and patient population to ensure that the project was feasible and that its results could generalise to real patients. The principal investigators defined the inclusion criteria determining which patients would be part of the study, something only biomedical experts can do well, given their experience with the target population.
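
As a purely illustrative sketch (not the HealthChain code itself), such clinician-defined inclusion criteria might be encoded as an explicit, reviewable filter; every column name and threshold below is a hypothetical example.

```python
# A purely illustrative sketch (not the HealthChain code) of encoding
# clinician-defined inclusion criteria as an explicit, reviewable filter.
# Column names and thresholds are hypothetical.
import pandas as pd

def apply_inclusion_criteria(patients: pd.DataFrame) -> pd.DataFrame:
    """Keep only patients who meet the study's inclusion criteria."""
    return patients[
        (patients["age_years"] >= 18)              # adults only
        & patients["diagnosis_confirmed"]          # histologically confirmed disease
        & patients["baseline_sample_available"]    # material available for training
    ]

screened = pd.read_csv("screened_patients.csv")    # hypothetical extract
cohort = apply_inclusion_criteria(screened)
print(f"{len(cohort)} of {len(screened)} screened patients meet the criteria")
```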

Once the data was collected, in addition to data-science-led data quality tests, doctors also visualised histograms of the data, checking that the values fell within what they knew to be acceptable ranges based on their expertise. Any anomalies discovered could then be flagged to the data scientists before algorithms were trained on the data. Although Owkin provided the hospitals with the tools they needed to bridge technical and data engineering gaps, we found that successful applications of AI in the real world require a deep understanding of clinical topics and strong collaboration between all parties involved.
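
A minimal sketch of that kind of check, assuming a tabular extract and clinician-supplied plausible ranges (all column names and ranges below are hypothetical):

```python
# A minimal sketch of the clinician-in-the-loop data checks described above.
# The column names and plausible ranges are hypothetical examples; in practice
# the ranges come from the clinical team's expertise.
import matplotlib.pyplot as plt
import pandas as pd

PLAUSIBLE_RANGES = {
    "age_years": (0, 110),
    "haemoglobin_g_dl": (3, 25),
    "tumour_size_mm": (0, 300),
}

def review_column(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Plot a histogram for clinical review and return out-of-range rows."""
    low, high = PLAUSIBLE_RANGES[column]
    df[column].hist(bins=50)                       # visual check by clinicians
    plt.title(f"{column} (expected range: {low}-{high})")
    plt.show()
    # Rows outside the plausible range, to flag to the data scientists.
    return df[(df[column] < low) | (df[column] > high)]

df = pd.read_csv("cohort.csv")                     # hypothetical extract
for col in PLAUSIBLE_RANGES:
    anomalies = review_column(df, col)
    if not anomalies.empty:
        print(f"{col}: {len(anomalies)} values outside the plausible range")
```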

Further reading
  • Aggarwal, Ravi et al. 2021. ‘Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-Sectional Survey’. Journal of Medical Internet Research 23(8): e26162.
  • Choudhury, Avishek. 2022. ‘Factors Influencing Clinicians’ Willingness to Use an AI-Based Clinical Decision Support System’. Frontiers in Digital Health 4: 920662.
  • Crigger, Elliott et al. 2022. ‘Trustworthy Augmented Intelligence in Health Care’. Journal of Medical Systems 46(2): 12.
  • Esmaeilzadeh, Pouyan. 2020. ‘Use of AI-Based Tools for Healthcare Purposes: A Survey Study from Consumers’ Perspectives’. BMC Medical Informatics and Decision Making 20(1): 170.
  • Longoni, Chiara, Andrea Bonezzi, and Carey K Morewedge. 2019. ‘Resistance to Medical Artificial Intelligence’. Journal of Consumer Research 46(4): 629–50.
  • Nitiéma, Pascal. 2023. ‘Artificial Intelligence in Medicine: Text Mining of Health Care Workers’ Opinions’. Journal of Medical Internet Research 25: e41138.
  • Wu, Chenxi et al. 2023. ‘Public Perceptions on the Application of Artificial Intelligence in Healthcare: A Qualitative Meta-Synthesis’. BMJ Open 13(1): e066322.
  • Yoo, Junsang, Sujeong Hur, Wonil Hwang, and Won Chul Cha. 2023. ‘Healthcare Professionals’ Expectations of Medical Artificial Intelligence and Strategies for Its Clinical Implementation: A Qualitative Study’. Healthcare Informatics Research 29(1): 64–74.