The physiological processes in our bodies, both pathological and healthy, are extraordinarily varied. Accurately depicting this complexity therefore requires a multi-faceted, data-driven, and comprehensive approach: a multimodal approach.
What are Electronic Health Records (‘EHRs’)?
EHRs are digital records, accessible throughout the medical system nationally and even internationally, that give care providers the health history of their patients. These records include both clinical data and complementary data. Clinical data are the simple, non-invasive measurements collected in a physician’s office. Complementary data are paraclinical measurements (e.g. blood, urine, and lab tests) and expert conclusions drawn from specific modalities (e.g. a histopathologist’s tumor characterization, a radiologist’s tumor size measurements, and identified genetic mutations).
What is multimodality?
Multimodality refers to the analysis of data in all its different forms (or modalities). In medical research, it offers alternative data sources and a more in-depth way to inspect the body. Today, these modalities include omics, imaging, electric current variations, and niche measurements. Patient risk and protective factors can be discovered via EHR measurements; however, if you wish to gain a deeper insight into a disease and understand its specific traits, you must analyze multiple modalities. In this article we discuss four clear advantages of studying multiple modalities: (i) new discoveries are hidden within modalities that have not yet been thoroughly analyzed; (ii) modalities enrich the information found in EHRs; (iii) some medical situations are multifaceted and only start to make sense with a multimodal, holistic approach; and (iv) contemporary ML is well adapted to multimodality.
Four types of modality:
- Firstly, omics: Genomics, epigenomics, transcriptomics, lipidomics, proteomics, metabolomics, and radiomics;
- Secondly, imaging: Microscopy, X-rays, Magnetic Resonance Imaging (‘MRI’), ultrasound, nuclear, and elastography-based imaging;
- Thirdly, electric current variation: Electrocardiogram (‘ECG’) for the heart and Electroencephalogram (‘EEG’) for the brain; and
- Finally, niche measurements: Cerebrospinal fluid analysis.
1. New discoveries hide within modalities that have not yet been thoroughly analyzed
Researchers cannot easily organize data from modalities such as imaging into tables. Instead, experts interpret these images within their specific medical context, evaluate any known criteria, and record the results and their conclusions in EHR reports. Recently, novel Machine Learning (‘ML’) methods have learned how to process these previously inaccessible signal sources. This progress has opened up the analysis of a variety of data types, such as raw images (minimally processed images); time-series data (data collected at different time points); and high-dimensional data (data with many features).
In a recent paper published by Owkin in Nature Medicine, a deep learning algorithm was successfully trained on histopathology Whole Slide Images (‘WSIs’) of patients affected by malignant mesothelioma (‘MM’) to predict overall survival. MM is an aggressive cancer currently diagnosed on the basis of histological criteria. The Owkin model (called MesoNet) outperformed current diagnostic scores for MM and found discriminative regions (biomarkers) within the tumor that were predictive of overall survival. These findings highlight that, with ML’s help, we can discover hidden biomarkers within modalities in their original forms, not just in the filtered states recorded in EHRs.
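The paper describes MesoNet’s actual architecture; purely as an illustration of the weakly supervised idea behind this family of models, here is a minimal numpy sketch that scores precomputed tile embeddings with a (hypothetical) learned linear model and aggregates the most extreme scores into a slide-level risk prediction. All names, shapes, and values are stand-ins, not the published implementation.

```python
import numpy as np

def tile_risk_score(tile_features, weights, bias=0.0, k=5):
    """Toy sketch of extreme-tile aggregation for slide-level prediction.

    tile_features: (n_tiles, n_features) array of precomputed tile embeddings.
    weights, bias: a learned linear scoring model (hypothetical values here).
    k: number of highest- and lowest-scoring tiles retained per slide.
    """
    scores = tile_features @ weights + bias               # one score per tile
    scores = np.sort(scores)
    extremes = np.concatenate([scores[:k], scores[-k:]])  # most informative tiles
    return float(extremes.mean())                         # slide-level risk score

# Usage with random stand-in data (real inputs would be CNN tile embeddings).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))   # 1,000 tiles, 64-d embeddings
w = rng.normal(size=64)
risk = tile_risk_score(features, w, k=10)
```

Because only the slide-level survival label is needed for training, this kind of model can surface which tissue regions drive the prediction without any tile-level annotation.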
2. Modalities enrich the information found in EHRs
Here, we will look at Coronary Artery Disease (‘CAD’). This example shows how leveraging the power of ML to analyze new modalities can enrich the information found in EHRs and improve patient care.
CAD is the most common type of heart disease and the leading cause of death worldwide. The current method to assess cardiac risk and guide clinical treatment is to look for obstructions in damaged vessels. The challenge? Despite significant advances, this method does not always predict coronary syndromes, leaving patients with a major residual cardiovascular risk. One of the leading blind spots in this risk assessment appears to be vascular inflammation, a condition we do not yet have a widely available and inexpensive way to detect.
The solution: an integrated ML and multimodal approach to identify new biomarkers of vascular inflammation. We know that inflammation of the vessel walls and the composition of the surrounding adipose (‘fat’) tissue are closely linked. A recent study by Dr. Evangelos Oikonomou of the University of Oxford, published in the European Heart Journal, used ML techniques to identify a patient’s fat radiomic profile (‘FRP’), a biomarker that captures changes in the composition of the vessels’ adipose tissue. By identifying this biomarker, ML gives radiology another crucial tool for predicting vascular inflammation, complementing the information found in EHRs and thus diminishing cardiovascular risk.
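Radiomic signatures like the FRP are built from many engineered quantitative features computed over a region of interest in a scan. As a hedged illustration of what an individual radiomic feature is, the sketch below computes a few standard first-order statistics; the function name and feature choice are ours, not the study’s, and a real signature combines hundreds of shape, texture, and wavelet features.

```python
import numpy as np

def first_order_features(roi):
    """Compute a few first-order radiomic features over a region of interest.

    roi: array of voxel intensities (e.g. CT attenuation values).
    Returns a small, hypothetical feature dictionary for illustration.
    """
    roi = np.asarray(roi, dtype=float).ravel()
    mean, std = roi.mean(), roi.std()
    skewness = ((roi - mean) ** 3).mean() / (std ** 3 + 1e-12)
    hist, _ = np.histogram(roi, bins=32)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # intensity-histogram entropy
    return {"mean": mean, "std": std, "skewness": skewness, "entropy": entropy}
```

Feature vectors like this, computed over the perivascular fat, are what an ML model can then correlate with inflammation.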
3. Some situations are inherently multifaceted and only start to make sense with a multimodal, holistic approach
We can see a great example of this in each episode of the fictional TV series House, which is based on solving mysterious internal medicine cases. Internal medicine is rife with systemic diseases that can affect multiple systems of the body simultaneously. In each episode, the differential diagnosis runs wild, and the patient is often suspected of suffering from the worst imaginable combination of conditions. Hypotheses fly around the room, but nobody agrees. So tests are ordered and (often painful) modalities are collected. Finally, House gathers diverse enough (multimodal) information (sometimes by breaking into the patient’s home) and the case is solved. Here, the multimodal impact is not incremental, but pivotal.
4. Contemporary machine learning adapts well to multimodality
Within ML, multimodality is an emerging field that can be mapped into five broad but cooperating areas: representation, translation, alignment, fusion, and co-learning. A multimodal approach therefore allows data scientists to effectively integrate different, heterogeneous sources of data.
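As a minimal sketch of one of these areas in practice, the snippet below contrasts two common fusion strategies, assuming each modality has already been encoded into a fixed-size vector. The embedding names and sizes are hypothetical placeholders, not taken from any particular study.

```python
import numpy as np

# Hypothetical per-modality embeddings (in practice, encoder outputs).
rng = np.random.default_rng(0)
wsi_embedding = rng.normal(size=128)   # e.g. histology encoder output
rna_embedding = rng.normal(size=64)    # e.g. RNA-Seq encoder output
ehr_features  = rng.normal(size=16)    # e.g. tabular clinical variables

# Early (feature-level) fusion: concatenate, then train one joint model.
joint = np.concatenate([wsi_embedding, rna_embedding, ehr_features])

# Late (decision-level) fusion: one prediction per modality, then combine.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

preds = [sigmoid(e @ rng.normal(size=e.shape[0]))   # toy per-modality classifiers
         for e in (wsi_embedding, rna_embedding, ehr_features)]
late = float(np.mean(preds))  # simple average; weights could also be learned
```

Early fusion lets the model learn cross-modal interactions directly, while late fusion keeps each modality’s pipeline independent and degrades gracefully when one source is missing.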
A multimodal ML approach differs fundamentally from, and is potentially superior to, the classical statistical paradigm. For example, Tiulpin et al. compared a multimodal ML model to a statistical reference model for predicting disease progression in knee osteoarthritis. If successful, such a model could accelerate drug development for this disease and prevent millions of whole-joint replacement surgeries. The team found that a multimodal approach combining raw radiographic data, clinical examination results, and the patient’s previous medical history outperformed the statistical model by 4 points in predictive performance.
We can see another example that illustrates the potential of combining ML and multimodality in Owkin’s recent Nature Communications publication. This paper explains how Owkin developed a novel ML model to predict tumor RNA-Seq expression from whole-slide histology images (WSIs), thereby bypassing expensive and time-consuming RNA-sequencing techniques. By predicting RNA-Seq information directly from the WSIs, the model effectively translates the histology (WSI) modality into a genomic (RNA-Seq) modality. We can take the analysis even further by using the predicted RNA-Seq expression to infer the microsatellite instability (‘MSI’) status of the tumor directly from the WSI. Access to both the RNA-Seq expression and the histology WSI revealed enrichment in T-cell activation and immune activation in MSI-high patients. These findings align with the current literature and demonstrate the biomarker-discovery potential of such a multimodal approach.
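The published model is a deep neural network; purely as a toy illustration of the "translation" idea, the sketch below fits a closed-form linear ridge map from hypothetical slide-level image features to a gene-expression vector, using synthetic stand-in data throughout.

```python
import numpy as np

def fit_translation(X_wsi, Y_expr, alpha=1.0):
    """Toy 'translation' model: ridge regression from slide-level image
    features to a vector of gene-expression values.

    This is only a sketch of the idea; the published model is a deep
    network operating on tile-level features, not a linear map.
    """
    n_feat = X_wsi.shape[1]
    # Closed-form ridge solution: (X^T X + alpha I)^-1 X^T Y
    return np.linalg.solve(X_wsi.T @ X_wsi + alpha * np.eye(n_feat),
                           X_wsi.T @ Y_expr)

# Synthetic stand-in data: 200 slides, 64 image features, 50 genes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
true_W = rng.normal(size=(64, 50))
Y = X @ true_W + 0.1 * rng.normal(size=(200, 50))

W = fit_translation(X, Y, alpha=0.1)
Y_hat = X @ W  # predicted expression for each slide
```

The point of the sketch is the shape of the problem: one modality (image features) on the input side, another modality (an expression vector) on the output side, with a learned map in between.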
In conclusion, as the analysis of new modalities produces novel biomarkers useful in mainstream clinical practice, EHRs will expand to include these modalities and offer clinicians a fuller picture of a patient’s medical history. Furthermore, ML tools provide an opportunity to seamlessly integrate multiple modalities into an all-encompassing analysis framework. Combined with the expertise of the medical researchers who analyze these new modalities, ML delivers a far more comprehensive picture than EHRs alone.
At Owkin, we have built a diverse Lab team that can foster an integrated multimodal process to unlock new discoveries.
- Pierre C. et al., Deep Learning-Based Classification of Mesothelioma Improves Prediction of Patient Outcome, Nature Medicine, 25(10), 1519-1525 (2019).
- Carlos MI. et al., Image-Based Cardiac Diagnosis With Machine Learning: A Review, Frontiers in Cardiovascular Medicine, 7 (2020).
- Evangelos KO. et al., A Novel Machine Learning-Derived Radiotranscriptomic Signature of Perivascular Fat Improves Cardiac Risk Prediction Using Coronary CT Angiography, European Heart Journal, 40(43), 3529-3543 (2019).
- Isabelle B. et al., The EMIF-AD Multimodal Biomarker Discovery Study: Design, Methods and Cohort Characteristics, Alzheimer’s Research & Therapy, 10(1), 64 (2018).
- Baltrušaitis T. et al., Multimodal Machine Learning: A Survey and Taxonomy, IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2), 423-443 (2018).
- Tiulpin A. et al., Multimodal Machine Learning-Based Knee Osteoarthritis Progression Prediction from Plain Radiographs and Clinical Data, Scientific Reports, 9(1), 1-11 (2019).
- Schmauch B. et al., A Deep Learning Model to Predict RNA-Seq Expression of Tumours from Whole Slide Images, Nature Communications, 11(1), 1-15 (2020).