Sharing treatment recommendations, online learning, and artificial intelligence (AI) may help molecular tumor boards (MTBs) provide more appropriate recommendations for precision advanced cancer care, new research suggests.
“In this quality improvement study, our learning program significantly improved the quality of treatment recommendations by MTBs,” senior study author Takayuki Yoshino, MD, PhD, of the Department of Gastroenterology and Gastrointestinal Oncology of the National Cancer Center Hospital East in Chiba, and coauthors wrote in JAMA Oncology. “Treatment recommendations made by an AI system showed higher concordance than that for MTBs, indicating the potential clinical utility of the AI system.”
Worldwide, MTBs review comprehensive genome profiling test results of patients with cancer and develop specific treatment recommendations for each patient based on those results. Most treatment recommendations for biomarkers with low evidence levels have been enrollment in investigational new drug (IND) trials, and MTBs vary in the treatments they recommend.
Dr. Yoshino and colleagues investigated whether an online training program that shares treatment recommendations for biomarkers with low evidence levels helps standardize the quality of MTB recommendations. They also assessed the efficacy of an AI-based annotation system.
Overall, 14 individual physicians and 27 MTBs completed the study. The AI program made treatment recommendations using a database of published information about approved drugs, guidelines, and clinical trials.
Participants first made treatment recommendations for 25 simulated cases to test treatment concordance. They then participated in a one-day training program on making appropriate treatment recommendations, especially in IND trials related to biomarkers with low evidence levels. Participants also received a central committee’s treatment recommendations for cases with common genomic alterations.
After the training session, participants made treatment recommendations on 25 new simulated cases, and the study team compared their pre- and post-training results. Before and after the training, the central committee evaluated concordance between participants’ treatment recommendations and the central committee’s treatment recommendations.
The researchers analyzed the proportion of MTBs that met prespecified accreditation criteria for post-training evaluations (approximately 90% concordance for biomarkers with high evidence levels and approximately 40% for those with low evidence levels).
They found that the MTB accreditation rate for post-training tests was significantly higher than the prespecified threshold: 55.6% (95% CI, 35.3%-74.5%; P<0.001), but it was only 35.7% (95% CI, 12.8%-64.9%; P=0.17) for individual physicians.
MTB concordance increased from 58.7% (95% CI, 52.8%-64.4%) before training to 67.9% (95% CI, 61.0%-74.1%) after training (OR, 1.40 [95% CI, 1.06-1.86]; P=0.02).
Concordance increased among physicians from 55.3% (95% CI, 42.3%-67.6%) before training to 61.0% (95% CI, 47.8%-72.8%) after training (OR, 1.26 [95% CI, 0.85-1.87]).
For the AI participant, pre- to post-training concordance increased from 80.0% (95% CI, 60.0%-91.4%) to 88.0% (95% CI, 68.7%-96.1%; P=0.03), significantly higher than MTB concordance.
MTBs improved significantly in concordance for biomarkers with low evidence levels (OR, 1.32 [95% CI, 1.00-1.73]; P=0.03), but individual physicians did not. (Table)
More Work Needed to Optimize Patient Care
For Stephen Gruber, MD, PhD, MPH, who was not involved in the study, the results suggest ways MTBs may better interpret equivocal genomic information in this era of comprehensive genomic profiling. Delivering precision medicine at scale is challenging, and interpreting and acting on genomic alterations with low evidence levels is one of the most difficult challenges shared by clinicians across the globe, he says.
“AI tools are rapidly enhancing our ability to synthesize and process large amounts of complex data,” he adds. “The increased concordance observed with the application of AI in this study should lead to what I like to call augmented clinical intelligence.”
Dr. Gruber notes, though, that the AI participant was a tool to enhance consistency and performance; AI did not replace human clinical judgment or expertise.
“The simulated cases that serve as the basis for this excellent study are a great approach to measuring standardization, but clinical outcome data are needed to investigate whether these tools improve patient outcomes,” he says.
“Precision medicine is rapidly advancing, and keeping up with the deluge of discoveries requires new ways of processing information and bringing it to clinical practice,” Dr. Gruber advises. “Health systems need to deliver complex cancer care in an equitable manner that assures quality care for patients no matter where they are diagnosed or treated.”