1. Unlu and colleagues compared RECTIFIER, a large language model (LLM), against human study staff to evaluate patient eligibility for a clinical trial.
2. RECTIFIER demonstrated similar accuracy, slightly higher sensitivity and specificity, and a lower per-patient cost compared with the study staff.
Evidence Rating Level: 2 (Good)
Study Rundown: A clinical trial must screen potential participants based on the inclusion and exclusion criteria – a task traditionally done manually by study staff. However, this approach is prone to human error and incurs significant costs due to its resource-intensive nature. Unlu and colleagues developed RECTIFIER, a program powered by GPT-4 (an LLM), to identify eligible study participants using patient responses. Gold standard answers to 13 target criteria questions were first created by a blinded expert clinician. Then, the researchers compared the performance of three RECTIFIER variants against study staff at screening 1509 patient notes for a heart failure clinical trial. The performance metrics were sensitivity, specificity, accuracy, and the Matthews correlation coefficient (MCC). The study found that both RECTIFIER and study staff’s answers closely aligned with the gold standard, with accuracies greater than 90% for both methodologies. With regards to determining symptomatic heart failure, RECTIFIER had an accuracy of 97.9% versus 91.7% for the study staff. Additionally, a variant of the RECTIFIER model incurred only a cost of $0.02 per patient screened. Overall, this study demonstrated an LLM’s ability to effectively screen patients for clinical trials at a low cost.
Click here to read the study in NEJM AI
Relevant Reading: Artificial Intelligence Applied to Clinical Trials: Opportunities and Challenges
In-Depth [randomized controlled trial]: RECTIFIER was developed to determine patient eligibility for an ongoing heart failure clinical trial. Unlu and colleagues used information from 100 screened patients to design and evaluate RECTIFIER prompts and used another 400 randomly selected patients to validate the model. Subsequently, information from a further 1509 patients was reviewed by an expert clinician and classified as either eligible or ineligible based on the clinical trial criteria. This data set was used to compare the performances of three RECTIFIER variants (RECTIFIER with a single-question strategy, a combined-question strategy, and one powered by GPT-3.5 instead of GPT-4) against that of the study staff. All four groups produced answers closely aligned with the gold standard. Study staff achieved accuracies ranging from 91.7% to 100%, while RECTIFIER’s accuracy ranged from 97.9% to 100%. RECTIFIER also demonstrated slightly higher sensitivity and specificity (92.3% vs 90.1% for sensitivity; 93.9% vs 83.6% for specificity). RECTIFIER’s performance decreased slightly with the combined-question strategy, in which inclusion and exclusion questions were asked together, and with GPT-3.5. Finally, the combined-question approach cost only $0.02 per patient screened with RECTIFIER, versus $0.11 with the single-question strategy. The authors concluded that LLMs, such as RECTIFIER, have the potential to screen patients for clinical trials more efficiently and to greatly reduce the associated costs. However, the results may have limited generalizability given the study’s focus on heart failure screening, and the approach likely requires further refinement for broader clinical applications.
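For readers unfamiliar with the metrics reported above, the following sketch shows how sensitivity, specificity, accuracy, and the Matthews correlation coefficient are computed from a screening confusion matrix. The counts used here are hypothetical placeholders for illustration only, not data from the study.

```python
import math

# Hypothetical confusion-matrix counts for a screening task
# (eligible = positive class); NOT the study's actual data.
tp, fn = 120, 10   # eligible patients correctly / incorrectly flagged
tn, fp = 155, 15   # ineligible patients correctly / incorrectly flagged

sensitivity = tp / (tp + fn)                  # true-positive rate
specificity = tn / (tn + fp)                  # true-negative rate
accuracy = (tp + tn) / (tp + fn + tn + fp)    # overall agreement

# Matthews correlation coefficient: balanced summary in [-1, 1],
# robust when eligible and ineligible groups differ in size.
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)

print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}  "
      f"accuracy={accuracy:.3f}  MCC={mcc:.3f}")
```

MCC is often preferred over raw accuracy in screening settings like this one, since accuracy alone can look high even when one class (e.g., eligible patients) is rare.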
Image: PD
©2024 2 Minute Medicine, Inc. All rights reserved. No works may be reproduced without expressed written consent from 2 Minute Medicine, Inc. Inquire about licensing here. No article should be construed as medical advice and is not intended as such by the authors or by 2 Minute Medicine, Inc.