The following is a summary of “Performance of Google Bard and ChatGPT in mass casualty incidents triage,” published in the January 2024 issue of The American Journal of Emergency Medicine by Gan et al.
Researchers sought to assess and compare the accuracy of ChatGPT, Google Bard, and medical students in performing START triage during mass casualty incidents.
They conducted a cross-sectional analysis of mass casualty incident (MCI) triage performance using the Simple Triage And Rapid Treatment (START) method. A validated questionnaire containing 15 diverse MCI scenarios assessed triage accuracy, and responses underwent content analysis across four categories: “Walking wounded,” “Respiration,” “Perfusion,” and “Mental Status.” Statistical analysis compared outcomes across the groups.
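For context, the four questionnaire categories correspond to the sequential decision points of the standard START algorithm. A minimal sketch of that decision logic is below; the patient fields and function names are illustrative assumptions, not taken from the study's questionnaire.

```python
# Illustrative sketch of the standard START triage decision sequence.
# Field names are hypothetical; thresholds follow the widely taught
# START criteria (RR > 30/min, capillary refill > 2 s, obeys commands).
from dataclasses import dataclass

@dataclass
class Patient:
    can_walk: bool            # "Walking wounded" check
    breathing: bool           # breathing after airway repositioning
    respiratory_rate: int     # breaths per minute (0 if not breathing)
    cap_refill_over_2s: bool  # perfusion: cap refill > 2 s / no radial pulse
    obeys_commands: bool      # mental status: follows simple commands

def start_triage(p: Patient) -> str:
    if p.can_walk:
        return "Minor"       # green: walking wounded
    if not p.breathing:
        return "Expectant"   # black: not breathing after airway opened
    if p.respiratory_rate > 30:
        return "Immediate"   # red: respiration criterion
    if p.cap_refill_over_2s:
        return "Immediate"   # red: perfusion criterion
    if not p.obeys_commands:
        return "Immediate"   # red: mental status criterion
    return "Delayed"         # yellow: all checks passed but cannot walk
```

Each scenario answer from a model or student can then be scored against the category this sequence assigns.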
Google Bard demonstrated a significantly higher accuracy of 60%, whereas ChatGPT achieved an accuracy of 26.67% (P = 0.002). In a previous study, medical students achieved an accuracy rate of 64.3%; no significant difference was observed between Google Bard and medical students (P = 0.211). Qualitative content analysis of the “Walking wounded,” “Respiration,” “Perfusion,” and “Mental Status” categories also indicated superior performance by Google Bard over ChatGPT.
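On a 15-scenario questionnaire, the reported percentages correspond to 9/15 and 4/15 correct, assuming each scenario was scored once per model (an inference from the percentages, not a figure stated in this summary); the P-values come from the study's own statistical analysis.

```python
# Back-of-the-envelope check of the reported accuracies, assuming each
# of the 15 scenarios was scored once per model. The counts (9 and 4)
# are inferred from the percentages, not taken from the paper directly.
scenarios = 15
bard_correct = 9       # 9/15 = 60%
chatgpt_correct = 4    # 4/15 ≈ 26.67%

bard_acc = round(100 * bard_correct / scenarios, 2)
chatgpt_acc = round(100 * chatgpt_correct / scenarios, 2)
print(bard_acc, chatgpt_acc)  # 60.0 26.67
```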
Google Bard outperformed ChatGPT in correctly executing mass casualty incident triage, achieving an accuracy of 60% versus ChatGPT’s 26.67%, a statistically significant difference (P = 0.002).
Reference: sciencedirect.com/science/article/pii/S0735675723005764