Applying artificial intelligence in emergency medicine may lead to advancements in care, but it also raises ethical and practical concerns.
The application of artificial intelligence (AI) in medicine promises great advancements in care. This new technology, however, raises ethical and practical concerns for practitioners, particularly in the field of emergency medicine. Kenneth V. Iserson, MD, MBA, explores and addresses these concerns in a guide he developed and published in the American Journal of Emergency Medicine.
Informed Consent
Dr. Iserson spoke with Physician’s Weekly regarding key concerns about using AI in the emergency medical setting, specifically the importance of informed consent. Dr. Iserson explains, “Informed consent is a key element of Western medical practice. It helps preserve patient autonomy.”
Applying informed consent to the use of AI poses some challenges. Dr. Iserson elaborates, “To provide this information [informed consent] to patients, physicians first must understand the big picture of how AI systems are developed, function, and integrated into clinical medicine. Key to this is the understanding that due to their source training material and programming, AI may make errors, be biased, may not produce the same answer every time, and not even a system’s developers may be able to explain how it came to a decision.”
Application of AI
He continues, “Physicians must also know where in their practice AI is used—before they see the patients (triage), in nursing assessments, evaluation of lab and imaging results, to generate differential diagnoses, etc.” Furthermore, a complete understanding of AI includes knowledge of its drawbacks. As Dr. Iserson clarifies, “Limitations: A serious limitation is that if AI is ‘baked into’ the system so that it is so tightly integrated that patients cannot refuse its use, informed consent will be useless. Patients and clinicians will be concerned about privacy and accuracy. Privacy issues come with knowledge about how safe the information that the AI program acquires from the patient is. Will it be used or made available to teach other AI programs or be otherwise accessible?”
In addition to privacy concerns, Dr. Iserson discussed the importance of understanding AI’s accuracy: “As for accuracy, they [practitioners] will need to be able to explain their system (and, of course, have a system) for resolving discrepancies between the physician’s plan/diagnosis and that recommended by AI. The article suggests several that are currently in use.”
Administrator Involvement
Perhaps the greatest challenge in integrating AI into emergency medicine is its ever-evolving nature and administrators’ involvement in its implementation. Dr. Iserson addresses these concerns: “Lastly, this is a rapidly developing area. Lots of money is being made, and administrators who often will decide whether to use AI may be more concerned with their own and the institution’s bottom line than the efficacy and accuracy of the AI system, its applicability, the purpose for which it is being used, and the physician and patient’s autonomy and welfare.”
When asked about the future of AI and its use in emergency medicine, Dr. Iserson shared, “In the future, I see the need for incorporating honest, factual training about AI into every part of the physician’s education—from medical school to CME. The field will constantly change, and the big players will lure medical institutions and physicians to use (sometimes inappropriately) their AI systems, probably using the same sales techniques now used by the pharmaceutical and medical equipment industries.”