May 2023 President’s Message

Need to be cautious about relying on artificial intelligence

by Maurice Duggins, MD —

MSSC held a great membership meeting May 2 on artificial intelligence. The discussion was engaging, informative and, at times, eye-opening.

I would like to thank our keynote speaker, Monica Coley from Amazon, and our three panelists, including Sam Antonios, MD. I appreciated them sharing their perspectives on artificial intelligence and helping answer some of our questions about AI in medicine. Thanks also to Wichita State University for organizing the event and to NetApp for hosting it.

As physicians, we are trained to make diagnoses and to treat patients accordingly. We never want to miss a diagnosis. As overachievers, we want our diagnostic accuracy to be at or very close to 100%.

By one definition, a diagnostic error is the failure either to establish an accurate and timely explanation of a patient’s health problems or to communicate that explanation to the patient. So it’s no surprise that when we hear about a tool that can reduce our diagnostic errors and potentially increase our diagnostic accuracy, we want to acquire it.

Artificial intelligence is such a tool.

To cite one example, AI has improved the consistency and accuracy of reading pathology slides. Cancer is something no one wants to miss, so if AI can help increase the detection of colon, prostate or skin cancer on those slides, we welcome it.

AI is also helping radiologists look for breast cancer on mammograms, and AI algorithms have aided in predicting whether a terminally ill cancer patient is likely to die within six months.

If there is digital information, AI may help improve the reading of that information.

However, as with all things new in medicine, we need to take a cautious approach to avoid creating new problems that could have been prevented. At a minimum, we need to recognize and correct those problems before they become widespread.

Some of you may have seen your AI-aided cameras mislabel certain objects as humans, certain humans as animals or certain animals as objects. Just because something is computer generated does not mean it is the best or most accurate information on its own. We must also remember that the programs and algorithms behind AI are created by fallible humans like us. Hence, biases and predispositions can be embedded in these tools.

Over the years, clinical decision rules and tools have come and gone. Their purpose has been to improve on the information we collect from patients so that we avoid both errors of commission and errors of omission. The best thing about these clinical decision tools (CDTs) is the validation studies behind them, conducted in both large and small settings and communities that provide health care. Prospective evaluations to validate the information and outcomes are still important. We should be willing to wait for validation studies before accepting all things labeled as AI or machine learning.