
AI Misdiagnosed My Illness. Who is Liable?
Once upon a time, artificial intelligence (AI) fell squarely into the realm of science fiction. These days, however, the use of AI has become commonplace in almost every industry, including healthcare.
From diagnoses to imaging, administrative processes to drug development, AI is rapidly establishing its foothold in the healthcare industry and shaping our expectations and methods of doing business. But while these advances often provide greater efficiency and life-saving improvements, they also present many new issues and potential pitfalls. Among them is the question of who may be held responsible when the use of AI causes a misdiagnosis and harms a patient.
Outlined below is some important information about healthcare liability, who may be held responsible when this technology harms a patient, and how the experienced attorneys at Sommers Schwartz can help you navigate this emerging new area of law.
The Rise of AI Diagnoses in Healthcare
Recent advances in AI technology are revolutionizing how healthcare providers track, identify, and treat diseases. Advanced algorithms give doctors faster, more accurate, and more personalized results. In fact, in several studies, large language model (LLM) AI technology (such as ChatGPT) significantly outperformed human clinicians in diagnostic accuracy across multiple scenarios and skill levels.
However, despite these promising results, AI isn’t infallible. This technology is only as accurate and innovative as the humans who feed it information. As a result, there are shortcomings and pitfalls in using AI in healthcare. Some problems arise when healthcare providers rely solely on AI to diagnose a patient’s illness. These problems include:
- Inherent biases in tracking and processing data.
- An inability to understand complex medical cases.
- Data privacy concerns.
- Difficulty with rare or atypical diseases.
- The potential for misinterpretation.
- Failure to accurately process information.
- Programming glitches and malware corruption.
This list highlights the importance of clinician oversight when using AI to diagnose an illness. While these programs have a lot to offer medicine, they certainly aren't foolproof. In fact, in some pediatric studies, AI had a misdiagnosis rate of over 80%.
Who is Liable When AI Gets It Wrong?
Now that the AI genie is out of the bottle, there’s no question this technology is here to stay. With so many potential benefits, doctors will want to continue to utilize these tools. But what does that mean for your diagnosis and treatment? And who is liable if the technology gets it wrong?
Because AI is relatively new to healthcare, most states (including Michigan) do not have laws explicitly addressing liability for AI-generated diagnoses, and neither does the federal government. Instead, most courts apply existing legal frameworks to medical malpractice claims involving AI.
These laws place most of the responsibility directly on the shoulders of the physicians directing the care. However, they also include healthcare organizations and technology vendors in certain situations. Here’s how.
A Physician’s Duty of Care in Michigan
In Michigan, healthcare professionals are responsible for providing patients with a certain level of care. This standard requires doctors to treat their patients with the same level of expertise and attention that other medical professionals would in the same situation.
When it comes to diagnosing an illness, this means your doctor must:
- Conduct a reasonable physical exam.
- Evaluate your medical and family history.
- Order appropriate diagnostic tests.
- Correctly interpret test results.
- Consider a reasonable differential diagnosis.
- Quickly consider and rule out serious or life-threatening conditions.
- Make timely referrals to specialists.
- Provide proper follow-up care and monitoring.
Under these rules, it doesn't matter whether your doctor uses AI to help diagnose your illness. Physicians who choose to utilize these tools must not blindly rely on AI results; proper oversight is required. Otherwise, the doctor can be held liable for the harm caused by a wrong diagnosis.
Is Anyone Else Liable for AI Misdiagnosis?
Most of the time, healthcare providers bear the primary responsibility for diagnosing their patients, whether they use AI or not. However, in some situations, healthcare organizations and even technology vendors could share some of that burden.
A healthcare organization might share liability if it was negligent in choosing an unreliable or inappropriate AI system. It could also be responsible for poor implementation or for failing to properly train its employees.
Similarly, a technology vendor might share responsibility under Michigan's product liability laws if it knowingly sold or marketed a faulty AI program or was negligent in its testing and development. A vendor might also be liable for failing to provide adequate warnings about the program's limitations or shortcomings.
Did AI Technology Misdiagnose Your Condition?
At the end of the day, AI is only as intelligent as the humans who utilize it. This technology cannot think or create independently. It is merely a tool that can help medical professionals do a better job; it shouldn't replace them. If you were harmed by a medical misdiagnosis involving AI, we want to hear from you. Contact Sommers Schwartz for a free consultation and let our team of qualified attorneys help determine who is responsible for your harm and get you the compensation you deserve.