
Our Legal System May Not Be Able to Handle Black-Box AI Malpractice Claims

Physicians who rely on AI algorithms to produce diagnoses negligently place patients’ wellbeing in the hands of new technologies, and the current legal system is not adequate to address injuries when something goes wrong. These new “black box” solutions are advancing, but they do not yet outperform human diagnostic and treatment recommendations. While AI can answer many questions, it raises just as many about the future of patient safety and medical malpractice.

Innovation & Risk

The possibilities and liabilities raised by the widespread adoption of AI technologies in healthcare are enormous. If the technologies prove they can deliver consistent, repeatable results, they have the potential to improve patient safety and minimize the risk of medical malpractice.

However, they also have the potential to cause great harm, and physicians may feel obligated, indeed pressured, to follow AI recommendations and diagnoses rather than trust their own judgment. They may do so even when years of medical training tell them that the “black box” is clearly wrong.

Limited Understanding of Innovative Technologies

86% of healthcare organizations, tech vendors, and life science companies use at least one form of AI in their operations. However, most of the algorithms behind these programs are classified as “unknown,” meaning the mechanisms driving their outputs cannot be determined and clearly traced. That is because these systems are built on artificial neural networks designed to mimic the human brain, and these “self-teaching” networks rely on highly complex and often opaque decision-making processes.
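To make the opacity problem concrete, the sketch below (our illustration, not drawn from any particular clinical system) trains a small neural network on synthetic stand-in data using scikit-learn’s MLPClassifier. The model readily produces a prediction, but inspecting it yields only matrices of learned weights, none of which corresponds to a rule a physician or a court could trace.

```python
# Minimal sketch of the "black box" problem, assuming scikit-learn is available.
# The data and labels here are synthetic placeholders, not real patient records.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-in for patient data: 200 cases, 10 numeric features each,
# with a synthetic "diagnosis" label derived from a hidden rule.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7] > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# The model will output a "diagnosis" for a new case...
new_case = rng.normal(size=(1, 10))
print("predicted label:", model.predict(new_case)[0])

# ...but "explaining" that output means reading thousands of learned weights
# spread across layers, none of which states a human-readable medical rule.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
```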

Current Tort Laws and AI Technology

Existing tort laws are designed around human decision making and do not account for the decision-making processes of machines. Moreover, AI’s unpredictability limits the ability to pursue product liability claims: if the designers cannot predict how an autonomous product will function, they cannot be held liable for the product’s actions.

Complicating the issue of liability is the vast number of people involved in creating an AI system. Because of the opaque nature of the neural network’s decision-making process, it is difficult to identify whether the software developer, hardware engineer, designer, manufacturer, or another party was responsible for the error that produced a misdiagnosis, missed diagnosis, or poor treatment recommendation.

Current law doesn’t adequately protect patients from AI errors. Modifications to existing law or the passage of new laws are required to protect patient safety in the 21st century.
