
Artificial Intelligence and Healthcare: Legal risks and mitigation strategies

Oct 9, 2024

Disruptive technologies such as AI, digital diagnostics and therapeutics, and machine learning (ML) are revolutionising healthcare by enabling unprecedented growth and innovation. 79 per cent of healthcare organisations are utilising generative AI, and the industry is transforming at a rapid pace. AI is enhancing patient care and management through telemedicine, drug discovery, imaging analysis, monitoring, and predictive analytics, while raising challenging questions of law relating to professional malpractice, privacy and product liability claims.

Key legal risks

One of the key issues facing AI arises from the limitations of the data uploaded into its algorithm for ML. As the quality of AI depends upon the quality of the data used to train it, the risk of data inputs being incomplete, selective, unrepresentative of the total population, or biased cannot be overstated in healthcare. Extensive monitoring of the data used to train the algorithm, assessment of the benefit-risk profile for the intended use and evaluation for potential bias are therefore pertinent to avoid legal consequences for stakeholders.

AI misinformation or inaccuracy poses a considerable risk in healthcare, where precision is paramount. While AI fosters innovation, it also amplifies the risk of wrongful diagnosis or treatment by generating plausible but incorrect or misleading information, making it crucial for healthcare professionals to critically evaluate and validate its outputs. An inaccurate AI diagnosis may have multiple liability and cost ramifications.

AI’s data-intensive nature leaves patients’ personal and sensitive health records vulnerable to cybersecurity and privacy breaches. Healthcare organisations must implement robust security protocols to prevent data breaches. Recent incidents in India, including the ransomware attack on the All India Institute of Medical Sciences and the data leak from the Indian Council of Medical Research, highlight existing vulnerabilities of the Indian healthcare system.

Statutory and common law compliance impacts all stakeholders. While India is yet to implement comprehensive legislation regulating the use of AI in healthcare, experience from other, more mature jurisdictions may help in developing a robust and efficient legal framework.

The World Health Organisation (WHO) mandates the regulation of digital and public health; the United Nations Charter and the European Union’s international health regulations also require harmonisation of the regulations governing AI use in healthcare. However, operators currently need to navigate complex regulatory landscapes to avoid liability under local requirements.

Most AI tools leverage preliminary open-source content and are therefore more exposed to infringement claims. Understanding the ownership and licensing of AI technology is crucial to prevent such claims.

Determining how liability for AI-related inaccuracies should be attributed across multiple stakeholders (hospital, developer, licensor, and physicians) is challenging. Transparency in AI’s decision-making processes is vital to ensure accountability. For example, ethical issues may arise in AI-driven diagnosis, which often lacks transparency, making it difficult for physicians to explain the reasoning behind a decision, disclose any underlying AI bias to patients, and allocate responsibility for any resulting liability.

AI also raises antitrust concerns, as its use can lead to algorithmic collusion among competitors, inadvertently fixing prices, which is closely scrutinised by competition law authorities. While the attribution of liability in cases involving ‘algorithmic collusion’ is evolving, it is important to assess this risk and consider monitoring algorithmic pricing tools to detect and prevent such situations.

Medical liability

Medical negligence under tort law would typically consider the severity of the injury, the expected standard of care and the AI tool’s causal relationship to the injury when allocating liability. Under vicarious liability, operators may be held liable for the acts or omissions of their doctors or employees. An AI tool construed as a product may attract strict or product liability (depending upon severity) or design defect claims against the developers, manufacturers or licensors, while wrongful operation of AI may trigger professional malpractice claims. An AI tool that is deployed may be considered an agent of the organisation or physicians using it, capable of holding the principal liable for breach.

Jurisprudence is rapidly evolving as the application of AI in healthcare becomes an integral part of patient care. The Texas Court of Appeals (June 2024) held an AI-based medical device manufacturer liable for a defective product after it provided erroneous guidance to a surgeon. The U.S. Court of Appeals (November 2022) held the developer and seller of a drug-management software liable on product liability and negligence claims due to a defective AI user interface that led physicians to mistakenly believe they had scheduled medication which had not, in fact, been scheduled. The Supreme Court of Alabama (May 2023) held a physician liable for relying upon an erroneous AI software recommendation for cardiac health screening that wrongly classified a young adult with a family history of congenital heart defects as normal.

Risk mitigation strategies

To mitigate the risks associated with AI in healthcare, stakeholders must upskill their workforce with comprehensive manuals and training on safe usage and troubleshooting errors. Developers must transparently disclose information about existing biases, provide mechanisms to explain decision making and develop processes for data security. Operators should also inform patients about AI usage and its role in their diagnostic or treatment decisions, obtain informed consent from patients, provide an opportunity to withdraw consent, anonymise sensitive information and establish multiple layers of encryption.

Operators should adopt appropriate risk allocation methods and strategies, specifically identifying obligations for curtailing liability, indemnification and insurance coverage in case of erroneous output or misuse.

Conclusion

Despite the foreseeable risks, the diverse benefits of using AI in healthcare make its adoption an imperative for continued relevance and for maintaining a competitive edge, unveiling a landscape brimming with potential and complexity. AI brings efficiency and innovation to the forefront, provided its actors understand and mitigate the risks and associated liability, fostering a transparent and ethical environment of trust.

This article has been written by Aditya Patni, Partner, and Achint Kaur, Counsel, at Khaitan & Co.

Source: Economic Times
