ICMR releases draft ethical guidelines for adoption of AI in biomedical research and healthcare
Mumbai, September 27, 2022:
The Indian Council of Medical Research (ICMR) has released draft ethical guidelines for the application of artificial intelligence in biomedical research and healthcare.
This draft, formulated by ICMR’s Expert Group, aims to guide the effective yet safe development, deployment and adoption of AI-based technologies in biomedical research and healthcare delivery. These guidelines are to be used by experts and ethics committees reviewing research proposals involving the use of AI-based tools and technologies.
These guidelines apply to AI-based tools created for health and biomedical research and to applications involving human participants and/or their biological data. Considering the far-reaching implications of AI-based technologies in healthcare, the guidelines are applicable to health professionals, technology developers, researchers, entrepreneurs, hospitals, research institutions, organizations, and laypersons who want to utilize health data for biomedical research and healthcare delivery using AI technologies and techniques. Both healthcare and AI technologies are rapidly advancing, and the associated ethical dimensions will evolve with them.
The induction of AI into healthcare has the potential to address significant challenges in the field, such as diagnosis and screening, therapeutics, preventive treatments, clinical decision making, public health surveillance, complex data analysis, and predicting disease outcomes. This list is likely to keep growing in the future. AI and machine learning (ML) are being used in drug discovery and in epitope identification for vaccine development, where they have the potential to accelerate the process and make it more cost effective.
Advances in AI have also opened up new opportunities to tackle the shortage of skilled workforce. AI and ML tools can handle large and diverse datasets efficiently and with high accuracy, and offer data-driven solutions for predicting disease risk and devising mitigation strategies. The incorporation of AI-based tools and techniques is expected to improve healthcare delivery by making healthcare accessible and affordable and by improving the quality of care provided.
An ethically sound policy framework is essential to guide the development of AI technologies and their application in healthcare. Further, as AI technologies are increasingly applied in clinical decision making, it is important to have processes that establish accountability in case of errors, for safeguarding and protection. Like any other diagnostic tool, AI-based solutions cannot themselves be held accountable for their decisions and judgments. It is therefore important to assign accountability and responsibility at all stages of development and deployment of AI for health.
While ICMR’s general principles for biomedical research and healthcare delivery are applicable to AI for health, the field also raises some unique ethical considerations.
The draft guidelines stress the adoption of ten ethical principles addressing issues specific to AI for health. These principles are patient-centric and are expected to guide all stakeholders in the development and deployment of responsible and reliable AI for health.
Autonomy is the first principle. When AI technologies are used in healthcare, there is a possibility that the system functions independently and undermines human autonomy; applying AI in healthcare may transfer the responsibility for decision making into the hands of machines. Humans should have complete control of the AI-based healthcare system and of medical decision making, and AI technology should not interfere with patient autonomy under any circumstances. The ‘Human in The Loop’ (HITL) model of AI allows humans to oversee the functioning and performance of the system. Patients must have complete autonomy to choose or reject AI technologies. There should be effective and transparent monitoring of human values and moral considerations at all stages of AI development and deployment, says the draft.
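The ‘Human in The Loop’ idea can be illustrated with a minimal sketch: a prediction is only surfaced as a suggestion when the model is confident, and all other cases are escalated to a clinician. The function name, threshold, and labels below are purely illustrative assumptions, not part of the ICMR guidelines.

```python
# Hypothetical HITL routing: the AI never decides on its own; high-confidence
# outputs become suggestions for a clinician to confirm, and low-confidence
# outputs are referred to a human outright. All names here are illustrative.

def triage(prediction: str, confidence: float, threshold: float = 0.9) -> str:
    """Route an AI prediction; auto-suggest only above the confidence threshold."""
    if confidence >= threshold:
        return f"suggest:{prediction}"   # clinician still confirms the suggestion
    return "refer-to-clinician"          # a human takes the decision entirely

print(triage("benign", 0.97))     # high confidence: surfaced as a suggestion
print(triage("malignant", 0.55))  # low confidence: escalated to a human
```

The design point is that even the high-confidence branch only *suggests*, keeping the final medical decision with a human, as the draft requires.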
The second principle is safety and risk minimization. Protection of the safety, dignity, well-being, and rights of patients/participants must have the highest priority. A robust set of control mechanisms is necessary to prevent unintended or deliberate misuse, discrimination or stigmatization. AI technologies are prone to cyber-attacks and can be exploited to gain access to sensitive and private information, thus threatening the security and confidentiality of patients and their data. It must be ensured that the data is completely anonymized and kept offline or delinked from global networks before its final use. The Ethics Committee (EC) and other stakeholders must ensure that there is a favorable benefit-risk assessment, it stated.
The third principle is trustworthiness, the most desirable quality of any diagnostic or prognostic tool used in healthcare. Clinicians need to build confidence in the tools they use, and the same applies to AI technologies. Conflicts of interest arising at any stage of development must be disclosed and made available on public platforms. Developers of AI technologies built outside India have an ethical responsibility to be explicitly transparent.
The fourth principle is data privacy. AI-based technology should ensure privacy and personal data protection at all stages of development and deployment. Individual patients’ data should preferably be anonymized unless keeping it in an identifiable format is essential for clinical or research purposes. All algorithms handling patient data must ensure appropriate anonymization before any form of data sharing. The data protection law currently in force in India is the Information Technology Act, 2000.
To ensure the privacy and security of health data, the Indian government is bringing in new healthcare data protection legislation, the Digital Information Security in Healthcare Act (DISHA) Bill and the Personal Data Protection (PDP) Bill; once enacted, these will be binding on the ethical guidelines for AI technologies. Users should have control over the data collected from them for the purpose of developing and designing AI technologies for healthcare, and should be able to access, modify, or remove such data from the AI technology at any point in time, the guidelines pointed out.
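As a concrete, if simplified, illustration of de-identification before data sharing, the sketch below drops direct identifiers and replaces the record key with a salted one-way hash. The field names and salt handling are assumptions for illustration only; note that salted hashing is pseudonymization, which is weaker than the full anonymization the guidelines describe.

```python
import hashlib

# Illustrative de-identification step before sharing patient records:
# direct identifiers are removed, and the patient ID is replaced by a
# salted SHA-256 digest so records can be linked without exposing identity.
# Field names and the salt are hypothetical, not from the ICMR draft.

DIRECT_IDENTIFIERS = {"name", "phone", "address"}

def pseudonymize(record: dict, salt: str) -> dict:
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    clean["patient_ref"] = hashlib.sha256(
        (salt + str(record["patient_id"])).encode()
    ).hexdigest()[:16]
    del clean["patient_id"]  # the raw identifier never leaves the source
    return clean

rec = {"patient_id": "P001", "name": "A. Kumar", "phone": "98xxxxxxxx",
       "address": "Mumbai", "age": 54, "diagnosis": "T2DM"}
print(pseudonymize(rec, salt="per-project-secret"))
```

In practice, re-identification risk also depends on quasi-identifiers such as age and location, which is why the guidelines push anonymization review to the ethics committee rather than leaving it to the algorithm alone.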
The fifth principle is accountability and liability. Accountability is the obligation of an individual or organization to account for its activities, accept responsibility for its actions, and disclose the results in a transparent manner. AI technologies intended for deployment in the health sector must be ready to undergo scrutiny by the concerned authorities at any point in time, and must undergo regular internal and external audits to ensure their optimum functioning; these audit reports must be made publicly available. AI-based solutions may malfunction, underperform, or make erroneous decisions that can harm the recipient, especially if left unsupervised. Responsibility will be assigned to the health professional who uses the technology, it added.
The sixth principle is optimization of data quality. Before deploying AI technologies, the possibility of bias must be considered, identified and thoroughly scrutinized. Training data must be free of sampling bias, which can compromise data quality and accuracy, and researchers must ensure data quality.
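One simple form the sampling-bias scrutiny above could take is comparing group proportions in the training data against known reference proportions for the target population, and flagging badly under-represented groups. The group labels, reference proportions, and tolerance below are hypothetical, chosen only to make the check concrete.

```python
from collections import Counter

# Hypothetical sampling-bias check: flag groups whose share of the training
# data falls short of their expected share in the target population by more
# than a tolerance. Groups and proportions here are illustrative only.

def representation_gaps(train_groups, reference, tol=0.10):
    counts = Counter(train_groups)
    total = len(train_groups)
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tol:
            gaps[group] = round(expected - observed, 3)
    return gaps

train = ["urban"] * 90 + ["rural"] * 10          # 90% urban training sample
reference = {"urban": 0.35, "rural": 0.65}       # assumed population shares
print(representation_gaps(train, reference))     # rural is under-represented
```

A check like this would be run before training, as the draft asks, rather than after deployment when the bias has already reached patients.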
The seventh principle is accessibility, equity and inclusiveness. AI developers and the concerned authorities must ensure fairness in the distribution of AI technology, and organizations should endeavour to provide equal opportunity and access to AI technology across different user groups. Special consideration and priority must be given to groups that are underprivileged or lack the infrastructure to access such technology.
The eighth principle is collaboration. Data sharing in any national or international collaboration, while safeguarding privacy and security, is critical when healthcare data is used in AI research and development, as it may contain very sensitive information about participants. Indian laws and guidelines (DISHA and PDP) are to be adhered to.
The ninth principle is non-discrimination and fairness. The dataset used for training the algorithm must be accurate and representative of the population in which it will be used, and the researcher has the responsibility to ensure data quality. If any adverse events arise from malfunction of the AI technology, there should be an appropriate redressal mechanism for the affected person.
The tenth principle is validity. AI technology in healthcare must undergo rigorous clinical and field validation before being applied to patients/participants; this is necessary to ensure safety and efficacy. Applying unvalidated AI-based decisions in clinical settings can lead to clinical mismanagement or potential health hazards, so any AI-based tool needs to be validated.
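Clinical validation of a diagnostic AI tool is typically reported with metrics such as sensitivity and specificity on a held-out test set. The sketch below shows that calculation on toy labels; the data and the choice of these two metrics are assumptions for illustration, not requirements stated in the draft.

```python
# Illustrative validation metrics for a binary diagnostic tool: sensitivity
# (true-positive rate) and specificity (true-negative rate) on held-out
# labels. The label vectors below are toy values, not real clinical data.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 0, 0, 0, 0, 1]   # ground-truth diagnoses (held-out set)
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]   # tool's predictions on the same cases
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

Field validation would additionally test the tool prospectively in the deployment setting, since performance on retrospective data alone can overstate real-world safety and efficacy.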
Besides this, the draft guidelines also contain guiding principles for stakeholders involved in the development, validation and deployment of AI in healthcare, ethical review procedures for medical AI, the informed consent process, and the governance of AI technology use in healthcare and research. PharmaBiz