13 January 2021

Digital Health 101: Artificial Intelligence in Medicine

It is evident that Artificial Intelligence (AI)1 is changing the face of medicine: it detects pathological findings, assists in the early diagnosis of high-risk patients, tailors medication to illnesses more precisely, and supports physicians in medical decision-making. For example, AI today identifies a “diabetic eye” – a condition that diabetic patients are at significant risk of developing, and whose early detection is critical for treatment and for preventing blindness; AI “reviews” patients' data in the intensive care unit and helps identify those who are about to deteriorate; AI “marks” moles suspected of being malignant; and it even assists in directing patients within the hospital to the ER.
 
As the number of AI applications expands, and the scope of their adoption in medical diagnosis and treatment grows, it becomes necessary to formulate a suitable regulatory framework.
 
For example, the use of AI technology raises the risk of discrimination, given the biases inherent in medical databases2. Moreover, difficulty arises in a “black box” situation, where there is no clear linkage between the input – the data entered (e.g. medical records of patients in a similar condition) – and the final output (such as a recommendation for treatment). If we add to this the intricacy of “learning” algorithms, which over time may generate different outputs even in similar situations, we see that existing legal categories fail to provide suitable solutions.
 
Presently, AI-based medical devices are classified in Israel according to existing legal categories, such as the regulation of medical devices, in the absence of regulation addressing the unique characteristics of AI. By comparison, in the U.S. the FDA is taking initial steps to regulate those characteristics, including a pilot program for pre-approval of an AI product combined with monitoring of its “learning” throughout its lifecycle. The FDA recently published an “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device Action Plan”3 and announced that it would publish draft guidance in 2021. The draft will propose what should be included in the algorithm specifications and in the algorithm change protocol, which explains how changes will be made while the product remains safe and effective.
 
At the same time, American and European regulatory bodies are issuing non-binding guidance on AI, which may eventually become binding regulation and influence Israeli regulation4.

What must be considered when developing an AI-based product to be integrated in medical care?

When developing software intended for use in medical care, one must address the relevant regulatory requirements. Note that the definitions of “medical device” and “software as a medical device” are complex, and it is therefore important to seek advice on the matter in advance.
 
Furthermore, one must ensure compliance with regulatory requirements concerning privacy protection, both while using medical information to develop the product (see our previous post on secondary use of medical data) and throughout the product's lifecycle, insofar as it continues to collect or analyze data.
 
One must also examine whether the use of AI conflicts with existing regulation, such as the Physicians' Ordinance, which stipulates that only a licensed physician may perform acts constituting the practice of medicine. It follows that the use of AI for certain medical procedures may be permissible only subject to a physician's evaluation and decision. This raises questions: Is the physician entitled to define a sphere of action in which the AI will “decide”? Must every AI “recommendation” be subject to a physician's approval? Is a physician's ability to supervise the system and halt it in real time sufficient? To this we add the concern that, in the future, an argument might be raised that this constitutes a deviation from accepted practice and may be deemed medical negligence.
 
In addition, several principles emerge from the non-binding guidance published in the AI field that may eventually become mandatory regulation. These are the main ones:
 
Transparency – it is important to document the database and the method by which the AI produces its outputs, in order to identify and prevent biases. The following questions must be addressed: What is the source of the data and how were the data processed? How was the algorithm developed and how was it validated? How do the algorithm's elements work together? How does the algorithm produce its output?
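By way of illustration only, the following minimal Python sketch shows one way a development team might record such documentation alongside the model; the field names, values and file name are hypothetical assumptions, not requirements drawn from any guidance.

```python
# Hypothetical "model documentation" record; every field and value here is an
# illustrative assumption, not a regulatory requirement.
import json
from datetime import date

model_documentation = {
    "data_sources": ["de-identified EHR extract, 2015-2019"],              # where the data came from
    "preprocessing": ["removed records with missing lab values",
                      "converted lab units to SI"],                        # how the data were processed
    "development": "gradient-boosted classifier, 5-fold cross-validation", # how the algorithm was developed
    "validation": "held-out set from a second hospital, AUROC reported",   # how it was validated
    "known_limitations": ["patients over 85 are under-represented"],       # documented gaps that may cause bias
    "documented_on": date.today().isoformat(),
}

# Keeping the record next to the model artifact makes it available for review and audit.
with open("model_documentation.json", "w") as f:
    json.dump(model_documentation, f, indent=2)
```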
 
Explainability – an explanation must be provided of how the product works: its functioning, the type of information it processes, the significance of its output, its limitations, and its “decision-making” process. The explanation must enable the physician to understand the AI product's process and results and to explain them to the patient.
 
Fairness – it must be ascertained that the product is robust and valid, so as to prevent discrimination; one must verify both the information on which the algorithm was developed and its output, i.e. its recommendations, in order to prevent discrimination against vulnerable populations.
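As a purely illustrative sketch, and assuming a Python pipeline with a labelled validation set, verifying the output can be as simple as comparing an error rate across patient subgroups and flagging large gaps; the toy records, group names and tolerance below are assumptions:

```python
# Hypothetical fairness spot-check: compare false-negative rates across patient
# subgroups. The records, subgroup labels and tolerance are illustrative only.
from collections import defaultdict

# (subgroup, true label, model prediction) - toy stand-ins for a real validation set
records = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

positives = defaultdict(int)  # patients who truly have the condition, per subgroup
missed = defaultdict(int)     # of those, the ones the model failed to flag

for group, truth, prediction in records:
    if truth == 1:
        positives[group] += 1
        if prediction == 0:
            missed[group] += 1

fnr = {group: missed[group] / positives[group] for group in positives}
print("false-negative rate by subgroup:", fnr)

TOLERANCE = 0.10  # assumed acceptable gap; a real project would have to justify this value
if max(fnr.values()) - min(fnr.values()) > TOLERANCE:
    print("WARNING: subgroup gap exceeds tolerance - investigate the data and the model for bias")
```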
 
Protection of patients' rights – all the guidelines stress the importance of respecting human rights and of addressing the impact on a person's bodily integrity and his or her right to privacy and autonomy.
 
Precise, reliable and well-protected technology – the system must be accurate, reliable and secure, so that its outcomes can be replicated, errors can be rectified, and manipulation or harm to patients can be prevented. The FDA's action plan encourages the development of good machine learning practices.
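A minimal sketch, assuming a Python training pipeline, of two routine practices that support replication of outcomes and protection against tampering: fixing random seeds and recording a checksum of the released model file. The seed value and file name are assumptions.

```python
# Illustrative reproducibility and integrity measures; the seed and file name are assumptions.
import hashlib
import random

SEED = 42          # fixing the seed lets training and evaluation be replicated
random.seed(SEED)  # a real pipeline would also seed numpy, torch, etc.

def file_sha256(path: str) -> str:
    """Checksum of a released artifact; re-verifying it later helps detect tampering."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example use: record the checksum when the model is released and compare it
# again before the model is deployed or updated.
# released_checksum = file_sha256("model_v1.bin")
```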
 
Mechanisms for supervision and control, including human oversight – it must be ascertained that medical decision-making remains with the health care provider, as noted above, and that supervision and control mechanisms are in place to identify and correct errors in real time.
 
These are general principles; at a later date we will present a proposal based on them, offering practical tools for developing and using AI products in the medical field.
 
Summary
Each new technology challenges existing regulation. As AI becomes an integral part of medical practice, it is important to consider the issues it raises as early as the product's planning and development stage, and to track developments that may influence the product's approval and use. Proper advance planning will help overcome potential obstacles later and will lead to a safer, better product for the patient's care.
 
For questions or counseling: tamart@arnon.co.il
This post is intended to present general information only. It should not be construed as legal advice or legal opinion and should not be relied upon
 
 

1  For an in-depth review of the legal aspects of AI in medicine, see: Roy Keidar and Tamar Tavory, “Legal and Regulatory Aspects of AI in Medicine,” EMERGING TECHNOLOGIES: THE ISRAELI PERSPECTIVE (Lior Zemer, Dov Greenbaum and Aviv Gaon, eds., Nevo, 2021, in Hebrew).
2  See Heidi Ledford, “Millions Affected by Racial Bias in Health Care Algorithm,” NATURE, 31 Oct. 2019.
3  Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, January 2021, https://www.fda.gov/medical-devices/software-medical-device-samd/artific...
4  Examples of such guidance include the FTC's guidance on Using Artificial Intelligence and Algorithms, April 2020, https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artifi...
Council of Europe, Ad hoc Committee on Artificial Intelligence, Towards Regulation of AI Systems – Global Perspectives on the Development of a Legal Framework on Artificial Intelligence Systems Based on the Council of Europe's Standards on Human Rights, Democracy and the Rule of Law, Dec. 2020, at https://rm.coe.int/prems-107320-gbr-2018-compli-cahai-couv-texte-a4-bat-.... This report also refers to the report of Israel's committee on AI Ethics and Regulation, Nov. 2019, headed by Prof. Karine Nahon.
