04 August 2021

Artificial Intelligence in Medicine – The Federal Trade Commission’s Guidance 

Artificial intelligence (AI) is among the era’s most promising technologies, underpinning new achievements in a wide variety of fields, from autonomous vehicles and cybersecurity to medicine and finance.
In our last update, we reviewed the European Commission’s proposed AI regulation (“the EU AI proposal”). For the proposal’s effect on medical devices and their regulation, see here.
In this post, we describe the U.S. Federal Trade Commission (FTC) guidance, “Aiming for Truth, Fairness and Equity in Your Company’s Use of AI” (“the FTC guidance”).1 It follows earlier guidance published in 2020.2
Notably, both the EU proposal and the FTC guidance apply to all AI systems, yet each refers explicitly to AI software in the medical field and to AI-based medical devices, acknowledging their unique characteristics and the need to regulate them.

The Federal Trade Commission

Currently, there is no law specific to AI systems in the U.S. The FDA has approved AI-based software as medical devices through the existing legal framework of medical device regulation.3 In addition, the FDA launched a Pre-Cert program for approving and monitoring certain companies’ AI-based products throughout their lifecycle.4 Furthermore, the FDA has published its action plan on the matter and announced that new draft guidance on AI and software as a medical device will be published in 2021.5
However, the FTC is not waiting for future regulation and has stepped up to address some of AI’s legal and ethical challenges. In its latest guidance, the FTC warned that it will use its authority to take enforcement action against companies that sell or use AI systems that result in discrimination based on race or other protected classes. The FTC’s authority rests mainly on the prohibition of unfair and deceptive practices (Section 5 of the FTC Act).
In particular, the FTC expressed concern about how an “apparently ‘neutral’ technology can produce troubling outcomes, including discrimination,” and cited as an example COVID-19 prediction models that appear to carry a high risk of bias.6
 
Here are some key takeaways for companies operating in the medical field in the U.S.
  • A company should build a solid and inclusive data set to prevent biased or discriminatory outcomes – the FTC advises considering ways to improve the data set, designing the model to account for data gaps, and, in light of any shortcomings, limiting the use of the model.
  • A company should examine both the input and the output to prevent discrimination. Specifically, the FTC advises testing algorithms before use and periodically after marketing (a simple illustration of such an output test appears after this list).
  • A company is advised to act with transparency and independence – the FTC encourages using transparency frameworks, publishing the results of independent audits, and opening data or source code to outside inspection.
  • A company should be cautious about exaggerating the algorithm’s ability to deliver fair and unbiased results.
  • A company should be careful about how it acquires the data that powers its model and must not mislead users about how that data is used.
  • According to the FTC’s 2020 guidance, if customers and consumers are denied something of value based on algorithmic decision-making, the company is required to explain why. This means the company must know what data the model uses and how that data is utilized to reach a decision, and it must be able to explain that to the consumer. Changing the terms of a deal based on automated tools likewise requires notifying consumers of the change. Under the FTC Act, if a practice “causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition – the FTC can challenge the use of that model as unfair.” Accordingly, a company should balance its product’s benefits against the harm it may cause customers and consumers, and make sure it does more good than harm.
  • A company is required to be accountable for its algorithm’s performance, and the FTC warns that, if it is not, the FTC will take action in appropriate cases. In its 2020 guidance, the FTC recommends asking the following questions: 
  1. How representative is your data set?
  2. Does your data model account for biases?
  3. How accurate are your predictions based on big data?
  4. Does your reliance on big data raise ethical or fairness concerns?
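To make the advice on pre- and post-marketing testing concrete, the following is a minimal, hypothetical sketch of an output audit in Python: it compares a model’s positive-prediction rates across demographic groups on a held-out sample. The group labels, sample values, and alert tolerance are illustrative assumptions, not part of the FTC guidance.

```python
# Hypothetical sketch of a periodic output audit: compare a model's
# positive-prediction rates across demographic groups. The data layout,
# group labels, and alert tolerance below are illustrative assumptions.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the share of positive predictions for each demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

# Example audit run on a held-out sample (values are made up):
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(f"Positive rates by group: {rates}")
if gap > 0.2:  # illustrative tolerance; a real audit would justify this choice
    print(f"WARNING: parity gap of {gap:.2f} exceeds tolerance - investigate.")
```

A real audit would of course use a documented fairness metric, a justified tolerance, and the company’s actual demographic categories; the point is that such a check can be run routinely, before use and periodically after marketing.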
The FTC guidance is a warning signal to companies operating AI systems in the U.S.
A recent publication indicates that the most frequent problems in AI’s use in the medical context are unrepresentative data samples, a high likelihood of model overfitting (the model excessively tuning itself to the training data set), and imprecise reporting of the study population and intended model use.7 These flaws might widen health disparities that already exist in the healthcare system.
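For illustration, overfitting of the kind the study describes can often be surfaced with a simple comparison of training and held-out performance. The sketch below is a hypothetical example assuming scikit-learn is available; the synthetic data, model choice, and tolerance are illustrative, not drawn from the cited study.

```python
# Hypothetical illustration of an overfitting check: a large gap between
# training accuracy and held-out accuracy suggests the model has tuned
# itself to the training data. Data, model, and tolerance are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# An unconstrained decision tree can effectively memorize the training set.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)

print(f"train accuracy: {train_acc:.2f}, held-out accuracy: {test_acc:.2f}")
if train_acc - test_acc > 0.1:  # illustrative tolerance
    print("Possible overfitting: performance does not generalize.")
```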
Companies active in the U.S. medical field are advised to act according to the FTC’s guidance and to be particularly alert to the shortcomings of unrepresentative medical data and the resulting potential for discrimination.
 

4. FDA, Digital Health Software Precertification (Pre-Cert) Program (Jul. 18, 2019), www.fda.gov/medical-devices/digital-health/digital-health-software-precertification-pre-cert-program

5. FDA, Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan (Jan. 2021), https://www.fda.gov/media/145022/download

6. Wynants L, Van Calster B, Bonten MMJ, et al., “Prediction models for diagnosis and prognosis of COVID-19 infection: systematic review and critical appraisal,” BMJ 2020;369:m1328. Cited in Röösli E, Rice B, Hernandez-Boussard T, “Bias at Warp Speed: How AI May Contribute to the Disparities Gap in the Time of COVID-19,” Journal of the American Medical Informatics Association 28(1), January 2021, https://academic.oup.com/jamia/article/28/1/190/5893483

7. See note 6.
