01 June 2021

The New EU Artificial Intelligence Proposal and its Effect on Medical Device Regulation


Recently, the European Commission published a proposal for a Regulation of the European Parliament and of the Council on Artificial Intelligence (the "EU AI proposal"). The EU AI proposal aims to promote AI and innovation while protecting health, safety, and fundamental rights. This brief surveys the main implications of the EU AI proposal for EU medical device regulation[1] (which is itself currently undergoing regulatory change).

The Scope of the EU AI proposal

            The EU AI proposal would have extraterritorial application: it would obligate providers who develop or offer AI systems in the EU, users of AI systems located in the EU, and providers and users of AI systems located in a third country where the output produced by the system is used in the EU. Certain obligations also apply to other operators, such as authorized representatives, importers, and distributors.

The Classification of Medical Devices According to the EU AI Proposal

            The EU AI proposal classifies AI systems[2] according to the level of risk involved[3] and in most cases classifies AI-based medical devices as high-risk AI systems.[4] The risk categorization under the EU AI proposal does not necessarily align with or affect the risk classifications under existing medical device regulation. For example, a particular AI-based software used for the purpose of medical diagnosis may be classified as a “high-risk AI system” under the EU AI proposal and still be classified as a medium-risk medical device (class IIa or class IIb) under existing EU medical device regulation.

            In addition, certain AI-based software that falls within the Class I medical device classification, or that does not fall within the regulatory definition of a medical device, might not be classified as a high-risk AI system but may still be subject to a different classification under the EU AI proposal.

Medical Devices Regulation, the CE Mark and the EU AI Proposal

            The EU AI proposal explains that a high-risk AI system’s compliance with the proposal's regulatory requirements will also be reviewed as part of the conformity assessment under existing medical device regulation, and the system will be CE marked before being placed on the market. The CE marking will therefore indicate the product’s compliance with medical device regulations as well as with the EU AI proposal’s principles. However, the EU AI proposal does not fully address the interplay between the proposal’s requirements and the regulatory obligations deriving from the EU medical device regulation.

            The EU medical device regulation imposes detailed requirements regarding the development and marketing of medical devices at each stage – design and development, risk management, quality management, post-market activities, and so on. The EU AI proposal specifies the provider’s and user’s obligations at these stages as well. Practically, however, it is unclear how these two sets of regulatory requirements will combine into one regulatory process without overlapping or creating interpretation issues. It seems that further examination and tailoring of the regulations are needed in order to implement the proposed AI regulatory requirements properly in the medical field.

Regulatory Requirements for High Risk AI Systems

            The EU AI proposal specifies the following main regulatory requirements for high-risk AI systems:

  • Use of high-quality training, validation, and testing data, which should be examined for possible bias.
  • Establishment and implementation of quality management systems and risk management procedures.
  • Drawing-up of technical documentation.
  • Formulation of data governance and data management practices.
  • Design of products and systems that will enable effective human oversight.
  • Design and development of logging capabilities to ensure traceability during the product’s lifecycle and enable monitoring of the operation of the high-risk AI system.
  • Ensuring transparency and providing information to users as to how to use the system.
  • Ensuring robustness, accuracy, and cybersecurity.
  • Undergoing conformity assessment and enabling re-assessment of the system (in case of significant modifications).
  • Registering certain AI systems in the EU database.
  • Affixing CE marking and signing declaration of conformity.
  • Conducting post-market monitoring.
  • Collaborating with market surveillance authorities.

            As mentioned, since existing EU medical device regulation includes similar regulatory standards in some of these areas, implementing these AI requirements in the medical field might require certain adjustments.

            Finally, the EU AI proposal imposes fines of up to the higher of €30 million or 6% of a company's global annual turnover for certain infringements.

            To summarize, the EU AI proposal is an important step in the regulation of AI in the medical field from a global, harmonized perspective. Nonetheless, further examination and regulatory attention are required regarding the interplay between current medical device regulations and proposed AI regulatory requirements.

 

This client update provides general information only, does not constitute a legal opinion and may not be relied upon


[1] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1); Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).

[2] Under the EU AI proposal, the reference is to software developed with machine learning approaches, logic- and knowledge-based approaches, and/or statistical approaches, Bayesian estimation, and search and optimization methods, which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.

[3] The AI risk classification is as follows:

  1. Unacceptable risk (for example, social scoring and other specific categories listed) – these AI systems are prohibited.
  2. High-risk AI systems, such as medical devices – permitted, subject to the EU AI proposal's requirements, as described herein, including a conformity assessment.
  3. AI with specific transparency obligations – permitted, subject to notification and transparency requirements. For example, when bots are used, humans must be notified that they are interacting with an AI system, unless this is evident. Another requirement is to notify users if emotion recognition or categorization is applied to them.
  4. Minimal or no risk – no mandatory obligations apply. There may be voluntary codes of conduct for AI, stressing transparency.

[4] Article 6 of the EU AI proposal determines that an AI system is “high-risk” where it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonization legislation listed in Annex II and required to undergo a third-party conformity assessment pursuant to that legislation. Annex II includes the EU Regulations on Medical Devices and In Vitro Diagnostic Medical Devices. Accordingly, most software as a medical device will undergo a third-party assessment and will fall within the definition of a “high-risk AI system.”

 
