EU AI Act and Medical Devices: What SaMD Developers Need to Know (2026)

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and is being phased in progressively through 2026 and beyond. For companies developing AI-powered Software as a Medical Device (SaMD), it introduces a second, overlapping regulatory obligation that runs alongside, and interacts with, the existing requirements of EU MDR and IVDR.

This is not a distant compliance horizon. The Act becomes generally applicable on 2 August 2026, and the high-risk obligations for AI systems embedded in Notified Body-assessed medical devices follow on 2 August 2027. Companies that have not yet assessed their AI systems against the AI Act risk gaps in their technical documentation and conformity processes at exactly the moment Notified Bodies are beginning to incorporate AI Act considerations into their assessments.

This guide explains what the AI Act requires from SaMD developers, how it interacts with MDR and IVDR, and what practical steps manufacturers should be taking now.

For the underlying MDR compliance requirements for SaMD, see: Software as a Medical Device (SaMD): EU MDR Compliance Guide

1. Does the AI Act Apply to Your Software?

The AI Act applies to AI systems placed on the market or put into service in the EU. Article 3(1) defines an AI system as a machine-based system designed to operate with varying levels of autonomy that, for explicit or implicit objectives, infers from the inputs it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

This definition is intentionally broad. It covers:

  • Machine learning models (supervised, unsupervised, reinforcement learning)
  • Deep learning systems including convolutional neural networks used in medical imaging
  • Natural language processing tools used in clinical documentation or decision support
  • Bayesian classifiers and other probabilistic inference systems

It does not cover:

  • Traditional rule-based software with no learning or inference component
  • Software that executes fixed logic without adaptive behaviour

If your SaMD uses any form of machine learning or statistical inference to generate clinical outputs, the AI Act almost certainly applies.
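
To make that boundary concrete, here is a minimal, hypothetical contrast in Python. The function names, thresholds, and clinical variables are invented for illustration and carry no clinical validity:

```python
# Hypothetical illustration of the AI Act's "AI system" boundary.
# Thresholds and variable names are invented, not clinical logic.
from sklearn.linear_model import LogisticRegression

# Fixed, hand-authored rule: nothing is inferred from data, so
# software of this kind typically falls outside the definition.
def flag_abnormal_vitals(heart_rate: float, temp_c: float) -> bool:
    return heart_rate > 120 or temp_c > 38.5

# Learned classifier: its behaviour is inferred from training data
# and it generates predictions from inputs -- squarely inside the
# AI Act definition of an AI system.
def train_risk_model(X_train, y_train) -> LogisticRegression:
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)  # parameters inferred from data
    return model
```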

2. High-Risk AI Classification for Medical Devices

The AI Act categorises AI systems by risk level. For medical device manufacturers, the critical category is high-risk AI.

Under Article 6(1) of the AI Act, AI systems that are safety components of products covered by the Union harmonisation legislation listed in Annex I, or that are themselves such products, are classified as high-risk where the product must undergo third-party conformity assessment. MDR and IVDR are both listed in Annex I, so AI-based medical devices fall squarely within this rule.

This means: if your SaMD is a CE-marked medical device or IVD subject to Notified Body conformity assessment, or is a software component that performs a safety function within one, it is high-risk AI under the AI Act. No further classification analysis is required; the device's regulatory status determines it.

High-risk AI systems are subject to the full obligations of the AI Act, including:

  • Risk management system: an AI-specific risk management process, documented and integrated with the ISO 14971 risk management already required under MDR
  • Data and data governance: training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose; demographic and geographic representativeness must be documented (a minimal documentation sketch follows this list)
  • Technical documentation: a detailed record of the AI system’s design, development process, training methodology, validation approach, and performance characteristics
  • Transparency and instructions for use: users must be provided with clear information about the AI system’s capabilities, limitations, accuracy metrics, and circumstances under which human oversight is required
  • Human oversight: the system must be designed to allow human oversight and intervention; it must not undermine the ability of the operator or user to override, disregard, or reverse outputs
  • Accuracy, robustness, and cybersecurity: performance must be declared and validated; the system must be resilient to errors, faults, and adversarial manipulation
  • Conformity assessment: high-risk AI systems must undergo a conformity assessment before being placed on the market
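
As an illustration of what the data governance and accuracy obligations can look like in practice, the following is a minimal sketch of subgroup performance tabulation for a binary classifier. The column names (label, prediction, score) and grouping variables are assumptions made for the sketch; your technical file should use your own declared metrics and dataset schema.

```python
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Tabulate declared performance metrics per demographic subgroup.
    Assumes columns: label (0/1 ground truth), prediction (0/1),
    score (continuous model output), with both classes present
    in every subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "sensitivity": recall_score(sub["label"], sub["prediction"]),
            "specificity": recall_score(sub["label"], sub["prediction"],
                                        pos_label=0),
            "auc": roc_auc_score(sub["label"], sub["score"]),
        })
    return pd.DataFrame(rows)

# e.g. subgroup_performance(validation_df, "age_band")
#      subgroup_performance(validation_df, "sex")
```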

3. How the AI Act Interacts with MDR and IVDR

This is where the compliance picture becomes complex, and where early planning pays off.

The AI Act does not replace MDR or IVDR. Both regulatory frameworks apply simultaneously to AI-powered SaMD. However, the EU has designed a streamlined pathway for medical devices that are already subject to Notified Body review under MDR or IVDR.

Under Article 11(2) of the AI Act, AI systems that are regulated as medical devices benefit from a single technical documentation approach: the AI Act's Annex IV documentation requirements can be integrated into the existing MDR/IVDR technical file rather than creating a separate document set.

Similarly, for Class IIb and III medical devices (MDR) and Class C and D IVDs (IVDR), which are the most likely to contain high-risk AI, the Notified Body involvement already required under MDR/IVDR can cover the AI Act conformity assessment (Article 43(3)). The Notified Body acts as the relevant conformity assessment body for both frameworks.

In practice this means:

What changes for AI-powered SaMD under the AI Act:

  • Technical documentation must now explicitly address AI-specific elements: training data governance, model validation across subgroups, bias assessment, explainability approach, and human oversight mechanisms
  • Post-market monitoring must include AI performance monitoring: tracking model drift, accuracy degradation over time, and distribution shift in real-world data (see the drift sketch after this list)
  • Transparency obligations require new IFU content describing AI limitations and human oversight requirements
  • A fundamental rights impact assessment may be required for certain high-risk AI applications in healthcare
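
One widely used (though not mandated) drift metric is the population stability index (PSI), which compares the distribution of a model input or output score between the training reference and live data. The sketch below uses conventional thresholds that are industry rules of thumb, not AI Act requirements; your PMS plan should define and justify its own metrics.

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               observed: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference (e.g. training) sample and live data
    for a single model input feature or output score."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    observed_counts, _ = np.histogram(observed, bins=edges)
    # Convert to proportions; clip to avoid log(0) / division by zero.
    e = np.clip(expected_counts / expected_counts.sum(), 1e-6, None)
    o = np.clip(observed_counts / observed_counts.sum(), 1e-6, None)
    return float(np.sum((o - e) * np.log(o / e)))

# Conventional interpretation (rule of thumb, not a regulatory limit):
# PSI < 0.1 stable; 0.1-0.25 investigate; > 0.25 significant shift.
```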

What does not change:

  • The MDR/IVDR conformity assessment route remains the primary pathway
  • The Notified Body relationship established for MDR/IVDR CE marking remains the relevant body
  • ISO 14971 risk management, IEC 62304 lifecycle management, and clinical evaluation requirements are unchanged; AI Act risk management is additive, not a replacement

4. General Purpose AI (GPAI) Models in Medical Devices

A separate and increasingly relevant category is General Purpose AI (GPAI): large foundation models or multimodal AI systems that can be adapted or fine-tuned for specific applications.

If a SaMD developer builds on top of a GPAI model (for example, fine-tuning a large language model for clinical documentation, or adapting a vision foundation model for medical image analysis), both the GPAI model provider and the SaMD developer have obligations under the AI Act.

GPAI model providers must publish technical documentation and comply with copyright and transparency requirements. SaMD developers who deploy or fine-tune GPAI models are responsible for ensuring the resulting system meets all high-risk AI obligations, including data governance, validation, and clinical performance claims. The validation methodology for fine-tuned GPAI models in medical contexts is an area where regulatory guidance is still developing; early engagement with your Notified Body is strongly recommended.

5. Key Timelines

August 2024: AI Act enters into force.

February 2025: Prohibitions on unacceptable-risk AI systems apply. Not directly relevant for medical SaMD, but important for any AI used in patient-facing administrative processes.

August 2025: GPAI model obligations apply. SaMD developers building on foundation models must assess their exposure now.

August 2026: The AI Act becomes generally applicable and high-risk obligations for the standalone use cases listed in Annex III apply.

August 2027: High-risk obligations apply to AI systems that are, or are safety components of, Notified Body-assessed medical devices (Article 6(1)). This is the key deadline for medical device AI: from this date, new AI-powered SaMD placed on the EU market must comply with all high-risk AI requirements.

2027 and beyond: Notified Bodies designated under the AI Act will begin conducting AI Act-specific conformity assessments, and the intersection with MDR/IVDR NB assessments will become a standard part of the conformity process.

6. What to Do Now: A Practical Checklist

Classify your AI systems. Identify every AI component in your SaMD portfolio and confirm whether it meets the EU’s definition of an AI system. For each, document the risk classification and the rationale.
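
A lightweight way to keep that inventory auditable is a structured record per AI component. The fields and example values below are illustrative assumptions for the sketch, not terms defined by the AI Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """Illustrative inventory entry for one AI component in a SaMD portfolio."""
    name: str
    product: str                   # SaMD product the component ships in
    technique: str                 # e.g. "CNN", "logistic regression"
    meets_ai_system_definition: bool
    mdr_class: str                 # e.g. "IIa", "IIb", "III"
    high_risk_under_ai_act: bool   # Article 6(1) for NB-assessed devices
    rationale: str                 # documented basis for the classification

example = AISystemRecord(
    name="LesionSegmenter v2",            # hypothetical component
    product="DermaScreen SaMD",           # hypothetical product
    technique="CNN (U-Net)",
    meets_ai_system_definition=True,
    mdr_class="IIa",
    high_risk_under_ai_act=True,
    rationale="ML-based; device requires Notified Body assessment under MDR.",
)
```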

Assess your technical documentation gaps. Review your existing MDR/IVDR technical files against the AI Act's Annex IV requirements. Identify where AI-specific content (training data documentation, bias assessment, explainability approach) is missing or insufficient.

Review your data governance. The AI Act’s requirements for training data representativeness and bias documentation are more explicit than anything in MDR. If your training data governance is not documented at the level the AI Act requires, this is a gap that needs addressing before your next Notified Body audit.

Update your IFU and labelling. Transparency obligations mean users must be explicitly informed about AI limitations, performance metrics across relevant subgroups, and circumstances requiring human override. Most current SaMD IFUs are not written to this standard.

Engage your Notified Body. Ask your NB directly how they are approaching AI Act integration into MDR/IVDR assessments. Different NBs are at different stages of readiness, and early clarity on what they will expect prevents last-minute documentation gaps.

Build AI performance monitoring into your PMS. Post-market surveillance for AI-powered SaMD must now track model performance over time (the drift sketch in Section 3 is one illustrative starting point). If your PMS plan does not include AI-specific monitoring metrics, update it well ahead of your applicable compliance deadline.


Written by:
Diego Rodrigues, PhD

RA Specialist

Regulatory affairs specialist & CRA with expertise in EU MDR/IVDR, CE marking, Biological Evaluations (dental), and clinical investigations & technical documentation for MDs & IVDs.