By Adrien Laurent

EU AI Act High-Risk Compliance: Pharma & Medical Devices

Executive Summary

This report provides an in-depth analysis of how the EU’s Artificial Intelligence Act (Regulation (EU) 2024/1689) affects pharmaceutical and medical device sectors, with a focus on the August 2026 (and beyond) compliance deadline for high-risk AI systems. We examine the Act’s risk-based classification, identify which AI applications in pharma and medtech qualify as “high-risk,” and summarize the layered compliance requirements. Key findings include:

  • High-Risk Scope: AI systems embedded in or acting as devices regulated under the EU Medical Device Regulation (MDR) or In Vitro Diagnostic Regulation (IVDR) are automatically high-risk under the AI Act ([1]) ([2]). This includes common AI-driven medical products such as diagnostic imaging software, “software as a medical device” (SaMD) for patient monitoring, and any AI that is a safety-critical component of a device. In contrast, AI used internally for pharmaceutical R&D or manufacturing generally falls outside the Act’s high-risk scope, thanks to explicit exemptions for scientific research (Articles 2(6)–(8)) and because most drug-development AI tools are not regulated “products” under the Annex I frameworks ([3]) ([4]). See Table 1 (below) for examples of typical activities and roles.

  • Dual Compliance Model: Medical device manufacturers face a dual compliance obligation. They must meet both the AI Act’s high-risk requirements and the existing MDR/IVDR requirements simultaneously ([5]) ([6]). The AI Act does not replace MDR/IVDR; rather, it adds a new overlay focused on AI system safety, governance, and fundamental-rights protections ([5]) ([6]). Crucially, Article 8 of the AI Act allows integration of AI-specific testing, documentation, and procedures into the existing MDR conformity processes to avoid duplication ([7]) ([8]). The EU’s Medical Device Coordination Group has issued guidance (MDCG 2025-6) clarifying this interplay: device manufacturers should align AI Act risk assessments, data governance, and technical documentation with their MDR/IVDR QMS and clinical evaluations ([8]) ([9]).

  • Timeline – August 2026 and Beyond: The AI Act entered into force on 1 August 2024 ([10]). Most provisions, including those governing high-risk AI, are phased in from 2025 to 2027. Under the original timeline, all high-risk AI requirements become enforceable by 2 August 2026 ([11]) ([12]). However, a “Digital Omnibus” legislative proposal (Nov 2025) and guidance from EU bodies have signaled key extensions: AI systems that are, or are components of, regulated devices (MDR/IVDR) will effectively face full enforcement by 2 August 2027, and standalone high-risk AI by 2 December 2027 ([13]) ([14]). In practice, medical device makers should plan to comply by August 2026 for AI incorporated in non-regulated products, and by August 2027 for AI in CE-marked devices ([13]) ([12]). (Table 2 outlines the critical deadlines and transition periods.)

  • Core Obligations: High-risk AI systems must satisfy stringent requirements (Articles 9–15). Key obligations include establishing a continuous AI-specific risk management system (integrated into ISO 14971 for devices) ([15]); enforcing rigorous data governance (training/validation data must be high-quality, representative, and bias-controlled ([16]) ([17])); compiling comprehensive technical documentation (per Annex IV of the AI Act) before placing the system on the market ([7]) ([18]); implementing human oversight and transparency measures (e.g. informing users and allowing intervention) ([19]); and deploying robust accuracy, robustness, and cybersecurity testing ([20]). Providers of high-risk AI must also maintain a quality management system and post-market monitoring plan (Art.17 and Art.72) that align with, yet extend beyond, existing device/QS requirements ([21]) ([22]). For example, “functional traceability” logs (automated recording of AI behavior) are now required to detect bias or cyber threats ([22]).

  • Implementation Challenges: Compliance will require significant investments in process redesign, documentation, and training. Medical device companies must update their MDR/IVDR processes (clinical evaluation, PMS, QMS) to embed AI-specific checks ([8]). Pharmaceutical companies using AI for regulated clinical purposes (e.g. companion diagnostics, trial recruitment) may face uncertainty, as many AI uses in drug development are not clearly addressed by existing frameworks ([23]) ([4]). Industry experts note that pending guidance (from the Commission and EMA) will be critical to resolve ambiguities and trade-secret concerns ([24]) ([25]).

  • Outlook: In the near term, firms need to map all AI systems against the AI Act’s definitions and begin building “AI Act–ready” processes. The sector is working closely with regulators: the EMA issued an AI reflection paper (Sept 2024) and is developing guidelines, and some national authorities (e.g. CNIL in France) are drafting healthcare-specific AI guidance ([26]). Over the longer term, the AI Act’s emphasis on data quality, transparency, and safety is expected to foster greater public trust in AI-driven healthcare solutions, even as companies grapple with compliance costs ([17]) ([27]). This playbook details the step-by-step tasks that pharma and medtech stakeholders must undertake now to be ready for the August 2026 (and 2027) deadlines.

1. Introduction and Regulatory Background

1.1 The EU AI Act: A New Horizontal Regulation

In July 2024 the EU formally adopted Regulation (EU) 2024/1689 (the “AI Act”), the world’s first comprehensive AI framework ([28]) ([10]). The Act takes a horizontal, risk-based approach — it applies across industries (banking, HR, healthcare, etc.) and classifies AI systems by risk level ([29]) ([11]). Minimal-risk AI (e.g. video games, spam filters) is largely unregulated, while certain “transparency-risk” AI (e.g. chatbots disclosing their nature) carry mild obligations ([30]). At the top of the pyramid are “high-risk” AI systems, which must meet strict requirements (Annexes and Articles 8–15). By focusing on risk, the AI Act aims to protect people’s health, safety and fundamental rights without unduly hindering innovation ([30]) ([31]).

Historically, the healthcare and pharmaceutical industries were regulated by sector-specific “vertical” laws (e.g. MDR for devices, EMA guidelines for medicines) focused on safety and efficacy ([29]) ([32]). The AI Act overlays this with a horizontal layer. As one legal analysis points out, companies must now satisfy both the vertical country- or industry-specific rules and the AI Act’s horizontal obligations ([29]) ([5]). In regulators’ words: imagine two inspectors – the “hospital electrical inspector” (focused on traditional medical device standards like ISO 13485/GMP) and the “city building inspector” (enforcing broad AI safety and rights rules); a facility must pass both ([33]).

1.2 Structure and Timeline of the AI Act

The AI Act entered into force on 1 August 2024 ([10]). Its rules are phased in gradually: foundational provisions (definitions and prohibitions on certain AI practices) applied from February 2025 (six months after entry into force), obligations for general-purpose AI models followed in August 2025, and high-risk requirements are being rolled out between 2026 and 2027 ([34]) ([35]). According to the Commission, “most provisions – in particular those governing high-risk AI systems – will only start to apply from 2 August 2026 or 2 August 2027.” ([35]). In practice, 2 August 2026 is the key compliance date for standalone high-risk AI, and 2 August 2027 for AI that is part of products under existing EU laws. (These dates were later updated by the “Digital Omnibus” proposal: see Section 2.)

Importantly, the AI Act mandates that Commission guidelines be published by 2 February 2026 detailing which AI uses count as high-risk ([24]). To inform this, the Commission held a high-profile stakeholder consultation (June–July 2025) on examples of high-risk vs. non-high-risk AI ([36]) ([24]). Industry groups and regulators eagerly await these guidelines for authoritative interpretations (especially in complex fields like pharmaceutical R&D).

1.3 Scope: High-Risk AI Systems (Annexes I–III)

High-risk AI is defined in two main ways under the Act. First (Annex I/Art.6(1)), any AI system that is a safety component of a product, or is itself a product, falling under certain EU harmonization laws is high-risk ([1]) ([2]). Notably, Annex I lists the Medical Devices Regulation (MDR) and In Vitro Diagnostics Regulation (IVDR) as such laws ([37]), which means:

  • Any AI system integrated in a device governed by MDR/IVDR is high-risk. This includes AI-based diagnostic software (imaging, ECG analysis), AI-powered monitoring tools that affect treatment, and any AI control logic for life-critical devices. For example, an AI-driven chest X-ray screening tool would be classified as a high-risk AI medical device ([1]) ([2]).

  • By contrast, AI in products not covered by Annex I (e.g. fitness trackers or administrative tools) may avoid the “product safety” high-risk label, though they could still be caught by the second prong below in some cases.

Second (Annex III/Art.6(2)), specific AI use-cases affecting health, safety, or fundamental rights are high-risk by themselves. Annex III enumerates scenarios such as biometric ID, law enforcement profiling, credit scoring, etc. ([38]) ([39]). Of those, the ones potentially relevant to healthcare include eligibility evaluation for healthcare benefits, and insurance risk assessment for health/life ([40]). For instance, an AI tool determining whether a citizen qualifies for public health coverage would be high-risk. However, typical pharmaceutical or device R&D uses are not directly listed in Annex III. Thus for pharmaceutical AI, classification depends primarily on whether the AI system falls under a regulated product category or triggers one of the listed high-risk scenarios.

Importantly, there is no broad exemption for pharmaceuticals: unless covered by R&D exemptions (below) or not listed in either Annex, pharma AI remains subject to the Act’s general provisions. Clinician- and patient-facing tools (like AI for diagnosis or triage) effectively become medical devices and thus fall under the Act. Conversely, purely internal R&D or manufacturing support tools may be exempt or not high-risk (see Section 2.3 below on pharma specifics).

2. High-Risk Classification for Pharma and Medical Devices

2.1 Medical Devices (MedTech AI)

The AI Act makes clear that AI-enabled medical devices are high-risk. Article 6 and Annex I capture exactly these systems: “AI systems that are safety components of products (or are products themselves) falling under harmonisation legislation” – including EU Regulation 2017/745 (MDR) ([37]). In practical terms, any AI algorithm or software that is part of a CE-marked medical device (from next-gen pacemakers to SaMD) is high-risk. A RAPS analysis confirms this: for example, “AI-embedded imaging devices used to detect abnormalities would be considered high risk” ([1]). Moreover, any software placed on the EU market for a “medical/diagnostic purpose” must comply with MDR/IVDR, which automatically triggers AI Act provisions if the software is AI-based.

The MDCG (Medical Device Coordination Group) guidance (MDCG 2025-6) formalizes this: an AI system is “Medical Device AI (MDAI)” if it is used for medical purposes (standalone or embedded) and falls under MDR/IVDR; these are high-risk AI per the AI Act ([2]). Notably, Class I medical devices (which do not require Notified Body (NB) review under MDR) are not high-risk AI under the AI Act ([41]) – unless they operate in a way listed in Annex III (none of which explicitly include general patient monitoring or R&D). Additionally, in-house AI systems developed and used solely within a hospital or healthcare institution (with no external placing on market) typically “do not involve a notified body,” and so are not classed as high-risk ([41]).

Because of the overlap with MDR/IVDR, dual compliance is required. In effect, the AI Act operates as a “layered rule-set” on top of MDR/IVDR ([42]). Providers of AI-enabled devices must ensure their products meet both sets of requirements ([7]) ([6]). However, Article 8 encourages merging processes: conformity assessments under MDR can (and should) be leveraged for AI testing and documentation ([7]) ([8]). The MDCG FAQ confirms this: manufacturers can integrate AI Act obligations into existing QMS, clinical evaluations, and technical files rather than duplicating work ([43]) ([44]). For example, AI bias checks and cybersecurity tests can be added to the clinical evaluation plan, and a single unified technical file can cover both MDR and AI Act requirements ([9]) ([20]).

Table 1. Roles and Obligations under the AI Act (Pharma/MedTech examples)

| Company Activity | Role under AI Act | Key Obligations (High-Risk AI) |
| --- | --- | --- |
| Purchasing commercial AI tool (e.g. an AI-powered pharmacovigilance database or imaging software) | Deployer | Use per IFU, ensure human oversight, monitor performance, report incidents; verify valid CE mark and Declaration of Conformity if tool is high-risk ([45]). |
| Developing AI tool in-house for company use (e.g. AI for GMP process control or QC in pharma) | Provider + Deployer | All provider duties (full QMS, Annex IV documentation, logging, testing, regulatory plan) and deployer duties (oversight, monitoring) ([46]). Company assumes full liability. |
| Supplying AI tool to hospital (e.g. AI endpoint in clinical trial) | Provider | All provider duties (see above). Must furnish the deployer (hospital) with Instructions for Use (Art.13), maintain post-market monitoring, and have an incident-reporting system ([47]). |
| Re-branding (“white-label”) another vendor’s AI under own brand | Provider | All provider duties. Critically, placing it on the market under your own name makes you the Provider legally ([48]). Must flow down QMS controls and obtain complete technical documentation from the original developer. |

Source: Council on Pharmacy Standards (adapted) ([45]) ([46]).

Table 1 illustrates how the AI Act distinguishes Providers (those who develop or rebrand the AI) and Deployers (those who use it under their authority) ([49]) ([45]). Deployers have significant duties (oversight, monitoring, incident-reporting ([45])), but Providers bear the heaviest burden: they must integrate AI into their QMS, risk management, and all required documentation ([46]). These roles apply equally to pharma companies: a pharma firm that develops an AI diagnostic would be a Provider, whereas a hospital using that AI in patient care is a Deployer.

2.2 Pharmaceutical Sector (Life Sciences AI)

The applicability of the AI Act to pharmaceutical companies is more nuanced. Unlike devices, medicines are not explicitly harmonized with an AI Act annex. Thus, AI systems used in drug R&D or manufacturing are not automatically high-risk. Pharma R&D/production AI falls under the AI Act only if it meets one of the general criteria: being a regulated “product” subject to third-party conformity assessment (unlikely), or belonging to an Annex III use-case (e.g. patient triage systems would fall under device/medtech rules rather than medicines law, and the insurance risk-assessment use case is narrow).

Notably, the Act provides broad research exemptions. Article 2(6)–(8) exempt AI tools developed “for the sole purpose of scientific research and development” or used solely in testing prior to placing on market ([3]). Pharma industry groups (EFPIA, AI Office) interpret this to cover most pre-commercial R&D tools ([3]) ([4]). For example, EFPIA’s statement clarifies: “AI-based tools used in the R&D of medicines” are excluded by the AI Act’s research exemptions, and if that exemption did not apply, “the majority of these tools… are not regulated under any Annex I framework or listed in Annex III, and therefore cannot legally qualify as high-risk” ([4]). In other words, an AI for lead discovery or trial recruitment, developed internally by a drug company, is generally not high-risk under the Act unless it happens to fall under medical device law or other regulated scope.

However, if a pharma company creates or supplies an AI system that does serve a healthcare/medical function, then the high-risk rules kick in. For example, an AI tool that analyzes patient data to support dosing decisions (a decision-support function in a clinical setting) could be deemed a medical device (SaMD) and thus high-risk. Similarly, AI used in pharmacovigilance (safety signal detection) might not fall under MDR, but could become high-risk if extended to patient treatment. The key point is that AI in the medicines lifecycle will face compliance obligations where it overlaps with medical product regulation ([17]) ([25]).

Industry experts note significant uncertainty and call for clear guidance. As one article observes, “pharmaceutical companies face significant uncertainties regarding compliance implementation and expect more guidance from authorities” ([50]). Both EFPIA and legal advisors are actively engaging with regulators. The Commission’s classification guidelines due Feb 2026 will be especially important for pharma: they will list practical examples of what counts (or doesn’t count) as high-risk ([24]) ([51]). In parallel, EMA is drafting its own technical guidance: a ‘Reflection Paper’ on AI in medicines (released Sept 2024) and a multi-year AI workplan indicate plans for guidance on AI in clinical development and pharmacovigilance ([52]) ([53]). These will “provide a new layer of AI oversight” tailored to drug lifecycle that complements the AI Act ([54]).

Table 2. Regulation and Timeline

| Event/Provision | Date or Deadline | Notes |
| --- | --- | --- |
| AI Act enters into force (Official Journal) | 1 Aug 2024 ([10]) | Act applies immediately to some provisions (e.g. prohibited AI). |
| High-risk AI obligations (Art. 8–27) apply | 2 Aug 2026 (general) | Originally two years after entry into force; updated proposals extend to 2027/2028 ([14]) ([35]). |
| High-risk AI in MDR/IVDR devices applies | 2 Aug 2027 (for devices) ([13]) | Conformity-assessed medical/diagnostic devices; extension to Dec 2027 proposed by the Digital Omnibus ([55]). |
| Commission guidelines for high-risk classification | 2 Feb 2026 (mandated) ([24]) | Must give practical examples (high-risk vs. not). Consultation held June–July 2025 ([36]). |
| MDCG guidance published (advisory FAQ) | Jun 2025 ([56]) | Clarifies AI Act–MDR interplay for devices (MDCG 2025-6). |
| Enforcement deadline for full AI Act compliance | Aug 2027 (original) | Majority of rules (especially for high-risk) in effect; extension to Dec 2027/2028 pending amendments ([14]) ([55]). |

3. Key Compliance Requirements for High-Risk AI

Once an AI system is classified as high-risk, the Act imposes comprehensive obligations (Chapter III, Sections 2–3). These cover the entire AI lifecycle, from design and training through deployment and post-market surveillance. Many requirements mirror existing device engineering principles but with AI-specific enhancements. We summarize the main areas below (see Articles 8–15 of the AI Act for the full requirements):

  • Risk Management (Art.9): Providers must establish an AI-specific risk management system throughout the product’s life ([57]) ([15]). This means continually identifying known and foreseeable risks (including from intended use and misuse), estimating their probability and severity, and applying mitigations ([15]). Merely complying with ISO 14971 for device safety is insufficient; AI hazards must be included (e.g. data drift, model degradation, adversarial attacks, algorithmic bias) ([58]). The outputs of the risk analysis (residual risks, validation evidence) must be documented. In practice, device manufacturers are advised to extend their ISO 14971 files to explicitly address AI failure modes ([58]).

  • Data and Data Governance (Art.10): Perhaps the most burdensome requirement is on data quality. Training, validation, and testing datasets must meet strict quality criteria ([16]). For example, data must be relevant and sufficiently representative of the EU target population ([59]). Providers must identify and mitigate biases in the data: e.g., if an AI model was trained on data from one demographic group, it must be supplemented with data from underrepresented groups to ensure generalizability ([59]). Detailed data governance processes must be in place (covering data collection, selection, curation, labeling, and quality checks) ([16]). In essence, the AI Act turns data curation into a formal compliance activity: metadata and lineage must be documented to demonstrate that “training, validation and testing datasets are of adequate quality” ([16]). Note that when health data is involved, GDPR still applies: companies often need anonymization or patient consent plans that satisfy both laws ([60]). (A small illustrative representativeness check is sketched just after this list.)

  • Technical Documentation (Art.11 and Annex IV): Before placing a high-risk AI system on the EU market, Providers must compile a comprehensive technical file ([7]) ([18]). This includes a description of the system’s purpose, design, training process, risk analysis results, test reports, and how it meets each AI Act requirement. Crucially, the documentation must show compliance of the AI system with all applicable requirements in a clear and organized way ([61]). The Annex IV template is more detailed than MDR’s Annexes: for example, it specifically calls for documentation of training data sets, validation methods, and measures taken to ensure robustness and bias control. The documentation must be kept up-to-date through the product’s lifetime ([61]).

  • Logging (Art.12): High-risk AI systems must produce logs of their activity through normal operation ([62]). These logs (sometimes called “functional traceability”) allow auditors and supervisors to see what the AI did with each input, facilitating investigation of incidents and analysis of system behavior. Medical device companies should integrate automated logging into their software: for instance, recording model outputs, versions, and data pipeline actions at runtime. The goal is to enable retrospective analysis of any safety issues. (A minimal logging sketch appears at the end of this section.)

  • Transparency and Information to Users (Art.13): Providers must ensure that deployers (and sometimes end-users) receive clear information. For example, AI chatbots must identify themselves as AI; more relevantly here, users of high-risk AI must be explicitly informed that they are interacting with an AI system and given instructions for safe use ([7]). For image recognition or decision-support tools, user manuals must explain the system’s intended use, limitations, and required human oversight processes. If the system interacts with people (e.g. patient-facing), the provider must make known key performance metrics like accuracy and known biases.

  • Human Oversight (Art.14): The AI Act mandates that high-risk systems be used under “appropriate human oversight.” In practice, this means the system must be designed so that a competent human operator can monitor and intervene if needed. For medical AI, ongoing human-in-the-loop checks are usually inherent (a doctor won’t blindly accept an unexplained diagnosis). Still, the provider must ensure the interface and use instructions permit easy human intervention. Guidance emphasizes this explicitly: e.g. MDCG notes that “high-risk MDAI must allow human intervention, provide explanations of outputs, and include clear user instructions” ([19]). Record-keeping of oversight actions may also be required.

  • Accuracy, Robustness, Cybersecurity (Art.15): High-risk AI must be resilient. Providers must test and document accuracy (it should meet high performance on its intended tasks), robustness (ability to handle input perturbations or changes), and cybersecurity (protection against tampering) ([62]) ([20]). For example, a medical imaging AI would need documented validation on large test datasets (accuracy), checks against adversarial noise or unanticipated inputs (robustness), and measures to prevent unauthorized model changes. The MDCG guidance clarifies that these AI-specific tests are in addition to the normal clinical/performance evaluations required by MDR/IVDR ([20]). Thus manufacturers should update their validation plans (CEP/PEP) to explicitly include AI Act criteria.

  • Quality Management System (Art.17): Providers must operate under a formal Quality Management System that incorporates AI Act obligations ([21]). Article 17 requires documenting the QMS policies, procedures, and responsibilities for compliance. In effect, device makers must extend their ISO 13485 QMS to cover AI lifecycle tasks (model updates, data version control, etc.). The MDCG guidance reiterates that AI elements like bias controls and algorithm updates should be embedded into the existing QMS/risk processes ([43]). Smaller organizations may need to update SOPs and train staff on new AI controls.

  • Conformity Assessment, CE Marking: Before placing a high-risk AI on the market, the Provider must undergo a conformity assessment ([63]). If the AI is itself a device, this may go through a Notified Body under MDR/IVDR; Articles 43–47 allow using harmonized standards or common specifications to show compliance. Ultimately, the provider must issue an EU Declaration of Conformity and affix the CE mark to the product ([18]) ([6]). (Even if the AI is a component of a device, its compliance must be assured in parallel.) A Deployer is then obliged to check that the AI has a CE mark and valid declaration before use ([64]). Notably, MDCG confirms that integration of the two assessments is encouraged: a manufacturer can ask a single Notified Body to handle both MDR and AI Act aspects, streamlining the process ([63]).

  • Post-Market Surveillance (Art.72): Providers of high-risk AI must implement an ongoing monitoring system to collect data on real-world performance ([65]) ([22]). Much like post-market surveillance for devices, this includes capturing usage data, trends in errors or incidents, and feedback from deployers. Under AI Act Article 72, the provider must keep a Post-Market Monitoring (PMS) plan specifically for AI. The MDCG guidance expands this: AI PMS should track interactions with other AI systems and include automated logs to detect drift or bias after release ([66]). Complaints and incident reports must be reviewed and used for continuous improvement. Importantly, degradations in model performance over time or emerging bias issues discovered in the field must trigger re-evaluation.
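
To make the Article 10 data-governance duties more concrete, the short Python sketch below compares the composition of a training dataset against a reference population and flags under-represented groups. It is an illustrative outline only, not a method prescribed by the AI Act: the `age_band` field, the reference shares, and the 0.5 tolerance threshold are assumptions chosen for the example; a real programme would use validated demographic references and proper statistical tests.

```python
# Illustrative sketch only: flag demographic groups that appear in the
# training data at less than `tolerance` times their reference share.
from collections import Counter

def representativeness_report(records, field, reference, tolerance=0.5):
    """Compare observed group shares in `records` against a reference distribution."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    report = {}
    for group, ref_share in reference.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed_share": round(observed, 3),
            "reference_share": ref_share,
            "under_represented": observed < tolerance * ref_share,
        }
    return report

# Usage with made-up records and an assumed reference distribution.
training_records = [
    {"age_band": "18-40"}, {"age_band": "18-40"},
    {"age_band": "18-40"}, {"age_band": "41-65"},
]
reference_population = {"18-40": 0.35, "41-65": 0.40, "66+": 0.25}
for group, row in representativeness_report(
        training_records, "age_band", reference_population).items():
    print(group, row)
```

Such a check would feed the Annex IV documentation (dataset composition and bias mitigation) rather than replace the broader data-governance process the Act requires.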

Overall, the development and documentation demands are substantial. High-risk AI providers must treat the entire AI lifecycle — data collection, design, validation, deployment, and learning — as a regulated engineering process. Many companies will need cross-disciplinary teams (software engineers, data scientists, compliance experts) to ensure all boxes are ticked. However, integrating with existing processes (device or pharma quality systems) is explicitly encouraged, not penalized ([8]) ([43]).
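
Before turning to the device-specific picture, the following minimal sketch shows one way a provider might implement Article 12-style activity logging: each inference event is recorded with a timestamp, the model version, a hash of the input (keeping raw patient data out of the log), and the output. The function name, file path, and JSON-lines format are illustrative assumptions, not requirements of the Act.

```python
# Illustrative sketch of automated event logging for a deployed AI system.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_inference_log.jsonl"  # hypothetical log location

def log_inference(model_version: str, input_payload: dict, output: dict) -> dict:
    """Append one inference event to a JSON-lines activity log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input rather than storing it, to limit personal data in logs.
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode("utf-8")
        ).hexdigest(),
        "output": output,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry

# Usage with hypothetical identifiers.
log_inference(
    model_version="cxr-classifier-2.3.1",
    input_payload={"study_id": "STUDY-0001"},
    output={"finding": "nodule suspected", "score": 0.87},
)
```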

4. Impact on MedTech Companies: Dual Compliance with MDR/IVDR

For medical device firms and AI software as a medical device (SaMD) developers, the AI Act has introduced a “dual compliance” framework ([5]). They must now meet both the legacy device law (MDR/IVDR) and the new AI regime in tandem. Key points:

  • Alignment with MDR/IVDR Classification: The determination of whether an AI system is high-risk under the AI Act is explicitly tied to its MDR/IVDR classification ([2]). If a software tool falls under MDR Class IIa, IIb, or III (or IVDR Class B–D) and relies on AI, it is high-risk AI. Thus, the existing device risk classification effectively determines the AI Act risk category. (Conversely, software-only tools that are not classed as medical devices may not be high-risk unless their use case is in Annex III.) In short, if a Notified Body is needed under MDR to certify the device, the same device’s AI elements will need AI Act certification.

  • Concurrent Conformity Assessment: High-risk AI systems require a conformity assessment per Article 43 of the AI Act. Fortunately, this can often be done “alongside” an MDR assessment. As RAPS notes, “these conformity assessments are different than those specified under the MDR and IVDR, though they may be undertaken by the same notified body if the body is designated under all the relevant texts” ([63]). The MDCG guidance makes it clear that the involvement of an NB triggers simultaneous compliance checks. This means device makers should coordinate with their NBs to cover AI Act checklists in their audit process. The final CE mark must reflect compliance with both laws.

  • Harmonizing Documentation: To minimize duplication, manufacturers should create unified documentation. We saw above that a single technical file/Annex IV dossier can be prepared addressing both MDR (e.g. Annex II device file) and AI Act metrics ([18]) ([9]). For example, risk analyses under ISO 14971 can include AI failure modes, and the device’s clinical evaluation report (CER) can discuss AI data quality and robustness. The MDCG guidance stresses this strategy: “AI-specific requirements on data bias… should be embedded into existing QMS processes” ([9]), and a “single technical file must address both AI and device obligations” ([9]). Table 3 (below) summarizes how key processes dovetail.

  • Quality and Post-Market Systems: Article 17’s QMS requirement translates into extending the existing ISO 13485-based QMS. Manufacturers will need to add AI governance processes, supplier management for AI components, version control for models, and procedures for model updates. Post-market surveillance plans must be updated to include AI metrics (e.g. drift rates, bias reports) alongside conventional complaint tracking. Regulators expect that European standards will evolve (CEN/CENELEC JTC21 is writing AI standards on risk management and data) to support these needs. (A simple drift-check sketch follows at the end of this section.)

  • Human Oversight and Labeling: MedDeviceGuide and MDCG highlight that Instructions for Use (IFU) must explicitly cover AI aspects. The AI Act requires that deployers be instructed in how to monitor AI use. Thus, IFUs should instruct clinicians on the importance of exercising their own medical judgment and watching for AI errors. All CE labeling now has to note the AI component (CE certificates will include mention of AI Act compliance). Deployers must ensure a CE mark is present; absence of an AI Act CE mark (or Declaration of Conformity) should be a red flag ([64]).

Table 3. Dual MDR/AI-Act Obligations for Medical Device AI

| Regulatory Aspect | MDR/IVDR Requirements | AI Act Requirements | Combined Compliance Approach |
| --- | --- | --- | --- |
| Risk Management | ISO 14971 hazard analysis for device safety | AI-specific risk management (Art.9) including bias, drift, adversarial threats ([15]) | Integrate AI risk hazards into the ISO 14971 file ([58]). Document all AI-specific mitigations alongside device risk controls. |
| Clinical Evaluation | Clinical Evaluation Report (CER) on device | Data quality controls and performance testing (Art.10, Art.15) ([16]) ([20]) | Extend CER/performance evaluation to cover dataset representativeness, bias, and robustness testing as per the AI Act. |
| Quality Management System | ISO 13485 QMS | Mandatory AI QMS (Art.17) with documented compliance strategy ([21]) | Augment existing QMS with AI procedures (data management, model change control, post-market AI review) ([43]). |
| Technical Documentation | Technical File (Annex II/III MDR) | AI Technical File (Annex IV AI Act) ([18]) | Create unified documentation that satisfies Annex II/III (device) and Annex IV (AI) concurrently ([18]). |
| Conformity Assessment | Notified Body audit and CE marking under MDR/IVDR | Notified Body audit under AI Act, Declaration of Conformity (Ch. V) | Combine audits where possible; use the same NB for both approvals. Provide a single CE mark covering both sets of rules. |
| Post-market Surveillance | Periodic Safety Update Reports (PSUR), PMCF | Article 72 PMS plan for AI (incident reporting, real-world performance) | Integrate AI performance metrics and logging into PMS activities, e.g. track model performance drift in PMCF data. |

Sources: MedDeviceGuide ([58]) ([43]), MDCG 2025-6 guidance ([9]) ([22]), AI Act text (Annexes).

By following combined procedures as outlined above, medtech companies can meet both regimes without completely separate processes. The AI Act itself encourages this synergy (Article 8 explicitly allows merging AI tests into MDR procedures ([7])). In short, manufacturers should view the AI Act provisions as an extension of device safety practices — albeit one that demands extra rigor on data and documentation.
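
As an illustration of the drift monitoring mentioned above, the sketch below computes a Population Stability Index (PSI) over binned model scores, comparing validation-time behaviour with field data. PSI is one common drift metric, not a method mandated by the AI Act or MDCG guidance; the bin count and the 0.25 review threshold mentioned in the comment are conventional assumptions rather than regulatory values.

```python
# Illustrative sketch of a post-market drift check on model output scores.
import math

def population_stability_index(expected_scores, actual_scores, bins=10):
    """Compare two score distributions on [0, 1]; a larger PSI indicates more drift."""
    def bin_shares(scores):
        counts = [0] * bins
        for s in scores:
            counts[min(int(s * bins), bins - 1)] += 1
        total = len(scores)
        # Floor each share to avoid log(0) for empty bins.
        return [max(c / total, 1e-6) for c in counts]

    e_shares = bin_shares(expected_scores)
    a_shares = bin_shares(actual_scores)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_shares, a_shares))

# Usage: a PSI above roughly 0.25 is a common (non-regulatory) trigger for review.
validation_scores = [0.12, 0.25, 0.40, 0.55, 0.70, 0.85, 0.90]
field_scores = [0.55, 0.62, 0.70, 0.78, 0.85, 0.91, 0.95]
print(round(population_stability_index(validation_scores, field_scores), 3))
```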

5. Data Governance and Documentation in Pharma

For pharmaceutical applications, the AI Act’s data requirements often resemble GDPR but with an added emphasis on model fairness and traceability ([60]). Pharma companies often rely on large datasets (clinical trials, real-world evidence, genomics) which trigger overlap of AI Act and data protection rules. Key considerations include:

  • Alignment with GDPR: If AI training uses patient data, GDPR compliance is mandatory (lawful basis, consent/privacy notices, data minimization). The AI Act does not override privacy law: in fact, it reinforces obligations. Pharma firms must ensure robust anonymization or pseudonymization where feasible ([67]), and update privacy notices to mention automated decision-making transparency (per Art.13 GDPR and AI Act). French authorities are already preparing sector-specific guidelines to reconcile AI Act transparency duties with patient privacy rights ([26]).

  • Data Quality Management: Pharmacovigilance and drug monitoring AI must use representative patient populations. For example, an AI model predicting adverse events should be trained on diverse demographic data. Companies should implement documented, traceable data pipelines: label data sources, record preprocessing steps, and keep metadata on dataset composition ([16]) ([60]). This not only meets the AI Act’s Art.10 data governance (relevance, bias mitigation) but also facilitates regulatory audits.

  • Cross-Border Data Flows: Many pharma AI initiatives involve multinational data (global trials, international health records). This raises GDPR and cross-jurisdiction issues. The Clifford Chance analysis notes that dynamic data-sharing protocols are needed: standard contractual clauses, interoperability frameworks, and clear roles (controller vs. processor vs. AI Provider) ([68]). Consent language must anticipate AI uses: some use cases may require broad consent or new legal bases (e.g. R&D exception) to allow cross-border research. Effective data stewardship is as important as model validation.

  • Audit Trails: Article 12 logging overlaps with data lineage requirements. For a pharma AI, logs might track every model inference in clinical trials or real-time patient monitoring. Maintaining these logs is critical in case of audits or investigations. Companies should plan the logging architecture (secure, tamper-evident) as part of system design. (A small tamper-evident logging sketch follows this list.)

  • Documentation and Explainability: Both regulators and patients will demand transparency. For a drug-development AI, this means documenting how models make predictions and ensuring decisions can be explained in understandable terms when deployed ([69]). Indeed, marketing AI systems will require proven interpretability measures, which also help comply with Art.13’s transparency requirement. Overall, pharma firms must elevate their data governance (traditionally concerned with quality and integrity) to a strategic, auditable compliance function ([60]) ([69]).
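
The audit-trail point above can be illustrated with a simple hash-chaining pattern: each log entry commits to the hash of the previous entry, so any retroactive edit breaks verification. This is a conceptual sketch under simplifying assumptions (in-memory storage, SHA-256, JSON-serialisable records); a production system would add durable storage, access controls, and trusted time-stamping.

```python
# Illustrative sketch of a tamper-evident, hash-chained audit log.
import hashlib
import json

class HashChainedLog:
    """Append-only log where each entry stores the previous entry's hash."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, record: dict) -> dict:
        body = {"record": record, "prev_hash": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode("utf-8")
        ).hexdigest()
        entry = dict(body, hash=digest)
        self._last_hash = digest
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode("utf-8")
            ).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

# Usage with hypothetical events.
audit = HashChainedLog()
audit.append({"event": "model_inference", "trial_id": "T-001"})
audit.append({"event": "model_update", "version": "1.1"})
print(audit.verify())  # True unless a stored entry has been altered
```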

6. Current State & Sector Perspectives

As of early 2026, the EU AI Act is law but in early implementation. Many companies are still evaluating the scope of impact. Expert insights and case studies highlight the sector’s concerns:

  • Industry Uncertainty: A legal analysis notes that pharma executives are “eagerly anticipating” the Commission’s guidance to understand how the Act applies to drug R&D ([70]). Without clear rules, companies have been preparing but pausing major AI initiatives. Smaller, historically agile firms such as biotech startups may adapt more quickly, whereas large legacy pharma companies (with entrenched compliance systems) take a more cautious approach ([71]).

  • Regulatory Alignment Efforts: EMA’s August 2023 Reflection Paper and multi-year AI workplan underscore the importance regulators place on this topic ([72]) ([73]). EMA is set to issue specific guidance on AI in clinical development and pharmacovigilance (2024–2025). Similarly, the European Data Protection Board and national GDPR authorities are issuing sectoral AI guidance. For example, France’s CNIL is developing an “AI in healthcare” guide to align GDPR and AI Act obligations ([26]). These efforts should clarify grey areas (e.g. how to treat AI patient-risk stratification tools that straddle Pharma/MedTech).

  • Global Context: Outside the EU, other jurisdictions are watching the AI Act. In the US, the FDA is pursuing “Good Machine Learning Practice” and has approved hundreds of AI medical devices under its existing FD&C Act, but lacks a unified high-risk AI statute. EU pharma companies that already sell globally (e.g. 690 AI/ML devices in FDA’s database ([74])) will likely need to meet EU’s higher data/QA standards. Some medium-term risk exists that strict EU rules could slow innovation, but as Navarro (Clifford Chance) argues, they may also spur trust and investment by providing legal clarity ([75]) ([27]).

  • Case Example – AI Imaging: Consider a real device: an AI algorithm for mammography screening already on the EU market. This software is an MDR Class IIb SaMD, hence high-risk. The manufacturer must have performed both the MDR conformity procedure (with a Notified Body) and an AI Act conformity assessment. They will have documented patient datasets, run bias checks (ensuring the model works across genders and ethnicities), and written up an AI Act technical file. The device’s IFU will now explicitly mention that decisions are AI-supported and recommend radiologist review. Once deployed, any false-negative or suspicious cases must be logged and reported as serious incidents to the relevant authorities (per both MDR vigilance and AI Act PMS). This illustrates how the AI Act adds new tasks (e.g. logging system outputs for audit, declaring the CE mark under the AI Act) even for an established device.

  • Emerging Partnerships: Joint initiatives are forming. Medical device companies are partnering with AI startups to update legacy products to AI Act compliance. Pharmaceutical firms are investing in “AI validation” roles or consulting (e.g. AI auditing firms). Academic–industry consortia are mapping best practices: for example, the IEEE P7003 (Algorithmic Bias) and P7009 (Failure Transparency) standards are being considered for adoption. Efforts like the EU’s AI Office and Big Data Value Association (BDVA) are publishing industry guidelines and organizing workshops specifically on AI in life sciences.

  • Statistics & Trends: (Available data on AI adoption in life sciences is limited. However, a 2025 survey by Pharmaceutical Technology found that ~60% of EU-based pharma companies plan to implement risk management systems for AI by 2027, and ~45% expected to overhaul QMS processes for AI ([70]). The number of AI medical device approvals in the EU is also rising – tens of novel AI-enabled diagnostics were CE-marked in 2023-2025, many of which will now need to comply with the Act’s enhanced rules.)

In summary, the current state is one of active adjustment. The compliance deadline looms, so companies are conducting gap analyses and early pilot audits. No major enforcement actions (fines or bans) have been reported yet, but regulators have made clear they will inspect AI governance. Competent Authorities in member states are setting up helpdesks (AI Act Service Desk) and national AI Supervisory Authorities are being designated. Companies that wait until mid-2026 to begin compliance work will likely find themselves behind schedule.

7. Implications and Future Outlook

7.1 Implications for Innovation and Healthcare

The AI Act’s strict regime has sparked debate over its impact on life sciences innovation. Critics warn of compliance costs and slower product cycles; supporters argue it will raise standards and consumer trust.

  • Innovation Hurdles: Especially for small biotech and medtech innovators, meeting the technical documentation and QMS requirements is burdensome ([75]). There are concerns it may deter some startups (e.g. an EU medical AI startup recently postponed a market launch to allocate resources for AI Act compliance). The pharmaceutical industry has voiced similar worries: EFPIA explicitly cautions that mis-specified high-risk rules could “discourage innovation by imposing burdensome requirements” ([76]). Some drug companies are lobbying for exemptions or phased approaches (e.g. applying the regulation only to final marketable AI “products,” not to R&D tools) ([76]) ([25]).

  • Economic and Safety Benefits: On the other hand, proponents highlight potential positive effects. A harmonized EU framework may create a predictable market for AI-enhanced medical products, encouraging investment in safe AI. Companies may benefit from a clear legal path for new AI-driven diagnostics, rather than a patchwork of national rules. The emphasis on human rights and transparency aims to avoid public backlash from AI errors (e.g. misdiagnosis by a black-box algorithm), preserving patient safety. In theory, higher trust could speed adoption of beneficial technologies (e.g. AI in radiology or AI-driven drug repurposing tools) in the long run.

  • Global Leadership: Some analysts view the AI Act as positioning Europe as a global leader in “safe healthcare AI.” For example, Navarro argues that Europe’s stringent regulation is ultimately an advantage for EU-based AI health innovators, as it sets a gold standard for safety and credibility ([27]). Indeed, a cohesive regulatory regime across 27 countries can make the EU an attractive market for high-quality AI solutions (similar to how the GDPR’s unified data protection rules boosted Europe’s digital cohesion).

7.2 Long-Term Perspectives

  • Regulatory Evolution: The AI Act is designed to evolve. The European Artificial Intelligence Board (EAIB) under the Act is tasked with yearly review and recommending changes. Amending the law (via delegated acts) is possible; indeed, the recent Digital Omnibus proposal extended deadlines and relaxed some provisions ([55]). In life sciences, we expect further harmonization. For instance, future modifications may explicitly incorporate other healthcare regulations (clinical trials, Good Manufacturing Practice for AI-based manufacturing, etc.).

  • Standards and Technology: Standardization will play a big role. Ongoing ISO/IEC JTC1 initiatives (e.g. AI management system standard ISO 42001) and CEN/CENELEC standards for AI in medical devices will provide technical guidance that companies can follow for “presumption of conformity” (Article 42). On the technology side, advances in explainable AI (XAI) and automated compliance tools are likely. AI developers are already building features to produce audit trails or “AI control rooms” for monitoring deployed models.

  • Marketplace Shifts: In the medium term, early adopters of AI Act compliance may gain market share by marketing their AI solutions as “certified AI” under EU law. Pharma companies may start listing AI governance maturity as a competitive factor in B2B deals, mergers, and acquisitions (due diligence increasingly checks GDPR/AI compliance ([77])). Big data partnerships (e.g. sharing clinical trial and real-world data) will likely formalize with robust legal contracts reflecting AI Act obligations ([77]).

  • Skills and Workforce: Compliance needs will drive new roles. We foresee growth in “AI Quality Assurance managers,” “AI ethics officers”, and other hybrid governance positions within pharma and medtech firms. Specialist consultancies in AI regulatory affairs (similar to existing GMP/QMS consulting) are emerging. Educational programs at universities and professional schools are updating curricula to include AI regulation in healthcare.

8. Conclusion

The EU AI Act represents a profound shift for the pharmaceutical and medical device sectors. By classifying AI systems by risk and imposing concrete obligations, it forces all stakeholders to confront AI’s challenges head-on. For medtech manufacturers, the path is clear – their AI-enabled products are high-risk and must comply by August 2026 under the original timeline (extended to 2027/28). This means integrating AI checks into existing medical device processes and working closely with Notified Bodies. For pharma companies, much depends on whether their AI falls under existing regulated product definitions. Tools used internally for research may be largely outside the Act’s scope, but any AI that crosses into clinical territory will be regulated.

While this transition is demanding, it also promises benefits: safer, more trustworthy AI in healthcare, and legal clarity that can boost long-term innovation. The extensive compliance requirements (risk management, data governance, transparency, QMS, etc.) will become new industry norms. Companies should not wait – the clock is ticking toward the first major deadline. By August 2026 (and certainly by 2027 for medical devices), EU authorities will expect all high-risk AI systems on the market to meet the Act’s standards. The playbook above has outlined the steps companies must take now to get ready: classify their AI, update processes, complete their documentation, and engage with regulators.

In conclusion, the AI Act imposes new “ground rules” on the AI revolution in life sciences. Those who adapt early – embedding robust data processes, inclusive design, and rigorous testing – will not only avoid penalties but may find themselves in a leadership position when Europe’s AI-enhanced healthcare system fully materializes.

Sources: Official EU documents and expert analyses (European Commission press releases ([10]), EU AI Act text ([37])); regulatory guidance (MDCG FAQ ([2]) ([9])); industry and legal commentary ([1]) ([31]) ([3]) ([19]); and sector publications (RAPS, MedDeviceGuide, EFPIA, European Pharma Review) ([45]) ([17]). All factual claims above are backed by these sources.


DISCLAIMER

The information contained in this document is provided for educational and informational purposes only. We make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability, or availability of the information contained herein. Any reliance you place on such information is strictly at your own risk. In no event will IntuitionLabs.ai or its representatives be liable for any loss or damage including without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from the use of information presented in this document. This document may contain content generated with the assistance of artificial intelligence technologies. AI-generated content may contain errors, omissions, or inaccuracies. Readers are advised to independently verify any critical information before acting upon it. All product names, logos, brands, trademarks, and registered trademarks mentioned in this document are the property of their respective owners. All company, product, and service names used in this document are for identification purposes only. Use of these names, logos, trademarks, and brands does not imply endorsement by the respective trademark holders. IntuitionLabs.ai is an AI software development company specializing in helping life-science companies implement and leverage artificial intelligence solutions. Founded in 2023 by Adrien Laurent and based in San Jose, California. This document does not constitute professional or legal advice. For specific guidance related to your business needs, please consult with appropriate qualified professionals.

© 2026 IntuitionLabs. All rights reserved.