IntuitionLabs
By Adrien Laurent

EU Digital Omnibus: Pharma & Medical Device AI Compliance

Executive Summary

The EU Digital Omnibus package – unveiled in late 2025 – is a coordinated legislative initiative to simplify and streamline the EU’s digital rulebook. It comprises two intertwined proposals: a Digital Omnibus on AI (amending the EU AI Act of 2024) and a broader Digital Omnibus Regulation (amending assorted data, cybersecurity and electronic communications laws, including the GDPR, Data Act, ePrivacy Directive, NIS2, etc.) ([1]) ([2]). These targeted amendments are explicitly designed to reduce the regulatory burden on businesses while preserving high standards of safety, data protection and consumer trust ([3]) ([4]).

For the life sciences sector (pharmaceuticals, biotech and medical devices), key changes include:

  • Extended Compliance Timelines for AI: High-risk AI systems (notably those integrated in medical devices or used in diagnostic/clinical contexts) would benefit from a conditional, staggered timetable. Instead of fixed August 2026/2027 start dates, obligations would only kick in six or twelve months after the EU confirms that harmonised standards, common specifications or guidance are available, subject to “long-stop” deadlines of 2 December 2027 (for Annex III use-cases) and 2 August 2028 (for Annex I systems, including AI in medical devices) ([5]) ([6]). In practice this grants life sciences innovators significantly more time to prepare for full AI Act compliance ([5]) ([7]) (for example, an AI-based diagnostic device originally due in August 2026 would now have up to mid-2028, depending on conformity-support measures).

  • Integrated Conformity Assessment: The AI Omnibus explicitly confirms that for high-risk AI systems that are also regulated products (e.g. “AI as a Medical Device”), the EU AI Act’s requirements will be fulfilled through the existing MDR/IVDR conformity assessment regime. In other words, manufacturers can use a single certification/QMS process to address both device and AI obligations ([8]) ([9]). Likewise, the procedure for designating notified bodies is streamlined: a single application/assessment will suffice for both MDR/IVDR and AI Act approval. This avoids duplication of testing and accelerates the availability of AI-competent notified bodies ([10]) ([11]).

  • Real-World Testing and Sandboxes: The AI Omnibus expands the EU’s regulatory sandboxes and pre-market testing regime. High-risk AI devices (Annex I) can now be tested in ‘real-world conditions’ before formal market placement ([12]) ([13]) – a boon for companies piloting clinical AI systems. The Commission will also establish an EU-level AI sandbox and require cross-border collaboration among national sandboxes, allowing pharma and medtech firms to refine AI products under regulatory supervision ([12]) ([13]).

  • Research & Bias Detection: Pharmaceutical and biotech firms benefit from clearer research exemptions. EU guidance (as pledged in the Omnibus Staff Working Document) will clarify that pre-clinical research and performance studies do not trigger full AI Act obligations ([14]) ([15]). Crucially, the AI Act is amended to permit limited processing of health or other special-category personal data solely for bias detection and correction in high-risk AI ([16]). This new Article 4a enables, for instance, a pharma company to use patient health data to audit and improve an AI diagnostic model’s equity, provided stringent safeguards are met ([16]).

  • Administrative Relief: Many onerous provisions are relaxed or recalibrated. The blanket obligation for “AI literacy” training becomes a policy encouragement rather than a binding duty ([17]) ([18]). The requirement to register AI systems not deemed high-risk is removed ([19]). SME-friendly measures (simpler documentation, penalty discounting) are extended to “small mid-cap” companies, recognizing that life sciences scale-ups often exceed the SME threshold ([20]) ([21]). Similarly, data protection burdens are eased: e.g. higher thresholds for breach notifications, unified DPIA lists, and exemptions from some transparency notices are proposed ([22]) ([23]).

  • Data Governance Adjustments: The broader Omnibus Regulation proposes targeted updates to GDPR and related laws. Among these, pseudonymised/anonymised research data for which re-identification is not reasonably likely would not count as personal data ([24]) ([4]) (intended to free research datasets from GDPR rules). Scientific research is more clearly defined as a legitimate purpose (including commercial research) and can rely on “legitimate interest” as a lawful basis ([25]). Special category data (health data) may incidentally be processed for AI model development with safeguards ([25]). Overall, these innovations should broaden pharma’s ability to use health data in AI R&D and post-market surveillance, albeit subject to the EDPB/EDPS insistence on preserving fundamental privacy protections ([25]) ([4]).

In sum, the Digital Omnibus package seeks to reconcile Europe’s high standards with the urgent need for agility. If enacted, it would save EU businesses an estimated €5 billion in compliance costs by 2029 ([3]) ([26]). For pharma and medtech companies, the package offers longer lead-times, clearer rules and reduced paperwork on AI and data, which is likely to accelerate innovation – provided companies also maintain robust safety and data safeguards. Regulatory authorities (Commission, EDPB/EDPS) have signalled support for simplification in principle, but continue to underscore that no cornerstones (e.g. personal data protections) should be weakened ([4]) ([27]). In practice, life sciences stakeholders must monitor the Omnibus’ progress and prepare to adapt their compliance strategies accordingly.

Introduction

The EU’s digital regulatory framework has grown extensive and complex. Over the past decade, Europe has enacted pioneering laws in data protection (the GDPR), cybersecurity (NIS-Directive/NIS2), artificial intelligence (the 2024 AI Act), medical devices (MDR/IVDR), digital products (Cybersecurity Act, Data Act, etc.), and more. While these rules aim to ensure safety, privacy and trust, stakeholders have increasingly argued that cumulative burdens now hamper competitiveness. In calls dating from 2024-2025, EU leaders and industry reports warned that regulatory complexity risked slowing innovation across sectors, including life sciences ([28]) ([2]).

Against this backdrop, the Commission launched a Digital Simplification agenda under the mantra “A simpler and faster Europe” ([28]). The centerpiece is the Digital Omnibus – a set of narrow, technical amendments intended to give immediate relief. Announced on 19 November 2025, the package has two pillars ([1]): (1) the Digital Omnibus on AI (amending the EU AI Act and related legislation) and (2) the Digital Omnibus Regulation (amending a sweep of existing digital laws, e.g. GDPR, Data Act, ePrivacy, NIS2). Together these proposals aim to lower compliance costs while “promoting Europe’s highest standards” of rights and safety ([3]).

For pharmaceutical and medical device companies, the timing is salient. Many in the life sciences industry are embracing AI: from biotech firms applying machine learning to drug discovery, to medtech companies embedding AI in diagnostics and implants. According to industry surveys and analyses, AI adoption in European healthcare is already pervasive. One report counted 7,754 healthcare companies in Europe leveraging AI in some capacity, with thousands of new startups dedicated to health AI ([29]). These firms interact with both digital and sectoral law – for example, an AI-driven diagnostic tool must satisfy the new AI Act and the Medical Devices Regulation (MDR). Likewise, pharma firms using AI for clinical trial analysis must comply with GDPR for patient data.

Thus the Digital Omnibus affects life sciences on two fronts: the horizontal AI/data/cyber laws and the sector-specific rules (MDR/IVDR, Medicinal Products Regulation, etc.). In particular:

  • The AI Act (Regulation 2024/1689) classifies AI in medical devices and IVDs as “high-risk” under Annex I ([30]) ([31]). As a result, any such AI systems would ordinarily face rigorous requirements (risk management, documentation, etc.) beginning August 2026. Critics warned that the practical rollout (e.g. standards, notified bodies) was not ready in time. The Digital Omnibus on AI directly addresses these challenges.

  • GDPR and Data Rules: Pharma and medtech handle vast volumes of personal (often health) data – in R&D, trials, pharmacovigilance, patient apps, etc. Proposed GDPR clarifications on pseudonymisation, research use and breach reporting, all of which are key to life sciences, form part of the Digital Omnibus Regulation.

  • Cyber & ePrivacy: Medical institutions and device manufacturers are also subject to new cybersecurity (NIS2) and privacy (ePrivacy) rules. The omnibus proposes modest tweaks here (e.g. aligning breach timelines under GDPR/NIS2, easing cookie consent burdens) that can have downstream impact on health tech compliance.

In this report, we systematically dissect the Digital Omnibus provisions and their implications for pharmaceutical and medical device compliance. We combine legislative analysis, industry commentary and academic insights to detail what is changing and why it matters. We begin with the AI-related amendments, then turn to data protection and other horizontal laws, and finally illustrate with examples. We draw on official sources (Commission communications, regulatory opinions), law firm analyses, and independent studies to ensure each claim is evidence-backed ([3]) ([1]) ([9]) ([4]). Our intent is to provide life sciences stakeholders – from regulators to CEOs – with an authoritative, granular resource on the Digital Omnibus.

The EU Digital Omnibus Package: Scope and Context

In November 2025 the European Commission unveiled a broader digital package with multiple components ([3]) ([2]). Central to this was the Digital Omnibus: an urgent “set of technical amendments to a large corpus of digital legislation” designed to streamline rules on AI, cybersecurity and data ([32]) ([33]). The stated goal is that “compliance with the rules comes at a lower cost, delivers on the same objectives, and brings in itself a competitive advantage to responsible businesses” ([34]). The Commission estimated these changes could save EU businesses at least €5 billion by 2029 ([3]) ([26]). (Separate elements – not the focus here – include a Data Union Strategy and a proposed European Business Wallet for digital identities ([32]).)

The Omnibus comprises two legislative proposals issued 19 Nov 2025:

  1. Digital Omnibus on AI Regulation (AI Act amendments): A Regulation amending the newly adopted EU AI Act (Reg. 2024/1689) to ease implementation challenges. It also amends various related frameworks (notably a small amendment to the EU Civil Aviation safety rules (Reg 2018/1139) to align on AI, as well as extending obligations in the Cybersecurity Act’s AI building blocks). This is sometimes called the “AI Omnibus”.

  2. Digital Omnibus Regulation Proposal: A Regulation amending a broad array of acts in the “digital acquis”: data laws (GDPR 2016/679, Data Act 2023/2854, Free Flow of Non-Personal Data Reg 2018/1807, Data Governance Act 2022/868, Open Data Directive 2019/1024), privacy and electronic communications (ePrivacy Directive 2002/58), cybersecurity (NIS2 Dir 2022/2555), platform regulation (repealing Reg 2019/1150), and more ([35]) ([36]). In effect, it “integrates” DGA/ODD into the Data Act and sweeps away duplicative or outdated requirements.

These proposals followed extensive stakeholder input. Throughout 2025 the Commission held consultations and a public Call for Evidence on digital simplification, receiving input from industries (incl. MedTech Europe, pharma groups like EFPIA), academia and national authorities. Common feedback was that overlapping rules and deferred standards threatened market readiness. The Commission’s Staff Working Document recognizes this: “implementation and harmonised standards for high-risk AI have fallen behind schedule… driving compliance burdens above the level originally envisaged” ([37]). Similarly, the cost of navigating GDPR uncertainties (e.g. for pseudonymized clinical data) was flagged. The Omnibus thus is explicitly a “first step” toward an “agile EU digital rulebook” ([38]) ([39]).

The legislative process is underway. After initial publication, Parliament and Council reflections began (including EDPB/EDPS joint opinions issued Feb 2026 ([4])). As of April 2026, trilogue negotiations loom with an aim to adopt the Omnibus reforms by late 2026. Life sciences companies should therefore prepare for these changes possibly entering into force as early as 2027, while still contributing feedback during the extended consultation period.

Table 1 (below) summarizes the key AI-specific amendments proposed (further details follow). A broader summary of the data/cyber amendments is then provided.

| System/Obligation | Current (AI Act) | Proposed (Digital Omnibus) | Implication |
| --- | --- | --- | --- |
| High-risk AI obligations (Chapter III) | Apply from 2 Aug 2026 (Annex III uses) or 2 Aug 2027 (Annex I) ([6]) | Apply after the Commission confirms standards/guidance are in place (6 mo post-confirmation for Annex III, 12 mo for Annex I), but no later than 2 Dec 2027 (Annex III) / 2 Aug 2028 (Annex I) ([5]) ([7]) | Life sciences get a longer runway to comply; obligations phase in once compliance tools exist. |
| Legacy AI systems (design unchanged) | Must comply by the applicable deadline if placed on market after the cut-off | Units of the same type/model lawfully placed pre-deadline can continue to be placed on the market without new conformity assessment, if the design is unchanged ([40]) | Protects existing products; companies need only re-certify on significant redesign. |
| Conformity assessment (AI + sector laws) | Separate AI assessment (in theory) and device assessment (MDR/IVDR) | Integrated assessment: the MDR/IVDR conformity procedure covers AI requirements ([8]) ([9]) | A single combined QMS and audit covers both; reduces duplication and speeds compliance. |
| Notified body designation | Separate applications for AI Act vs MDR/IVDR | One unified application/assessment possible for both frameworks ([10]) ([11]) | More notified bodies can qualify for AI-device testing; blunts the NB shortage. |
| AI literacy training requirement (Art 4) | Binding obligation on providers/deployers to ensure staff AI literacy | Shifted to a non-binding “encouragement” by the Commission/Member States ([17]) ([18]) | Less legal risk for companies; staff training still recommended but without sanctions for non-compliance. |
| Registration of non-high-risk AI (Art 60) | Voluntary for most (required for certain use cases under Annex III) | Obligation removed for systems that are not high-risk and only perform narrow tasks ([41]) | Simplifies compliance for “low-impact” AI used in pharma/medtech (e.g. administrative tools). |
| AI bias detection (special data, new Art 4a) | No explicit AI Act provision | Permits processing of health-related (special category) data solely for bias detection/correction, under strict safeguards ([16]) | Provides a legal basis for using health data to audit/mitigate bias in AI models, aligning with EFPIA recommendations ([14]). |

Table 1: Key proposed amendments to the AI Act (via the Digital Omnibus on AI) and their relevance to life sciences. “MDR/IVDR” = Medical Device/IVD Regulations.

AI Act Amendments: MedTech and Pharma Implications

The AI Omnibus proposal [COM(2025)836 final] primarily tweaks the 2024 EU AI Act. The principal objective is to ease implementation of the Act’s Chapter III requirements for high-risk AI, which were scheduled to apply from August 2026 onward ([6]) ([5]). Below we explore the main changes and their impact on medical devices and pharmaceutical use-cases.

Extended Timelines for High-Risk AI

Original AI Act: High-risk AI systems had phased dates. Annex III use-cases (e.g. remote biometric ID, certain critical classification) would be subject to obligations from 2 August 2026; AI systems that are regulated products (Annex I – including AI as a medical device or safety component in an MD/IVD) were due from 2 August 2027 ([6]). This tight timeline assumed the Commission would publish all needed standards and guidelines by summer 2026.

Digital Omnibus (AI) Proposal: The Commission fundamentally shifts to a conditional timing. Chapter III high-risk obligations now start only after support measures are in place, not at hard dates ([6]) ([42]). Concretely, once the Commission (through an implementing act) confirms that relevant harmonised standards, common specifications or guidelines are available, the following apply:

  • Annex III systems: obligations commence 6 months later, with a “long-stop” date of 2 December 2027 ([7]).
  • Annex I systems (including AI in medical devices/IVDs): obligations commence 12 months later, with a long-stop of 2 August 2028 ([5]) ([7]).

These staggered timelines give life science companies a more extended compliance runway. As Sidley Bloomberg explains, “the MedTech industry would receive a longer compliance runway than originally anticipated. The underlying obligations remain unchanged, but their application will be phased in when supporting measures become available” ([7]). In practical terms, if harmonised standards for AI in devices (ISO, IEC etc.) lag behind, device makers may effectively not face mandatory AI Act requirements until mid-2028. This acknowledges real-world delays: as one analysis notes, “work on harmonised standards has fallen behind schedule” ([37]).
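The conditional timing rule described above can be sketched as a small illustrative calculation. The 6/12-month grace periods and the long-stop dates come from the proposal as summarized here; the function and variable names are hypothetical, and real compliance planning should of course follow the final legal text rather than this simplification:

```python
from datetime import date
from typing import Optional

# Long-stop dates and grace periods per the proposed amendments (as summarized above).
LONG_STOP = {"annex_iii": date(2027, 12, 2), "annex_i": date(2028, 8, 2)}
GRACE_MONTHS = {"annex_iii": 6, "annex_i": 12}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 to keep it simple)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, min(d.day, 28))

def compliance_start(category: str, confirmation: Optional[date]) -> date:
    """Earlier of (Commission confirmation + grace period) and the long-stop date.

    If the Commission has issued no confirmation of available standards/guidance,
    the long-stop date applies as the backstop.
    """
    long_stop = LONG_STOP[category]
    if confirmation is None:
        return long_stop
    return min(add_months(confirmation, GRACE_MONTHS[category]), long_stop)
```

For example, under this sketch an Annex I medical-device AI system whose supporting standards are confirmed in June 2026 would face obligations from June 2027, while a late or absent confirmation pushes the start to the 2 August 2028 backstop.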

Legacy Systems: The Omnibus also clarifies treatment of existing high-risk devices. If at least one unit of a device was lawfully on the market before the cut-off date (e.g. 2 Aug 2026 for Annex I devices), additional units of the same type/model may continue to be placed on the EU market without re-doing conformity assessments, provided the design is unchanged ([40]). Only significant design modifications would trigger full new compliance. This “grandfathering” clause softens the transition by recognizing already-certified products.

Implications

  • More Time to Prepare: Medtech firms can delay some investment in AI compliance until the legal benchmarks (standards, notifier lists) arrive. Many startups and SMEs have welcomed this breathing room ([5]) ([7]). They can stagger training, documentation updates and pilot testing to align with actual implementation cycles, instead of rigid calendar deadlines.
  • Continued Uncertainty on Timing: Because implementation hinges on the Commission confirming the availability of standards, companies must still monitor progress closely. Legally, obligations “start” upon the Commission’s decision, not automatically. Arnold & Porter cautions that this flexibility is “time-limited” and may be narrowed in trilogue ([43]). Until final deadlines arrive, firms should still prepare for eventual full compliance.

Conformity Assessment and Notified Bodies

Under the current AI Act, high-risk AI systems (including those embedded in regulated products) require ex-ante conformity assessments. But because sectors like medical devices already have their own conformity regimes (under the MDR/IVDR), there was ambiguity on whether a manufacturer would need two separate certifications.

Omnibus Clarification: The proposal explicitly confirms that sectoral conformity takes precedence. If an AI system is a medical device or part of one, then the MDR/IVDR conformity procedure suffices to cover both the device and any embedded AI high-risk requirements ([8]) ([9]). Put differently, a manufacturer can incorporate the AI Act’s substantive obligations (data governance, risk management, documentation, transparency, human oversight, robustness etc.) into the existing technical documentation required by the MDR/IVDR, rather than undergoing a duplicate “AI-only” audit ([8]). The same goes for quality management: a single QMS (as required under MDR/IVDR) may be extended to meet the AI Act’s requirements ([44]).

This harmonization was a key industry ask. Sidley notes: “the AI Omnibus confirms that the AI Act’s substantive high-risk requirements which are not covered by the MDR and IVDR … do not have to be addressed through a standalone AI-only assessment” ([45]). Instead, notified bodies designated under MDR/IVDR will assess the AI parts in the course of the device certification. This saves substantial time and effort. For example, a manufacturer of an AI imaging software (a class IIb device) will not need to hire two different notified bodies for separate audits – one for MDR and one for the AI Act.

Streamlined Notified Body Process: The Omnibus also streamlines the designation of conformity assessment bodies. It allows a combined application and single assessment procedure for bodies seeking designation under both the AI Act and the relevant product legislation ([10]) ([11]). Moreover, existing MDR/IVDR notified bodies get transitional relief: they have 18 months after the Omnibus enters into force to apply for AI Act designation ([10]). These changes should help alleviate the shortage of EU notified bodies with AI expertise that had loomed under the original deadlines.

Implications:

  • Reduced Duplication: MedTech companies benefit from one-stop conformity. Instead of separate documentation and audits, only one comprehensive procedure is required. Medical device manufacturers can integrate AI Act compliance into their ongoing quality and certification processes ([8]) ([9]).
  • Faster Market Access: A single NB audit means fewer delays. An Arnold & Porter advisory highlights that “manufacturers of AI-enabled medical devices … should [be able to] reduce duplication” and that this will “help accelerate the availability of suitably designated notified bodies” ([46]).
  • Continued Complexity: Companies must still ensure they meet all AI Act obligations. The integration clarifications do not reduce substance; AI criteria (e.g. human oversight, data quality) must still be satisfied. It simply means compliance tasks can be bundled together. Careful project planning is needed to incorporate AI risk management into the standard conformity workflow.

Real-World Testing and Regulatory Sandboxes

The AI Act already allowed testing of high-risk AI in real-world conditions (Article 60), but only for Annex III use-cases (e.g. biometric systems, critical classification within designated domains) and under strict conditions. The Omnibus extends real-world testing to Annex I AI systems (including medical devices) and strengthens sandboxes:

  • Real-World Testing: Providers of high-risk systems under Annex I can now test their AI in real operational conditions at any time before market placement ([12]) ([13]). This means, for example, a company developing an AI-powered MRI analyzer can trial it in a hospital setting to validate performance without formally “placing on market.” Earlier, such pre-market validation would risk triggering full AI Act compliance; now it’s expressly permitted.

  • AI Regulatory Sandboxes: The proposal tasks the EU’s AI Office with establishing an EU-level sandbox by 2028, supplementing the national sandboxes already being set up. It also calls for better coordination and information-sharing among Member States’ sandboxes ([12]). For multi-country or cross-border pharma/medtech products, this creates a unified framework for regulated pilot projects.

Implications: These provisions promote innovation. Companies can safely iterate high-risk life science AI systems with real users and data before full clearance. As one analysis puts it, expanded testing opportunities “strengthen regulatory sandbox” for life sciences firms ([12]). However, access will require robust data protection: even within sandboxes or real-world tests, GDPR and medical confidentiality rules still apply. Firms should use these mechanisms to refine AI models (checking for bias, reliability, integration issues) but must continue liaising with ethics boards and DPAs.

AI Literacy and Other Administrative Changes

Several smaller but notable elements of the Omnibus simplify compliance:

  • AI Literacy (Art.4): The original AI Act imposed a broad obligation on all AI system providers/deployers to ensure “sufficient AI literacy” of their personnel. Stakeholders found this too vague and burdensome. The Omnibus recasts this as a collective encouragement: Member States and the Commission “shall encourage” providers/deployers to improve AI literacy, but it is no longer a strict requirement ([17]) ([18]). In essence, companies are still expected to train staff on AI ethics and risk, but failure to do so is no longer directly sanctionable under the AI Act.

  • SME/SMC Relief: Many scaling medtech/biotech firms have “outgrown” SME status but still lack large compliance budgets. The Omnibus extends several SME-specific simplifications to “small mid-caps” (SMCs), defined as firms with fewer than 750 employees ([20]) ([21]). These include lighter technical documentation and reduced fines. Across the board, the Omnibus seeks to proportion regulatory burden to company size – which should assist rapidly growing pharma and biotech scale-ups during their transition beyond SME thresholds.

  • Registration of Non-High-Risk AI: Under the current AI Act, certain AI systems (even if ultimately low-risk) still needed voluntary or mandatory listing. The proposal removes the obligation to register non-high-risk AI systems that serve only narrow, ancillary tasks (e.g. administrative chatbots, logistic planning tools) ([19]). This means, for routine uses of AI within pharma/medtech companies that do not materially affect patients or safety, the compliance paperwork is reduced.

Overall, these measures lighten day-to-day regulatory burden. An Osborne Clarke commentary notes that removing the literacy and registration rules “responds to stakeholder feedback” and “should benefit companies deploying AI for ancillary functions” ([19]). In practice, a small medtech firm can now allocate fewer resources to internal AI governance for its non-core systems, allowing focus on critical risk areas.

Bias Detection and Use of Health Data

One innovative aspect of the AI Omnibus is its treatment of special categories of personal data (especially health data). The original AI Act was silent on whether using health data to audit AI systems was permissible. Recognizing pharma’s reliance on such data, the Omnibus introduces Article 4a: it permits the processing of health and other sensitive personal data solely for bias detection and mitigation in high-risk AI systems ([16]).

To paraphrase, a life sciences company can now explicitly justify using patient data to check if its AI is unfairly discriminating (e.g. underrepresenting a demographic) and to retrain it, provided it meets strict requirements (demonstrating necessity, robust security, data deletion post-process) ([16]). This was a direct response to calls from industry groups: for instance, EFPIA had urged the Commission to clarify AI use in drug R&D, including bias management. The Sidley analysis notes that clearer legal basis for “limited bias-related processing of special categories” will be valuable, though it will likely be “narrowly interpreted” by authorities ([47]).

Implications: This change empowers life sciences companies to tackle AI bias more effectively. For example, a pharma company using an AI algorithm to predict clinical trial outcomes could legally scan de-identified patient records to ensure the model does not systematically disadvantage certain populations – a task that previously lacked a clear lawful basis under EU data protection law. However, the EDPB/EDPS stress that such derogations must remain proportionate ([48]). Companies must still implement rigorous governance and document why lower-risk alternatives (e.g. synthetic data) were insufficient. Nonetheless, the explicit provision should give data scientists more legal certainty when improving AI fairness.

GDPR and Data Law Changes

Alongside the AI Act amendments, the Digital Omnibus Regulation proposes targeted tweaks to data protection and digital laws, many with direct relevance to health data and AI development in life sciences. We highlight the principal ones:

Redefining Personal Data (Pseudonymisation)

A headline proposal is a narrowing of “personal data” in the GDPR. The Commission would introduce a provision clarifying that information is not personal data if the controller does not have means “reasonably likely to be used” to re-identify the individual ([49]). In plain terms, pseudonymized clinical trial data that a research lab receives, but cannot re-link to patient identities, would not count as personal data (absent re-identification capability) ([49]). This builds on emerging case law (CJEU SRB decision) which hinted that pseudonymized data could fall outside GDPR in some contexts.

If adopted, this would ease GDPR compliance for data-driven research. For example, a pharma company hosting a database of de-identified patient scans could arguably treat it as non-personal data, simplifying analysis and transfer. A Sidley blog notes this “may reduce compliance burdens in low-risk scenarios” ([49]). However, the European Data Protection Board and Supervisor oppose this change strongly ([4]). They warn it could dangerously narrow privacy protections and create legal uncertainty about what counts as personal data.

Implication: In 2026 negotiations, companies should watch this closely. If the change survives, life sciences can be more confident when collaborating on truly anonymized research. But given EDPB/EDPS scepticism, organizations cannot bank on a broad pseudonymisation exemption unless co-legislators limit it carefully. In the meantime, firms should continue applying GDPR to health data processing unless and until “identifiability” is conclusively removed in law.

Scientific Research Exemptions

The GDPR contains Article 89 on research, but its scope has been deemed unclear. The Omnibus proposal aims to harmonize and expand the research carve-out:

  • Definition of Research: It would specify that “scientific research” includes essentially all academic and non-academic research, including projects funded by industry ([25]). This explicitly counters Member State ambiguities.
  • Legal Basis: It would clarify that processing personal data (including sensitive data) for developing or operating AI systems can be based on legitimate interest (Art. 6(1)(f)), subject to safeguards ([25]). The current GDPR does not explicitly list AI development as a legitimate-interest purpose.
  • Bias Correction: The proposal permits incidental or residual processing of health data for AI development and bias correction (with conditions) ([25]), aligning with the AI Act's new Article 4a.
  • Transparency Exemptions: The Omnibus would create an exemption from some information duties when data is collected directly for research and providing notice is impossible or disproportionately difficult ([50]). For example, long-term biobank research or retrospective analyses could avoid contacting thousands of subjects.

Implications: These refinements make it easier for pharma and medtech to use health data for R&D. If legitimate interest can be used, companies can sidestep the onerous consent regime for internal data analysis (provided DPIAs, safeguards, and an opposition right are in place). Harmonising the research definition means clinical trials at universities or companies would uniformly benefit from GDPR flexibilities across the EU ([25]). On the other hand, the EDPB/EDPS cautiously requested clarity on these exemptions ([51]). Life sciences entities should therefore implement robust ethical oversight (e.g. patient advisory boards) to justify reliance on these grounds.

Data Protection Impact Assessments and Breach Reporting

The Omnibus includes practical streamlining:

  • Uniform DPIA Lists: The proposal tasks the EDPB with creating EU-wide white/black lists of processing operations that require Data Protection Impact Assessments ([50]). For multinational pharma conducting similar AI projects in different countries, this could prevent conflicting national DPIA decisions (a common headache). The Omnibus would also allow these lists to be updated by the EU regulators to reflect new technology, ensuring consistency.
  • Harmonized Breach Notification: Article 33’s deadlines would be aligned across GDPR and NIS2. Specifically, firms would have 72 hours to notify, but could extend that if approved by the lead authority ([22]). More importantly, the threshold for mandatory notification would be raised (so minor incidents need not be reported). For large pharma/MedTech with prompt internal detection, this means GDPR/NIS2 would present a unified window and report format, reducing the need for separate filings.

Implications: These procedural changes cut red tape. Big pharmaceutical firms often handle dozens of cross-border data projects; a unified DPIA approach means one review instead of 27 national ones. Likewise, aligning breach protocols prevents multiple filings for the same incident. However, companies must still conduct strong risk assessments internally – the reforms simply ease regulatory paperwork.

ePrivacy and Cookies

Though not pharma-specific, the Omnibus also proposes ePrivacy tweaks:

  • Consent Fatigue: To combat endless cookie pop-ups, the directive would push for one-click consent and machine-readable preference signals ([52]). Websites (including health portals or medtech info sites) could rely on browser privacy dashboards instead of custom banners. The EDPB/EDPS welcome this “machine-readable choices” approach ([53]), expecting it to simplify compliance for online services (pharma marketing sites, patient apps etc.).

  • Biometric Authentication: A new derogation allows processing biometric data solely for identity verification if the means remain under the user’s sole control ([54]). This could facilitate fingerprint or face-ID login to patient apps or clinical portals. The watchdogs support this limited exception, noting its potential utility in secure health platforms.

Pharma/MedTech IT teams should thus anticipate less friction for standard web services, but must remain vigilant about broader ePrivacy changes. Notably, the Omnibus also inserts ePrivacy rules into the GDPR (a new Article 88a on consent in terminal equipment) ([55]), which may affect IoT healthcare devices. Stakeholders should watch the interplay: as the EDPB warns, splitting consent obligations between the GDPR and ePrivacy might create gaps ([56]).
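To make the "machine-readable preference signals" idea concrete: one existing example of such a signal is the Global Privacy Control proposal, which conveys an opt-out via a `Sec-GPC: 1` request header. The Omnibus does not mandate any specific signal, so the header checked below is a plausible candidate rather than a legal requirement; server-side code honouring it could look like this sketch.

```python
def honours_preference_signal(headers: dict[str, str]) -> bool:
    """Return True if the request carries an opt-out preference signal.

    Checks the Sec-GPC header defined by the Global Privacy Control
    proposal. Under the Omnibus approach, a site (e.g. a patient portal)
    could respect this signal instead of showing a consent banner.
    """
    return headers.get("Sec-GPC", "").strip() == "1"

# A browser with the privacy preference enabled sends the header:
assert honours_preference_signal({"Sec-GPC": "1"})
# A request without the signal falls back to the site's normal flow:
assert not honours_preference_signal({})
```

The design point is that the preference travels with every request, so a health portal need not maintain its own banner-and-cookie state to know the user's choice.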

Other Data Laws (Data Act, DGA, etc.)

The omnibus also streamlines the wider “data acquis”:

  • Data Governance Act (DGA) and Data Act: The proposal repeals overlapping instruments (including the DGA and Open Data Directive), folding them into an updated Data Act ([57]). While largely outside pharma’s core business, this could affect how companies share industry data and use public research data. Importantly, proposed modifications limit B2G data access to public emergencies only, relieving companies from broad mandates to hand over their data to governments except in crises.

  • Altruistic Data: The Omnibus maintains incentives for data altruism organizations but underlines strong governance. Pharma collaborations like health data trusts would fall under these rules. Any expansion of data intermediaries will require careful due diligence to ensure patient trust.

Pharma companies should monitor Data Act changes insofar as they engage in data-sharing consortia, or use public datasets for AI training. Ensuring compatibility with GDPR (a priority of the EDPB/EDPS ([58])) remains crucial: even if the Data Act grants use of certain data, GDPR legal bases still apply to personal data.

Industry Perspectives and Case Studies

To gauge the practical impact of these proposals, it helps to consider concrete examples:

  • AI in Pharma R&D (Case: Owkin): Owkin (France) is a prominent AI-driven biotech using machine learning to identify novel drug targets and optimize clinical trials. It has raised ~$300 million and partners with top pharma companies ([59]). Owkin’s AI models rely heavily on patient genomic and clinical data. Under the Omnibus proposals, Owkin could process such health data for model bias testing (new Article 4a) and would benefit from the clarified research basis in the GDPR ([16]) ([25]). The extended timelines mean any high-risk AI components of Owkin’s platform would not face full AI Act obligations until as late as August 2028, depending on when supporting standards are confirmed. Additionally, if Owkin’s services are used in multiple Member States, the harmonized research exemptions and DPIA templates would ease cross-border operations. However, Owkin must still ensure it adheres to core data protection principles. The regulatory updates are likely net positive for such firms, as they explicitly address known pain points like the need to test and correct AI fairness.

  • AI-enabled Medical Device (Case: Wandercraft exoskeleton): Wandercraft (France) makes an AI-controlled exoskeleton that helps paralyzed patients walk. The device, with its AI balance algorithm, is classed as a high-risk medical device subject to the MDR and (since mid-2024) the AI Act. Its developers had been concerned about the converging compliance burdens. Under the Omnibus, Wandercraft can plan a single conformity assessment for the hardware plus AI, and delay full AI Act certification beyond the original 2026 date (potentially to 2028). It can also use the real-world testing provisions to trial firmware updates in patient rehab centers without formal re-certification (so long as incidents are tracked). For example, if Wandercraft wants to deploy an AI stability update, it could test the update with patients in a controlled lab setting (with data protection measures in place) before full market release, streamlining innovation. In addition, Wandercraft’s notification obligations in case of a cyber incident affecting the device would fall under the aligned GDPR/NIS2 regime, giving incident response teams a single, clearer reporting window. Overall, the simplifications should lower Wandercraft’s regulatory overhead without compromising patient safety.

  • Clinical AI Service (Hypothetical): Consider a startup offering AI-based analysis of radiology images for cancer detection, licensed to hospitals. This system qualifies as medical device software, and its use of patient images is high-risk. Under the Digital Omnibus:

  • It can register and validate its AI using the expanded testing framework, obtaining clinician feedback in situ before official market placement ([12]).

  • When processing hospital imaging data, the clarified GDPR research provisions mean it may argue legitimate interest (or a research basis) for algorithm training, easing data-sharing contracts ([25]).

  • If the startup qualifies as an SME or small mid-cap (SMC), it gains lighter documentation duties.

  • If the system flags potential bias (e.g. underdiagnosis in a minority group), the developers may legally use stored patient data to correct this bias ([16]).

Each case shows the Omnibus in action: timelines lengthened, dual compliance unified, and data usage rights clarified. Real-world examples like Owkin and Wandercraft illustrate how a medical AI product will fit into the post-Omnibus regime, with more flexibility to innovate. These scenarios are consistent with industry commentaries: for instance, Osborne Clarke emphasizes that the package “may accelerate the availability of suitably designated notified bodies” and help coordinate complex compliance pathways ([60]).

Nonetheless, companies must remain proactive. The Omnibus does not eliminate regulation – it reorganizes it. Life sciences firms should map out all intersecting rules (AI Act, MDR/IVDR, GDPR, HIPAA/clinical trial regs) to ensure the new reliefs are leveraged properly. The message from advisors is clear: cautious preparation is needed. Many Omnibus provisions remain to be negotiated, as the European Parliament and Council may amend or narrow them ([61]). Hence, while the Omnibus promises simplification, firms should continue preparing for full compliance under the existing framework until changes are finalized.

Data, Evidence and Analysis

The Digital Omnibus is driven by both qualitative feedback and quantitative analysis. The Commission’s impact assessment estimates “at least €5 billion in administrative cost savings for businesses by the end of the Commission mandate in 2029” from the Omnibus measures ([26]). This figure includes streamlining data and AI rules; another €1 billion could be saved via other related reforms (in IP and other domains) ([26]). In context, Europe’s health tech sector is a large part of the economy: one industry survey reported ~7,754 healthcare companies across Europe using AI ([29]), with thousands more device and device-software firms making up the broader medtech sector. Even a few percent reduction in bureaucratic cost per company can thus translate to substantial GDP gains.

Econometric modelling in the Commission’s Staff Working Document also breaks down the savings from specific simplifications. For example, eliminating the AI literacy obligation is estimated to relieve thousands of hours of SME compliance work ([62]). Aligning GDPR/NIS2 breach notifications is shown to save companies from redundant reporting (Annex IV, not shown here). A senior EU official noted that the entire digital package, including wallets and other tools, could “unlock another €150 billion in savings for businesses each year” ([3]) – although that figure mainly refers to broader digital identity solutions, it underscores the scale of potential efficiency gains.

On the flip side, regulators emphasize the risks of over-simplification. The EDPB/EDPS Joint Opinion (Feb 2026) welcomes many Omnibus goals but repeatedly warns against undermining rights ([4]) ([63]). For instance, they support raising breach notification thresholds (saving administration) ([23]), yet “strongly urge” co-legislators to reject any changes that would improperly narrow the GDPR’s scope ([4]). This tension – simplification vs. protection – is at the heart of the debate.

Academic and industry experts have similarly weighed in. Health IT analysts note that AI adoption in Europe is soaring. A 2024 report found some 1,752 AI-focused startups continent-wide, with the health sector growing rapidly ([64]). Hiring data shows pharma/health jobs in AI rose even while others plateaued ([29]). This growth context underscores why the EU views regulatory agility as vital. Yet, surveys of medtech CEOs (e.g. by MedTech Europe) caution that regulatory fragmentation remains a concern. Their 2025 consultation response emphasized that “horizontal digital legislation [must be] aligned” to allow innovation to “translate efficiently into safe and effective technologies for patients” ([65]). The Omnibus directly addresses this call by aiming for legal coherence – for example, by explicitly aligning AI with medical device law ([45]).

In terms of future impact, if the proposals are adopted largely as-is, it marks a shift towards more agile, impact-driven regulation in the EU. The EU has often been seen as strict and rigid on digital tech; the Omnibus signals a recognition that rules must evolve in light of implementation realities ([2]). Life sciences companies should thus integrate these legislative trajectories into their 5- to 10-year strategies. R&D pipelines, data governance policies, and compliance budgets may need adjustment. Those who anticipate the new timelines and tooling (e.g. sandboxes) can gain a competitive edge, while those who assume old deadlines remain in force risk misallocating resources.

Discussion and Future Directions

The Digital Omnibus arrives at a moment of overlapping transformations in EU life sciences regulation. Besides the digital package, the EU has signalled upcoming revisions to the MedTech regime itself (a “MDR/IVDR Relief” proposal in December 2025) and continued evolution of health data frameworks (e.g. the upcoming Health Data Space). Together, these indicate a broad desire to make compliance more realistic for innovation-intensive sectors.

Key takeaways for stakeholders:

  • Regulators’ stance: Both the Commission and legislative bodies acknowledge industry pain points. EU Council conclusions in March/June 2025 explicitly demanded a “simplification and better regulation agenda” for digital rules ([66]). MEPs have drafted amendments (likely to become law) that emphasize avoiding overlap and burden on SMEs/SMCs ([67]) ([18]). At the same time, the Parliament’s think-tank warns simplification must not “upset the fragile equilibrium” of rights protections ([27]). In negotiations, expect closely fought drafting to ensure any easing remains strictly technical.

  • Industry adjustments: Companies should audit their portfolios to identify which products/systems are affected. For each AI-enabled product, firms should reassess its risk category and determine its new compliance deadline. They should also prepare for integrated conformity: ensure their Quality Management System can absorb AI Act requirements alongside MDR/IVDR ones. Pseudonymisation tools and robust de-identification will become even more strategic if the personal data definition narrows, though companies must also plan for the scenario that it might not.

  • Use of sandboxes and guidance: Life sciences innovators should engage with EU-level sandboxes and provide sector-specific feedback during ongoing consultations. For example, the Staff Working Document indicates forthcoming sectoral guidance from the AI Office on research exemptions ([14]). Companies developing cutting-edge AI in medicine should be vocal about their practical needs during guideline drafting.

  • Investment and Innovation: The cost savings estimated (billions in aggregate) could be reinvested in R&D. The European Commission anticipates that firms will spend “more time innovating and scaling-up” instead of paperwork ([3]). If the Omnibus does indeed reduce administrative drag, it may accelerate deployment of AI therapies, personalized medicine, or next-generation diagnostics.

Future Risks and Watchpoints:

  • Legislative load balancing: The Omnibus is explicitly horizontal, but life science companies also face tightened health-sector rules (e.g. stricter MDR/IVDR timelines, clinical trial regulations). There is a risk of oscillation between tightening in one area and easing in another. Strategic alignment between health-specific law and digital law will be crucial.

  • Scope of “simplification”: The Commission’s approach prides itself on preserving objectives. However, practical disputes will arise. For instance, defining when AI model updates count as “significant design changes” (triggering recertification) will be contentious ([40]). Similarly, what exactly qualifies as bias mitigation “incidental processing” under GDPR is nebulous. Companies will need careful legal interpretation, potentially requiring case-by-case coordination with authorities.

  • Global context: The EU is not alone in adjusting AI rules. Variants of “regulatory sandboxes” and standard-setting initiatives are emerging worldwide (e.g. US FDA’s AI action plan, ISO standards). Alignment or divergence with international norms will affect market access. A nimble EU regime could set a benchmark, but excessive caution from data regulators (EDPB/EDPS) could conversely hamper predictability.

Outlook: Overall, the Digital Omnibus marks an important evolution in EU policy: from one-size-fits-all to contextual regulation. For pharma AI and medical devices, the shift recognizes the sector’s complex dual regulation. The ability to innovate in healthcare increasingly depends on data and software; Europe’s success in digital health will partly hinge on how well these legislative tweaks are implemented. Moving forward, firms should plan for a transition period (through 2028) where Omnibus provisions gradually take effect. Engaging with industry associations (EFPIA, MedTech Europe) and regulators will help ensure the final laws strike the right balance.

Conclusion

The EU’s Digital Omnibus Simplification initiative is a landmark effort to untangle regulatory burdens at a critical juncture for life sciences innovation. By reworking the AI Act, GDPR and related laws in tandem, the EU aims to ensure that compliance costs do not stifle the very advancement of safe, effective AI-driven therapies and devices. The proposed changes – extended timelines for compliance, harmonised assessments, clarified data rules – respond directly to concerns voiced by pharmaceutical and medtech stakeholders. They promise concrete benefits: faster market entry, unified quality systems, clearer research pathways, and billions of euros in administrative savings ([3]) ([26]).

However, these simplifications come with caveats. The EDPB and EDPS remind us that simplification should not dilute fundamental rights ([4]). Industry players must therefore approach the new regime with measured optimism. They should leverage easier processes and additional flexibility (e.g. sandboxes, bias analysis), but not ignore the enduring principles of patient safety and data protection. Risk managers and legal teams will still need to map each AI application through the intersecting rulebooks of EU law, ensuring that the benefits of the Omnibus are fully captured without compliance gaps.

Looking forward, the Digital Omnibus is a stepping stone. The Commission’s simultaneous “Digital Fitness Check” (launched Nov 2025) will examine the cumulative impact of EU digital laws, potentially leading to deeper reforms ([68]). For now, life sciences companies should stay abreast of the Omnibus’s legislative path, incorporate its timelines into planning, and continue engaging in the public consultation.

In summary, the Digital Omnibus initiative reshapes the EU compliance landscape for Pharma AI and Medical Devices. It reallocates the regulatory load – granting more flexibility and unifying processes – while reinforcing Europe’s commitment to trustworthy innovation. If successfully implemented, it may well spur a new era where European healthcare providers and patients gain access to cutting-edge digital solutions more efficiently, without compromising on the high standards that underpin the EU’s single market.

References: As detailed above, all key claims draw on official EU documents and expert analyses ([3]) ([37]) ([1]) ([9]) ([25]) ([4]). Specific citations are provided inline.



© 2026 IntuitionLabs. All rights reserved.