IntuitionLabs
By Adrien Laurent

FDA Pathways for AI SaMD: 510(k), De Novo & PMA Guide

Executive Summary

The regulatory framework for Software as a Medical Device (SaMD) in the United States offers three primary premarket pathways: 510(k), De Novo, and Pre-market Approval (PMA). Each path has distinct requirements and is chosen based on device risk, novelty, and the existence of a marketed predicate. For AI/ML-driven SaMD, these decisions must carefully weigh the device’s risk classification and technological maturity. Historically, the FDA’s 510(k) pathway has dominated, clearing the vast majority of moderate-risk devices as “substantially equivalent” to existing predicates ([1]) ([2]). By contrast, the De Novo pathway provides an option for novel moderate-risk devices without predicates ([3]); only a small fraction of AI SaMD have used De Novo (~2.9% of 691 reviewed cases) ([4]). PMA is reserved for high-risk class III devices (life-supporting or high-impact applications) ([5]) ([6]) and has been extremely rare for pure AI/ML software (~0.4% of AI devices) ([4]).

Selecting the appropriate path has major implications. The 510(k) route can significantly shorten time-to-market (often cleared within ~3–6 months) and avoids costly trials, but requires a very similar predicate device. De Novo filings take longer (typically ~9–12 months on average ([7])) but establish new device types for future predicates. PMA typically involves multi-year review and extensive clinical evidence, used only when safety risks mandate the highest scrutiny ([6]).

Recently, FDA’s digital health regulations have evolved to address AI-specific concerns. The FDA has introduced new guidance on AI-enabled lifecycle management, including requirements for pre-market plans for adaptive algorithms ([8]) ([9]) and detailed Good Machine Learning Practices (GMLP). The FDA now requires structured electronic submissions (eSTAR templates) for 510(k) and De Novo filings ([10]) ([11]), and enforcement of the updated Quality Management System Regulation (QMSR) began in 2026 ([12]).

In sum, while accelerated innovation in AI/ML medical devices continues, manufacturers must align with longstanding FDA statutes and emerging AI-specific policies. This report synthesizes historical context, current practices, and future trends to guide stakeholders in choosing and preparing the appropriate regulatory pathway for AI/ML-driven SaMD. We examine each pathway in detail, analyze clearance data and case examples (e.g. IDx-DR, Viz.ai, Arterys, HeartFlow), and discuss evolving FDA expectations (e.g. draft AI guidance, IMDRF frameworks, EU AI Act) that will shape AI SaMD regulation going forward.

Introduction and Background

Software as a Medical Device (SaMD) – standalone software intended for medical purposes without being part of a hardware product – has grown rapidly with advances in artificial intelligence (AI) and machine learning (ML) ([13]). AI/ML technologies now empower diagnostics, decision-support, and therapy guidance across specialties. For example, autonomous AI can screen diabetic retinopathy ([14]), interpret radiology scans ([15]) ([16]), and monitor vital signs via wearable sensors ([17]). The FDA has recognized that SaMD uniquely straddles software innovation and medical performance, prompting both new guidance and adaptation of existing device laws.

Globally, regulators have sought harmonization on SaMD. The International Medical Device Regulators Forum (IMDRF) defined SaMD and developed risk-categorization frameworks, emphasizing the software’s clinical context and intended use ([13]) ([18]). For instance, an IMDRF matrix classifies SaMD by (1) the significance of information the software provides (inform, drive, diagnose/treat) and (2) the state of the healthcare situation (critical, serious, non-serious) ([19]). Category IV (highest risk) arises when software is used to “treat or diagnose” critical conditions, while Category I (lowest risk) aligns with informational outputs for non-serious conditions ([19]). Although the FDA does not formally assign IMDRF categories, such models inform risk assessment: a diagnostic image-analysis tool, for example, typically falls into a higher-risk category than a scheduling aid.
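The IMDRF matrix described above can be made concrete as a small lookup table. This is an illustrative sketch only: the corner cells (Category IV and Category I) come directly from the text, the intermediate cells follow the published IMDRF risk-categorization framework, and all function and variable names are our own.

```python
# Illustrative lookup of the IMDRF SaMD risk-categorization matrix:
# (significance of information, state of healthcare situation) -> category,
# where "I" is lowest risk and "IV" is highest.
IMDRF_MATRIX = {
    ("treat/diagnose", "critical"): "IV",
    ("drive", "critical"): "III",
    ("inform", "critical"): "II",
    ("treat/diagnose", "serious"): "III",
    ("drive", "serious"): "II",
    ("inform", "serious"): "I",
    ("treat/diagnose", "non-serious"): "II",
    ("drive", "non-serious"): "I",
    ("inform", "non-serious"): "I",
}

def imdrf_category(significance: str, situation: str) -> str:
    """Return the IMDRF SaMD category for a software function."""
    return IMDRF_MATRIX[(significance, situation)]

# A diagnostic image-analysis tool for a serious condition:
print(imdrf_category("treat/diagnose", "serious"))  # III
```

As the text notes, the FDA does not formally assign these categories, but a lookup like this makes the risk logic explicit: a scheduling aid ("inform", "non-serious") lands in Category I, while an autonomous diagnostic for a critical condition lands in Category IV.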

Under U.S. law, medical devices are categorized into Class I, II, or III by statutory risk (Box 1). Class I devices are low-risk (often exempt), Class II moderate-risk (requiring general and special controls), and Class III high-risk (require full PMA review) ([5]) ([6]). Most AI/ML SaMD reviewed so far have been classified as Class II. For example, the FDA explicitly designates “Retinal diagnostic software” as Class II (special controls) ([20]) – which corresponds exactly to AI tools like the IDx-DR system for diabetic retinopathy screening ([14]). The classification drives which regulatory pathway applies (described below).

Box 1. U.S. Medical Device Classes and Pathways. Devices are classified by risk: Class I (general controls, often exempt), Class II (general + special controls), Class III (life-supporting/high-risk, requires PMA). FDA regulates SaMD under its existing device authorities. Premarket pathways: 510(k) for demonstrating “substantial equivalence” to a predicate; De Novo for novel moderate-risk devices without a predicate; PMA for novel high-risk devices. A 510(k) submission shows that the new device is as safe and effective as an existing legally marketed device ([21]). By contrast, the De Novo pathway (established by law in 1997) enables a new device to be classified as Class I or II and creates a new product code when no predicate exists ([3]). The PMA process is the most stringent, requiring comprehensive clinical evidence of safety/effectiveness for Class III devices ([6]).

New legislative and guidance changes are shaping SaMD regulation. The FDA’s Quality Management System Regulation (QMSR) – aligning FDA QSR with ISO 13485 – became enforceable Feb 2, 2026 ([12]). All 510(k) and De Novo submissions must now use the electronic eSTAR template (mandatory for De Novo as of Oct 2025) ([10]) ([11]). In parallel, the 21st Century Cures Act (2016) carved out exemptions for certain software functions (e.g. image enhancement) that serve merely as adjuncts to clinical decision-making, reducing regulatory burden on lower-risk digital tools. However, AI/ML functions that directly inform or make clinical decisions typically do not qualify for those exemptions. For example, the FDA explicitly noted that IDx-DR, an autonomous AI tool guiding retinopathy care without a clinician, was not exempt under the Cures Act and required full marketing authorization ([22]).

This report assumes devices are regulated by the FDA (U.S. market) and focuses on FDA pathways, but where helpful we note international context. Understanding the choice among 510(k), De Novo, and PMA is crucial for AI/ML SaMD sponsors: it determines evidentiary needs, review timelines, and market strategy. Subsequent sections detail each path, contrast them, and supply data-driven insight and examples. We then discuss AI-specific lifecycle issues (e.g. algorithm changes, “good ML practices”, demographic fairness) and future regulatory trends.

The 510(k) Pathway for AI/ML SaMD

Overview: The 510(k) Premarket Notification is the FDA’s most commonly used pathway for moderate-risk devices ([1]). A 510(k) submission must show that the new device is “substantially equivalent” (SE) to a predicate device on the market. Specifically, sponsors demonstrate equivalence in intended use and either technological characteristics or performance data ([21]). The process assumes the predicate has been proven safe and effective; the agency then ensures the new device does not raise new safety or efficacy questions.

When It’s Used: For AI/ML SaMD, 510(k) is appropriate if:

  • A predicate device exists with essentially the same intended use and technology. For example, a new AI algorithm for chest X-ray nodule detection could use a cleared algorithm in the same category (e.g. other radiographic CAD tools) as a predicate.
  • The risk level is moderate (Class II) or lower. In practice, most AI/ML SaMD cleared via 510(k) have been classified Class II. (Notably, Class I “reserved” devices can only use 510(k) if specifically listed as requiring it.)
  • The new software’s differences from the predicate are not substantial enough to alter mode of action or risk profile. In other words, the core clinical function remains essentially the same.

The 510(k) pathway is much faster and less burdensome than De Novo or PMA, making it attractive for software updates or incremental innovations. According to regulatory reviews, the median 510(k) clearance time for AI/ML devices is on the order of 142 days, i.e. about 4.5 months ([23]). In 2025, 24% of AI/ML 510(k)s were cleared in <90 days and only 22% took >200 days ([23]), reflecting efficiency. By comparison, De Novo reviews average nearer 10–11 months ([7]).

Evidence Requirements: A 510(k) file typically includes a detailed software description, performance bench testing, and often some clinical or retrospective test data. The FDA has clarified that for AI/ML SaMD, performance should be evaluated on representative datasets covering the intended patient population. Since many FDA-cleared AI tools target imaging analysis, companies often provide accuracy/sensitivity/specificity metrics from retrospective image sets. Notably, the FDA now expects robust subgroup analyses; reviewers will flag submissions lacking demographic performance breakdowns as a bias risk (see Implications) ([24]). Discussing real-world performance during premarket review aligns with FDA’s emphasis on transparency for AI outputs.
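To make the subgroup expectation above concrete, the sketch below computes sensitivity and specificity per demographic stratum from a labeled retrospective test set. The data, subgroup labels, and function names are hypothetical illustrations, not drawn from any actual submission.

```python
# Hypothetical subgroup performance analysis for an AI classifier:
# compute (sensitivity, specificity) per demographic group from
# (group, y_true, y_pred) case records.
from collections import defaultdict

def sens_spec(records):
    """Compute (sensitivity, specificity) from (y_true, y_pred) pairs."""
    tp = sum(1 for y, p in records if y and p)
    fn = sum(1 for y, p in records if y and not p)
    tn = sum(1 for y, p in records if not y and not p)
    fp = sum(1 for y, p in records if not y and p)
    return tp / (tp + fn), tn / (tn + fp)

def subgroup_performance(cases):
    """cases: iterable of (subgroup, y_true, y_pred). Returns per-group metrics."""
    groups = defaultdict(list)
    for group, y, p in cases:
        groups[group].append((y, p))
    return {g: sens_spec(recs) for g, recs in groups.items()}

# Toy retrospective test set with two age strata:
cases = [
    ("age<65", 1, 1), ("age<65", 0, 0), ("age<65", 1, 1), ("age<65", 0, 1),
    ("age>=65", 1, 1), ("age>=65", 1, 0), ("age>=65", 0, 0), ("age>=65", 0, 0),
]
print(subgroup_performance(cases))
```

A gap between strata in output like this (here, lower sensitivity in the older group) is exactly the kind of finding reviewers expect a submission to surface and discuss rather than omit.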

Risk Controls: Even for 510(k) devices, FDA often imposes software-specific special controls. For example, 21 CFR 886.1100 classifies retinal diagnostic software (like IDx-DR) as Class II with controls requiring validation of algorithm performance, documentation of data inputs, and cybersecurity measures ([20]). AI/ML SaMD sponsors should anticipate similar requirements: thorough documentation of algorithm training, independent validation, and risk management plans (per GMLP principles) are now expected content in 510(k) submissions ([25]) ([6]). FDA’s new draft AI-guidance (Jan 2025) explicitly calls for lifecycle risk management over total product lifecycle ([26]), signaling that 510(k) applicants must propose how model updates will be controlled (see PCCP below).

Case Examples – 510(k): Many recent AI/ML SaMD cleared via 510(k) illustrate common scenarios:

  • Apple Watch HTN Notification (Cardiovascular ML Software) – In 2025, Apple gained FDA clearance for an AI algorithm on its Watch that non-invasively flags possible hypertension ([17]). The FDA classifies this product (“Hypertension machine learning–based notification software,” Product Code SFR) as Class II ([27]) ([28]). This means Apple could submit a 510(k) (which it did) rather than De Novo or PMA.
  • Viz.ai ICH Plus (Intracerebral Hemorrhage Quantification) – Viz.ai’s 2024 clearance of “Viz ICH Plus” is another example ([29]). The FDA cleared a 510(k) for this deep-learning algorithm that measures hemorrhage volume on CT scans. (As a Class II tool, it justified 510(k).)
  • Arterys Cardio DL (Cardiac MRI Segmentation) – Arterys received a 510(k) in 2017 for their cloud-based Cardio DL application, the first cleared deep-learning MRI tool ([16]). It automates ventricular segmentation on cardiac MRI. Since predicate MRI segmentation tools existed, Arterys could show substantial equivalence and avoid PMA.

These and hundreds of other devices reflect 510(k)’s ubiquity for AI SaMD: indeed, in a review of 691 FDA-cleared AI/ML devices, roughly 97% used 510(k) ([2]) ([4]). 510(k) is thus the “default” route whenever an existing category applies. Table 1 (below) summarizes the 510(k) pathway alongside the other routes.

The De Novo Pathway for AI/ML SaMD

Overview: The De Novo pathway is intended for novel, low-to-moderate risk devices that have no marketed predicate. It was established by the FDA Modernization Act of 1997 to “foster innovative devices by providing an intermediate pathway between 510(k) and PMA” ([3]). After FDASIA (2012), De Novo can be requested directly, without first failing a 510(k) ([30]). A granted De Novo request results in: (1) the device being classified as Class I or II (never Class III), (2) creation of a new FDA product code (with special controls defined in regulations), and (3) the device becoming a first-of-kind predicate for future 510(k) submissions ([31]).

When It’s Used: De Novo is appropriate if the device is moderate risk (Class II), has no suitable predicate, and does not qualify as exempt or low-risk. In practice, AI/ML SaMD sponsors consider De Novo if their algorithm is first-in-class for a particular use. For example, the autonomous IDx-DR retinopathy scanner was granted a De Novo in 2018 (DEN180001) because no prior automated diabetic retinopathy device existed ([32]) ([33]). The De Novo path is also chosen for novel functions in cardiology, AI-based pathology tools, and other arenas without existing claims.

However, if an AI tool is considered “high-risk” (life-sustaining or high-impact diseases), De Novo is not available; high-risk AI goes via PMA. Additionally, De Novo is meant for new categories whose risks can be reasonably mitigated by general/special controls. For example, DermaSensor’s handheld melanoma detection device received De Novo in Jan 2024 (DEN230008) under Class II ([34]), because algorithmic colorimetric analysis of skin is novel but not inherently life-critical. (Table 2 presents real-world examples.)

Evidence Requirements: A De Novo submission requires substantial evidence of safety and effectiveness – essentially like a mini-PMA. This typically means bench and clinical data demonstrating performance. FDA’s 2017 De Novo guidance specifies that sponsors should provide “comprehensive information” including test data, risk analysis, and proposed special controls ([31]). In practice, AI/ML De Novo applications often include human clinical validation. For instance, IDx-DR’s FDA summary reported a pivotal clinical study of 900 patients ([14]). Dermasensor and other De Novo-authorized AI devices similarly provided prospective or retrospective diagnostic accuracy data. As noted by Aboy et al., De Novo applicants now prepare dossiers akin to PMA applicants, because no predicate exists to rely on ([31]) ([35]).

Special Considerations: A successful De Novo yields a new predicate. Future entrants can then pursue 510(k) if they match that first device’s intended use and tech. This “predicate-ifying” effect is a major benefit. However, innovators must be prepared for longer review times and potentially tougher review standards. On average, De Novo reviews have been around 338 days (median 309 days) in recent years ([7]) – roughly 2–3 times the duration of a typical 510(k). Also, because De Novo devices often create new categories, FDA may require creating novel special controls (e.g. labeling requirements, performance standards) tailored to the AI risk. White papers stress that including post-market monitoring plans (like Performance Goals) and engaging early with FDA (Q-Submissions) can smooth the process.

Case Examples – De Novo: Notable AI-driven De Novo products include:

  • IDx-DR (Digital Diagnostics) – An autonomous diabetic retinopathy screener. DEN180001 was granted April 2018 ([32]) after a pivotal clinical trial (87.4% sensitivity) ([14]). Subsequently, IDx-DR became the predicate for future retina-AI 510(k)s.
  • Notal Vision Home OCT System – A home-based retinal OCT for macular degeneration detection. Cleared via De Novo in May 2024 (DEN230043) as no home OCT device existed previously.
  • DentalMonitoring – Remote orthodontic monitoring software for braces, cleared by De Novo (DEN230035, May 2024). No predicate for this remote AI workflow existed.
  • DermaSensor (DermaSensor Inc.) – A handheld pigmented-lesion analyzer that won De Novo classification (DEN230008, Jan 2024) ([34]). It analyses optical signals to predict melanoma risk. FDA determined general/special controls could ensure safety for this Class II device.

According to one analysis, only ~2.9% of AI/ML devices used De Novo ([4]), reflecting that most rapid SaMD innovations have found predicates. However, De Novo remains vital for cutting-edge SaMD categories. Table 1 (previous section) and Table 2 (below) include examples of De Novo-clearances.

The PMA Pathway for AI/ML SaMD

Overview: Premarket Approval (PMA) is the FDA’s most stringent review for Class III devices ([5]) ([6]). Statutorily, Class III includes devices that “support or sustain life” or “present a potential unreasonable risk” ([5]). An AI/ML SaMD proceeding under PMA would typically involve high-risk diagnoses or direct therapy decisions. PMA requires a full application of clinical (and non-clinical) data establishing safety and effectiveness.

When It’s Used: AI/ML software is rarely classed as Class III. However, if a software function is deemed life-critical, PMA becomes necessary. For example, any AI algorithm whose failure could lead to death or serious harm (without clinician safety nets) would likely trigger Class III. To date, few if any pure AI-only devices have gone through standard PMA. Instead, many AI features are embedded in higher-risk hardware systems: e.g. a hospital ventilator’s AI may fall under the device’s PMA. A clear stand-alone AI example that used PMA is HeartFlow FFR-CT. This 2014 FDA-approved software computes fractional flow reserve from coronary CT images – an inherently high-stakes diagnosis. The agency “approved” (PMA-level review) HeartFlow’s FFR-CT as a novel Class III tool for ischemia ([36]). Other PMA cases are even rarer, and they generally involve algorithms integrated with Class III devices.

Evidence Requirements: The PMA bar is extremely high. Clinical trials are the norm, often randomized controlled trials. AI-specific extra evidence (like learning curve or user training) might be scrutinized. For example, HeartFlow’s approval was accompanied by multiple prospective studies demonstrating its diagnostic accuracy ([36]). A PMA submission for an AI algorithm must justify not only algorithmic performance but also that any risk (e.g. false-negative results) is acceptably low. The requirements are outlined in 21 CFR Part 814, which calls for “valid scientific evidence” (clinical and bench data) ([6]). In practice, any AI/ML software that truly requires a PMA would face expectations akin to drugs – something most developers avoid unless needed.

Case Examples – PMA: Pure AI PMAs are scarce. Aside from HeartFlow FFR-CT (Class III imaging analysis) ([36]), other AI features have piggybacked on device PMAs or used combination-product reviews. For instance, advanced arrhythmia-detection algorithms in implantable defibrillators are covered under the PMA of the device itself, rather than under a separate software submission. Another example is the FDA’s authorization of the Dexcom G7 continuous glucose monitor (2023), which includes an algorithm that automates dosing suggestions; it was reviewed at the PMA level largely because, as part of an integrated dosing “system,” the algorithm is a software component of a Class III device.

In summary, sponsors of AI/ML SaMD generally aim to stay in Class II to use 510(k) or De Novo. If sponsors assess their function as Class III, they must be prepared for the full PMA process. Table 1 highlights that PMA is designated for “high-risk/critical” uses (IMDRF Category IV) ([37]) ([6]). The consequence is that a device with similar indications but lower risk can complete a 510(k), while truly critical AI devices must face PMA-level scrutiny.

Regulatory Pathway Comparison

Basis of Pathway Selection: Choice among 510(k), De Novo, and PMA depends on three key dimensions: (1) Risk Level of the device (as indicated by FDA class / IMDRF category), (2) Novelty/Predicate (whether a legally marketed predicate exists), and (3) Intended Use (e.g. diagnose vs. inform). Table 1 below (adapted from industry guidance ([37])) summarizes these considerations:

  • 510(k) Clearance. When it applies: a predicate device with the same intended use/technology exists; the device is low–moderate risk (usually Class II or “reserved” Class I); the new design is substantially equivalent (SE) to the predicate. Typical risk: FDA Class II (IMDRF II–III). Market effect: rapid clearance (often months); the device is declared “substantially equivalent”; avoids new PMA trials.
  • De Novo Classification. When it applies: no suitable predicate exists; the device is low–moderate risk but novel (cannot rely on any existing 510(k)); the intended use is clear and not deemed high-risk. Typical risk: FDA Class I/II (IMDRF II–III). Market effect: creates a new device type and a predicate for future 510(k)s; longer review (~9–12 months) with more evidence needed.
  • Premarket Approval (PMA). When it applies: the device is high-risk or life-critical (Class III; e.g. it influences life-support decisions); the intended use requires proof of safety/effectiveness via comprehensive data. Typical risk: FDA Class III (IMDRF IV). Market effect: lengthy review (often 1–3 years); full clinical trial evidence required; the device is “approved” rather than just cleared.

Table 1. Summary of FDA regulatory pathways for SaMD. 510(k) is used when substantial equivalence to an existing device can be shown ([37]); De Novo when no predicate exists but risk is low/moderate ([3]); PMA for high-risk (Class III) devices ([6]).

Table 1 underscores the “default” nature of 510(k) for AI/ML SaMD. Aboy et al. report that ~99% of all devices use 510(k) ([1]), and for AI/ML specifically, nearly 97% follow 510(k) ([2]) ([4]). In practice, very few AI/ML SaMD actually meet the threshold for PMA. This skew means sponsors often structure their product or indication to fit a Class II paradigm if possible – for example, by limiting labeling to aiding diagnosis rather than making final decisions.
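The coarse triage logic summarized in Table 1 can be sketched as a simple decision function. This only encodes the two decision dimensions named above (device class and predicate availability); real pathway determinations involve many more factors and early FDA interaction (e.g. Q-Submissions), and the function name is our own.

```python
# Rough premarket-pathway triage sketch based on the Table 1 dimensions.
def select_pathway(device_class: int, has_predicate: bool) -> str:
    """Coarse triage: FDA class (1-3) and predicate availability -> pathway."""
    if device_class == 3:
        return "PMA"        # high-risk / life-critical devices
    if has_predicate:
        return "510(k)"     # substantial-equivalence route
    return "De Novo"        # novel low/moderate risk, no predicate

print(select_pathway(2, True))   # 510(k)
print(select_pathway(2, False))  # De Novo
print(select_pathway(3, False))  # PMA
```

The skew described above falls out of this logic: because most AI/ML SaMD are Class II and most indications already have a predicate category, the first two branches dominate in practice.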

Evidence and Burden: As reflected in Table 1, the evidence burden escalates from 510(k) → De Novo → PMA. For 510(k), demonstrating equivalence often means bench testing and moderate clinical validation. De Novo requires standalone demonstrations of safety/effectiveness (often with substantial clinical data since no predicate exists). PMA typically mandates large clinical trials. The FDA’s risk-based guidance now also ties evidence level to risk impact: devices that “drive” important clinical decisions need stronger clinical evidence than mere informative tools ([38]) ([33]). In short, an AI that only informs a clinician (low risk) may clear via 510(k), whereas an AI that treats/diagnoses major conditions may need De Novo or PMA and robust trials.

Data on AI/ML SaMD Clearances

Empirical analyses underscore these trends. A comprehensive review identified 691 FDA-cleared AI/ML devices (through Oct 2023) ([39]). Key findings were:

  • Dominance of 510(k): By far the most common pathway. ~97% of AI/ML devices were cleared via 510(k) ([39]) ([4]). Only ~2.9% used De Novo and ~0.4% used PMA ([4]).
  • Specialty Focus: Radiology accounted for the majority. The 2025 data showed ~71.5% of new AI clearances in radiology ([15]), reflecting ample image data availability. Cardiology and neurology trailed.
  • Rise over Time: AI/ML SaMD approvals surged post-2018 ([39]), reflecting technological advances. In 2025 alone, one analysis found 295 AI/ML 510(k) clearances ([23]), nearly one per weekday. This rapid growth emphasizes why streamlined pathways (510(k)) are crucial for timely innovation.
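As a quick arithmetic check, the cited pathway shares can be converted back into approximate device counts out of the 691 reviewed devices; rounding explains why the shares do not sum to exactly 100%.

```python
# Convert the reported pathway shares of 691 AI/ML devices into
# approximate counts (percentages as cited; counts are rounded).
TOTAL = 691
shares = {"510(k)": 0.97, "De Novo": 0.029, "PMA": 0.004}

counts = {path: round(TOTAL * share) for path, share in shares.items()}
print(counts)  # {'510(k)': 670, 'De Novo': 20, 'PMA': 3}
```

Roughly 670 clearances via 510(k) versus about 20 De Novo grants and a handful of PMAs: the imbalance the surrounding text describes.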

Figure 1 (below) schematizes pathway usage among these devices: essentially all AI SaMD cleared to date have used 510(k) or De Novo, with PMA nearly zero. (HeartFlow FFR-CT and a few other legacy high-risk cases are notable exceptions ([36]).)

Figure 1. Distribution of FDA regulatory pathways for AI/ML-enabled SaMD (through Oct 2023). Data from Joshi et al. (2024) ([39]) ([4]). The 510(k) pathway is by far dominant.

Other data points: De Novo reviews have averaged about 338 days (median 309) ([7]), roughly 2–3 times the duration of a typical 510(k) review, and about 15% shorter than contemporary PMA reviews. In 2025, the median 510(k) review for AI devices was ~142 days ([23]). Regulatory experts note that while FDA has goals (e.g. 90-day review for 510(k)), actual reviews can vary based on complexity and completeness. Notably, the FDA’s new eSTAR template has eliminated Refuse-to-Accept delays for 510(k) and De Novo, speeding eligibility checks ([40]). However, for AI devices especially, a poorly documented submission (e.g. missing algorithm details or validation) can still incur hold times. Data also show that about 10% of 2025 AI clearances included predetermined change control plans (PCCPs) for future algorithm updates ([41]), reflecting FDA’s encouragement of such planning for adaptive AI.

AI/ML Lifecycle and Regulatory Considerations

Beyond the initial pathway, AI/ML SaMD raise unique issues across the product lifecycle. The FDA has published several guidance documents relevant to AI SaMD (Table 3) and is actively shaping policy (e.g. with the Digital Health Center of Excellence). Key points include:

  • Algorithm Modifications and PCCP: AI models may be updated post-market (as new data accrue). The FDA’s 2021 “Predetermined Change Control Plan” (PCCP) concept encourages manufacturers to pre-specify anticipated modifications in their submission ([9]) ([25]). For example, a PCCP would detail what kinds of retraining or performance drift mitigation a device will undergo, and how these changes will be tested and documented. Any change outside an approved PCCP would require a new submission. Thus, well-designed PCCPs are becoming a commercial necessity: they allow iterative learning while avoiding repeated 510(k)s for routine updates. FDA’s 5 guiding principles emphasize that PCCPs should be focused/limited, risk-based, and evidence-driven ([42]) ([43]). Sponsors should adopt PCCPs early and describe them in premarket documentation (many recent De Novo approvals explicitly included them).

  • Good Machine Learning Practices (GMLP): FDA and international bodies emphasize development controls. This includes high-quality, well-curated training datasets; transparent model documentation; robust validation plans; and provisions for human oversight ([44]) ([26]). The FDA’s draft 2025 guidance on “AI-Enabled Device Software Functions” stresses a total product lifecycle (TPLC) approach to risk management ([26]). Submissions now commonly include a description of how data were collected, how bias was assessed, and how performance is monitored over time. Notably, submissions are expected to include demographic representation in test data, as the FDA warns that omitting demographic analysis is no longer acceptable on equity grounds ([26]) ([44]).

  • Cybersecurity and Software Supply Chain: Like any software medical device, AI SaMD must meet cybersecurity standards. Recent FDA guidance mandates software bill-of-materials (SBOM) for networked devices and pen-testing of externally facing interfaces. (By mid-2025, incomplete SBOMs were flagged during eSTAR checks.) FDA’s updated QMSR (2026) also requires documented processes for software design, maintenance, and change control which apply to AI development life-cycles ([12]). These frameworks are technology-neutral but are vigorously enforced, given the interconnected nature of modern AI platforms.

  • Post-Market Surveillance and Real-World Performance: AI/ML devices may degrade over time or behave unpredictably on new populations. The FDA has signaled it will require more robust post-market monitoring for AI SaMD. For instance, certain clearances now include conditions for periodic performance reporting or registries (see case studies). The FDA’s RWE (Real-World Evidence) guidance, finalized in Dec 2025, encourages manufacturers to plan data collection on clinical use. These trends suggest that, even after clearance, sponsors must actively collect and evaluate real-world data to ensure ongoing safety/effectiveness of AI algorithms.

  • International Regulatory Interplay: U.S. sponsors should also be aware of global trends. In the European Union, the new AI Act (effective 2026–28) will classify all AI/ML-based medical software as “high-risk AI” by default ([45]). High-risk AI systems face stringent obligations on data governance, transparency, and human oversight (on top of existing MDR requirements) ([46]). For example, EU CE-marked SaMD must now include detailed AI-system documentation and show demographic representativeness in training data. Similarly, the FDA and other agencies are discussing international harmonization (IMDRF, IEC standards, etc.) of SaMD quality systems. U.S. firms often plan global strategies that can leverage FDA-clearances for EU filings (via Mutual Recognition Agreements), but they must note that EU notified bodies also scrutinize software, and in practice AI features may trigger additional scrutiny in EU under MDR.
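Two of the lifecycle points above, a PCCP-style performance floor and real-world performance monitoring, can be sketched together as a simple rolling check. The threshold, window size, and class design below are hypothetical illustrations of the concept, not FDA-prescribed values.

```python
# Illustrative post-market monitor: track rolling sensitivity on real-world
# positive cases against a pre-specified (PCCP-style) performance floor.
from collections import deque

class PerformanceMonitor:
    """Flag when rolling sensitivity drops below a pre-specified floor."""

    def __init__(self, floor: float, window: int):
        self.floor = floor
        self.outcomes = deque(maxlen=window)  # 1 = positive detected, 0 = missed

    def record(self, y_true: int, y_pred: int):
        if y_true:  # only true-positive cases affect sensitivity
            self.outcomes.append(1 if y_pred else 0)

    def breached(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough real-world data yet
        return sum(self.outcomes) / len(self.outcomes) < self.floor

monitor = PerformanceMonitor(floor=0.85, window=5)
for y, p in [(1, 1), (1, 1), (0, 0), (1, 0), (1, 1), (1, 0)]:
    monitor.record(y, p)
print(monitor.breached())  # True: rolling sensitivity 3/5 = 0.60 < 0.85
```

In a real deployment, a breach like this would trigger the investigation or update procedure pre-specified in the PCCP rather than an ad hoc fix.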

Table 3. Select recent FDA guidance relevant to AI/ML SaMD. Key topics include AI lifecycle management, data transparency, and quality systems. (FDA guidances are non-binding; compliance ensures ‘least risk’ submissions.)

  • Predetermined Change Control Plans (PCCPs) for MLMD (Jan 2021, FDA/HC/MHRA): establishes guiding principles for manufacturers to plan and control future AI algorithm changes to ensure safety/effectiveness ([9]) ([25]).
  • AI-Enabled Device Software Functions: Lifecycle Mgmt… (Draft) (Jan 2025, FDA draft): recommends what to include in AI/ML device marketing submissions, focusing on risk management over the total product lifecycle ([26]).
  • Software as a Medical Device: Clinical Evaluation (Final) (Apr 2017, FDA): international consensus (IMDRF) on clinical-evidence standards for SaMD, covering performance-validation basics.
  • Device Software Functions including AI/ML (Draft, Cures Act) (Sept 2019, FDA draft): clarifies definitions and exclusions for “software as a device,” laying groundwork for AI/ML device distinctions (e.g. when software is regulated).
  • Quality System Regulation Modernization (Feb 2026, FDA): final rule updating FDA QSR to align with ISO 13485:2016; affects software design controls and supplier management for all devices, including AI/ML SaMD ([12]).

These guidance updates (and others) make clear that the FDA expects AI/ML developers to adopt rigorous software engineering and transparency practices from design through post-market. Manufacturers should monitor FDA communications (e.g. CDRH’s AI-enabled devices list ([47])) and engage with FDA early (through Pre-Submissions) to clarify expectations for their specific device function.

Case Studies and Real-World Examples

To illustrate these pathways, consider the following representative AI/ML medical devices and their regulatory routes:

  • IDx-DR (Digital Diagnostics): autonomous diabetic retinopathy screening from retinal images. Class II (software); De Novo (DEN180001, 2018) ([32]) ([48]).
  • Viz ICH Plus (Viz.ai): AI quantification of intracerebral hemorrhage on CT scans. Class II (software); 510(k) (2024) ([29]).
  • Apple Hypertension AI (Apple Inc.): AI-based hypertension risk detection on a smartwatch. Class II (software); 510(k) (2025) ([17]) ([28]).
  • Arterys Cardio DL™ (Arterys): automated cardiac MRI ventricle segmentation (deep learning). Class II (software); 510(k) (2017) ([16]).
  • DermaSensor (DermaSensor, Inc.): handheld skin lesion analysis (melanoma detection). Class II (software); De Novo (DEN230008, 2024) ([34]).
  • Notal Vision Home OCT (Notal Vision, Inc.): home retinal OCT for AMD monitoring. Class II (software); De Novo (DEN230043, 2024).
  • HeartFlow FFR-CT (HeartFlow, Inc.): non-invasive coronary ischemia assessment (CT-based FFR). Class III (software); PMA (2014) ([36]).

Table 2. Selected AI/ML-enabled SaMD and their FDA premarket pathways. IDx-DR (2018) pioneered autonomous eye exams via a Class II De Novo grant ([32]) ([48]). Arterys Cardio DL (2017) and Viz.ai's ICH tools (2024) exemplify 510(k) clearances for advanced deep-learning imaging software ([16]) ([29]). HeartFlow's FFR-CT system (2014), by contrast, illustrates the PMA route for a life-critical CFD-based algorithm ([36]). Apple's recent Hypertension Notification feature shows a consumer-grade AI function going through the 510(k) channel ([17]) ([28]).

These case studies show that when the use case overlaps existing devices and the risk is moderate, sponsors have successfully used the 510(k) route. When a first-of-its-kind clinical function is introduced (as with IDx-DR or Home OCT), De Novo has been the appropriate path. The only PMA example shown is HeartFlow, reflecting its high-risk context. Each product's clearance came with tailored evidence requirements and controls (e.g., IDx-DR's De Novo summary included a pivotal clinical trial ([14]), while Arterys provided extensive validation against radiologist contours ([16])).

Discussion and Future Directions

Implications for Stakeholders: For AI/ML SaMD developers, pathway choice is pivotal. A 510(k) strategy demands finding a predicate and focusing on equivalence data; this can expedite clearance but locks in labeling and predicate limitations. De Novo requires investing in more upfront evidence and analysis but can yield category leadership. PMA should be anticipated only if the intended use is inherently high-risk (e.g. replacing or automating a critical clinical judgment). In all cases, planning for lifelong learning must begin early: a strong Predetermined Change Control Plan (PCCP) is now considered a must-have in any submission ([49]) ([9]). Moreover, the regulatory trend is moving toward greater scrutiny of AI performance in diverse populations; manufacturers should accumulate validation data on subgroups and be prepared to monitor bias post-market.
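
The pathway heuristic described in this report can be sketched as a small decision function. This is an illustrative simplification only; `DeviceProfile` and `suggest_pathway` are invented names for the example, and real pathway determination depends on the device's 21 CFR classification, predicate analysis, and dialogue with the FDA:

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    """Illustrative attributes that drive premarket pathway choice."""
    risk_class: str        # "I", "II", or "III" per 21 CFR classification
    has_predicate: bool    # a legally marketed device with the same intended use
    life_sustaining: bool  # life-supporting or otherwise high-impact function

def suggest_pathway(device: DeviceProfile) -> str:
    """Rough heuristic mirroring the article's discussion -- not legal advice."""
    if device.risk_class == "III" or device.life_sustaining:
        return "PMA"       # highest-risk devices require full premarket approval
    if device.has_predicate:
        return "510(k)"    # demonstrate substantial equivalence to the predicate
    return "De Novo"       # novel low/moderate-risk device with no predicate

# Example: an autonomous diagnostic with no predicate (an IDx-DR-like profile)
print(suggest_pathway(DeviceProfile("II", has_predicate=False, life_sustaining=False)))
# -> De Novo
```

In practice the interesting work happens inside each branch (predicate search, special controls analysis, clinical evidence planning); the sketch only captures the top-level triage the article describes.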

Regulators, too, must balance innovation with safety. The surge in AI devices (hundreds per year) strains review capacity, prompting new tools (like eSTAR) and policies that focus reviewer attention on device-specific concerns (demographics, cybersecurity, training-set validity). The FDA's transparent AI-enabled devices list ([47]) helps clinicians and patients identify which approved tools use AI, but it also puts a spotlight on quality and adherence to guidance. Internationally, convergence efforts (IMDRF, IEC, the EU AI Act) are shaping a more uniform landscape; a U.S. grant or clearance may facilitate approvals in Europe or Asia, though differences remain (e.g., Notified Body scrutiny of the software life cycle under the EU MDR).

Trends and Emerging Issues: AI/ML SaMD will continue to evolve. Key future directions include:

  • Adaptive and Continuous Learning: Regulators are actively exploring how to oversee algorithms that change after commercialization. Innovations in clinician-in-the-loop learning and cloud-based model updates will likely lead to more robust post-market requirements. The FDA may come to require periodic submissions when models drift, or mandate formal "Algorithm Change Protocols" of the kind outlined in its proposed AI/ML regulatory framework. Real-world performance monitoring will likely become formalized through databases or registries for AI devices.
  • Large Language Models (LLMs) and Generative AI: The FDA has recognized that future SaMD may incorporate foundation models (e.g. GPT-style systems) ([50]). While current AI/ML in diagnostics is largely feature-based (images, signals), new devices may use generative models for clinical text analysis or patient interaction. This raises new questions, but the FDA and IMDRF are beginning to extend their frameworks to address LLM safety and transparency.
  • Societal & Ethical Oversight: As AI SaMD proliferates, public concerns (equity, privacy, algorithmic bias) are driving legislative attention. As noted, the FDA may soon demand demographic performance reports as a regulatory requirement ([24]). Future legislation or FDA guidance might impose post-market reporting on outcomes (e.g., does an AI tool actually improve patient care?). These total-product-lifecycle concerns go beyond traditional premarket review.
  • Regulatory Innovation: Given the fast pace of AI, FDA is experimenting with novel approaches. For example, the (now-terminated) Digital Health Precertification program aimed to rate companies' quality systems in order to expedite review of their products. Although the original Pre-Cert pilot ended, its spirit lives on in accelerated programs (Breakthrough Devices, Safer Technologies Program) and in the FDA's collaboration forums (IMDRF working groups on software and AI). We may see new pathways or exemptions for well-controlled AI diagnostics in the future, but none have been codified yet.
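
The real-world performance monitoring discussed above can be illustrated with a minimal drift check of the kind a PCCP or post-market surveillance plan might pre-specify. The class name, window size, and tolerance below are assumptions invented for the example, not values from any FDA guidance:

```python
from collections import deque

class PerformanceMonitor:
    """Minimal post-market drift detector: flags when rolling accuracy
    falls more than `tolerance` below the premarket baseline."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.results = deque(maxlen=window)  # rolling window of 0/1 outcomes

    def record(self, prediction_correct: bool) -> None:
        self.results.append(1 if prediction_correct else 0)

    def drifted(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False  # not enough field data yet to judge
        rolling = sum(self.results) / len(self.results)
        return rolling < self.baseline - self.tolerance

# Example: premarket baseline accuracy 0.90; simulate degraded field
# performance of ~0.80, which breaches the 0.85 trigger threshold.
monitor = PerformanceMonitor(baseline_accuracy=0.90, window=100)
for i in range(100):
    monitor.record(i % 5 != 0)  # 80% of predictions correct
print(monitor.drifted())  # -> True
```

A real program would stratify such checks by demographic subgroup and tie the trigger to a pre-specified corrective action (retraining, labeling change, or a new submission), as the PCCP guidance contemplates.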

In conclusion, choosing 510(k) vs. De Novo vs. PMA for an AI/ML SaMD is not merely a legal box-check: it reflects a holistic risk management decision. Stakeholders must match device capabilities to FDA categories and shape development practices accordingly. The insights in this report, drawn from regulatory history, data analyses, and real devices, aim to guide developers in navigating the complex FDA ecosystem in 2026. As AI/ML in healthcare matures, industry and regulators alike will need to balance innovation with robust oversight, ensuring patients benefit from AI’s promise in a safe, equitable, and transparent manner.

References

  • FDA, Center for Devices and Radiological Health, Office of Communication. Artificial Intelligence-Enabled Medical Devices. FDA AI Devices List (accessed Apr 2026) ([51]).
  • U.S. Food and Drug Administration. Quality Management System Regulation (QMSR) (final rule, 2026). Federal Register notice and guidance ([12]).
  • Aboy, M., Crespo, C., Stern, A. Beyond the 510(k): The regulation of novel moderate-risk medical devices… in the FDA’s De Novo pathway. npj Digital Med. 7, 29 (2024). npj Digital Medicine 2024 ([33]) ([3]).
  • U.S. Food & Drug Admin., Premarket Approval (PMA) database and definitions. FDA PMA page (accessed 2026) ([5]).
  • Joshi, G. et al. FDA-Approved AI/ML-Enabled Medical Devices: An Updated Landscape. Electronics 13(3), 498 (2024). MDPI electronics ([2]) ([4]).
  • 2025 Year in Review: AI/ML Medical Device 510(k) Clearances. Innolitics (Dec 2025). AI/ML 510(k) Roundup ([23]) ([15]).
  • U.S. Food & Drug Admin., Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations (Draft Guidance, Jan 2025). FDA Guidance ([26]).
  • U.S. Food & Drug Admin., Software as a Medical Device (SaMD) (CDRH webpage). FDA SaMD definitions (accessed Apr 2026) ([13]).
  • U.S. Food & Drug Admin., eSTAR Program. FDA eSTAR page (accessed Apr 2026) ([10]) ([11]).
  • U.S. Food & Drug Admin., Predetermined Change Control Plans for ML-Enabled Medical Devices: Guiding Principles (Jan 2021). FDA GMLP/PCCP Guidance ([9]) ([25]).
  • 21 CFR §886.1100, Retinal Diagnostic Software Device (FDA classification). CFR e-CFR (accessed 2026) ([20]).
  • Press Release, U.S. FDA (Apr 2018): FDA Approves IDx-DR AI for Diabetic Retinopathy Screening. Retinal Physician (Apr 1 2018). Retinal Physician News ([14]).
  • FDA Access Data, Device Classification – De Novo DEN180001 (IDx-DR). FDA De Novo database (accessed Apr 2026) ([32]).
  • Joshi et al., “Only 2.9% De Novo, 0.4% PMA” (see above) ([4]).
  • Aboy et al., De Novo review times (mean 338 days) ([7]).
  • Cardiovascular Business (Feb 2026): FDA Clears Apple Watch Hypertension AI. Cardiovasc. Bus. (accessed Apr 2026) ([17]).
  • FDA Product Classification (Hypertension ML-Notification Software, 21 CFR 870.2380). FDA Product Code SFR (accessed Apr 2026) ([27]) ([28]).
  • Viz.ai Press Release (Feb 2024): FDA Clearance for Viz ICH Plus. Viz.ai News ([29]).
  • Viz.ai Press Release (Feb 2022): FDA Clearance for Viz ANEURYSM. Viz.ai News ([52]).
  • Diehl, H. FDA Approves HeartFlow FFR-CT… (AngioplastyOrg, Nov 2014). FDA HeartFlow Press ([36]).
  • Biospace (Jan 2017): Arterys Receives 510(k) for AI Cardiac MRI Deep Learning. BioSpace News ([16]).
  • European Commission (2024). Regulation (EU) 2024/1689 (AI Act), Annex III. EU AI Act text (accessed Apr 2026) ([45]).
  • Additional FDA guidance (clinical evaluation, Cybersecurity, Software QMS, etc.) as referenced above.

