By Adrien Laurent

FDA SaMD Classification: AI & Machine Learning Guide

Executive Summary

The U.S. Food and Drug Administration (FDA) regulates AI/ML-based Software as a Medical Device (SaMD) under the same risk-based framework that applies to all medical devices ([1]) ([2]). In practice, nearly all FDA-authorized AI/ML SaMD have been classified as Class II (moderate-risk) devices. For example, in 2024 the FDA authorized 168 AI/ML-enabled devices, all designated Class II, via the 510(k) (94.6%) or De Novo (5.4%) pathways ([3]) ([4]). AI/ML SaMD span a wide array of clinical functions (e.g. diagnostics, quantification, triage), but none to date have been deemed Class III (high risk, requiring PMA) ([5]) ([2]). The FDA's oversight emphasizes premarket review of AI algorithms' safety and effectiveness and is now evolving toward total product lifecycle management (including postmarket algorithm changes) ([6]) ([7]). This report examines how AI/ML SaMD are defined and classified, reviews the regulatory framework (including recent FDA guidance and action plans), and analyzes trends in FDA-authorized AI devices. Key findings include:

  • Regulatory Scope: Software functions intended for diagnosis, treatment, or patient management are “medical devices” under the Federal Food, Drug, and Cosmetic Act if they meet statutory definitions. AI/ML algorithms used in clinical care generally qualify as SaMD unless explicitly exempted (e.g. administrative tools, basic wellness apps) ([1]) ([2]). Thus, AI/ML SaMD are regulated just like any other software medical device, using established pathways (510(k), De Novo, PMA) based on risk ([1]). The FDA often exercises enforcement discretion for very low-risk software (wellness and nonclinical apps) ([1]).

  • Risk Classification: The FDA uses a three-tier class system (I–III) based on risk to the patient. Class I is lowest risk (often exempt from premarket review), Class II is moderate risk (requires 510(k) or De Novo), and Class III is high risk (requires PMA) ([2]). AI/ML SaMD typically implicate non-trivial clinical decisions or diagnoses, placing them mostly in Class II. Indeed, all reviewed AI/ML SaMD to date have been treated as Class II devices ([5]) ([4]). For example, IDx-DR, an autonomous diabetic retinopathy detection tool, was classified Class II via the De Novo process ([8]). Similarly, Viz.ai's intracranial hemorrhage quantification algorithm (Viz ICH Plus) and Lunit's AI chest X-ray triage system were cleared as Class II via 510(k) reviews ([9]) ([10]). By contrast, no pure AI/ML SaMD has reached Class III, although the FDA notes that any device "support[ing] or sustain[ing] human life" or posing "unreasonable risk" could fall under Class III ([11]). The tables below illustrate the FDA classification schema and selected AI SaMD examples.

| FDA Device Class | Risk Description | Regulatory Pathway | Example AI/ML SaMD |
| --- | --- | --- | --- |
| Class I | Low risk to patient (general wellness, specimen handling) | Usually exempt; if not exempt → 510(k) | Basic image viewers or wellness apps (enforcement discretion) |
| Class II | Moderate risk (diagnosis, monitoring of non-life-threatening conditions) | 510(k) clearance or De Novo (novel device type without a predicate) | IDx-DR autonomous fundus AI (diabetic retinopathy); Lunit INSIGHT CXR triage AI ([8]) ([10]) |
| Class III | High risk (life-sustaining or life-supporting functions, high-stakes diagnosis) | PMA (premarket approval) | None authorized to date as pure AI; e.g. AI in implantable devices could be Class III ([11]) |
| Selected FDA-Authorized AI/ML SaMD | Year | Intended Use / Function | FDA Regulatory Pathway (Class) |
| --- | --- | --- | --- |
| IDx-DR (autonomous fundus diagnostic) ([8]) | 2018 | Detect "more than mild" diabetic retinopathy without physician oversight | De Novo (Class II) |
| Lunit INSIGHT CXR Triage ([10]) | 2021 | Triage emergent findings on chest X-rays via AI | 510(k) (Class II) |
| Viz.ai ICH Plus (hemorrhage quantification) ([9]) | 2024 | Quantify intracerebral hemorrhage volumes on CT | 510(k) (Class II) |
| Prenosis Sepsis ImmunoScore ([12]) | 2024 | Predict risk of sepsis from demographics and vitals | De Novo (Class II) |

(Sources: FDA databases and press releases ([8]) ([9]) ([10]) ([12]).)

Introduction and Background

Regulatory Definitions: Under U.S. law, a medical device is defined by its intended use (diagnosis, treatment, etc.) and is subject to FDA regulation ([13]) ([1]). "Software as a Medical Device" (SaMD) refers to standalone software (including AI/ML algorithms) intended to perform medical functions without being part of a hardware medical device ([13]) ([1]). The International Medical Device Regulators Forum (IMDRF) and FDA adopt similar definitions: if an AI software function meets the device definition (e.g. it analyzes patient data to inform diagnosis or therapy), it is regulated as a medical device ([13]) ([1]). The 21st Century Cures Act (2016) further clarified that certain low-risk software (such as clinical decision support that lets clinicians independently review the basis for its recommendations) is not a device, but AI/ML tools that independently analyze data typically are ([1]).

Global SaMD Framework: The IMDRF has developed a risk categorization framework for SaMD (Categories I–IV) by crossing the "significance of information" the software provides (Inform, Drive, Treat/Diagnose) with the seriousness of the healthcare condition (Non-serious, Serious, Critical) ([14]). Category I (lowest risk) corresponds to informing clinical management of non-serious conditions, and Category IV (highest) involves software that treats or diagnoses critical conditions ([14]). This IMDRF model primarily guides evidentiary and development rigor, whereas the FDA uses its traditional Class I–III system based on patient risk ([14]) ([1]). In practical terms, FDA oversight focuses on higher-risk SaMD: very low-risk functions (e.g. administrative support, specimen logistics) often receive enforcement discretion, while diagnostic or monitoring algorithms undergo full review ([1]).
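
To make the two-axis structure concrete, the sketch below encodes the IMDRF categorization as a simple lookup. This is a minimal illustration in Python; the matrix follows the commonly cited IMDRF N12 framework summarized above and is not an official FDA or IMDRF tool.

```python
# Sketch of the IMDRF SaMD risk categorization (per IMDRF/SaMD WG/N12).
# Keys: (seriousness of the healthcare condition, significance of the
# information the SaMD provides). Category IV is highest risk, I is lowest.
IMDRF_CATEGORY = {
    ("critical", "treat_or_diagnose"): "IV",
    ("critical", "drive"): "III",
    ("critical", "inform"): "II",
    ("serious", "treat_or_diagnose"): "III",
    ("serious", "drive"): "II",
    ("serious", "inform"): "I",
    ("non-serious", "treat_or_diagnose"): "II",
    ("non-serious", "drive"): "I",
    ("non-serious", "inform"): "I",
}

def imdrf_category(condition_state: str, significance: str) -> str:
    """Look up the IMDRF SaMD category for a given use context."""
    return IMDRF_CATEGORY[(condition_state, significance)]

# Example: AI that drives clinical management of a serious condition
print(imdrf_category("serious", "drive"))  # -> "II"
```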

AI/ML in Medical Devices: AI and machine learning (ML) technologies have proliferated in healthcare as data-driven tools to detect patterns, quantify features, and make predictions from medical images, signals, and records ([15]) ([8]). Examples include AI for imaging (radiology, ophthalmology), physiology (arrhythmia detection), pathology (cancer grading), and predictive analytics (sepsis risk, cardiac events). As of early 2025, over 1,000 AI/ML-enabled devices have FDA authorization ([16]) ([17]). Bringing these devices to market has entailed extensive clinical testing and regulatory review, highlighting both their transformative potential and the need for robust oversight. The FDA has acknowledged that AI SaMD pose novel challenges: algorithms can "learn" or evolve, and results may vary with different inputs, requiring updated regulatory strategies ([6]) ([7]).

Regulatory Evolution: Historically, AI-related SaMD fell under existing digital health policy. The FDA’s Digital Health Innovation Action Plan (2017) and subsequent initiatives (including establishment of the Digital Health Center of Excellence) emphasized that SaMD are assessed by traditional risk-classification methods ([15]) ([1]). Notably, the FDA’s 2021 AI/ML Action Plan laid out a “total product lifecycle” approach: it called for new guidances on premarket submissions (e.g. Predetermined Change Control Plans for algorithm changes), best practices for ML development, transparency, and postmarket monitoring ([18]). These efforts aim to balance innovation with patient safety as AI SaMD become more sophisticated.

FDA Classification of Medical Devices and SaMD

Device Classifications (I–III): By statute, all medical devices are categorized into three classes based on risk to patients ([2]). Class I devices are low risk and often exempt from premarket review; Class II devices are moderate risk and normally require 510(k) clearance (showing substantial equivalence to a predicate device) or De Novo classification if no predicate exists; Class III devices are high risk and require full Premarket Approval (PMA) with evidence of safety and effectiveness ([2]). FDA regulation explicitly applies to software when it meets the device definition; thus AI/ML SaMD are classed by intended use and risk, not by the fact that they are software. In practice, moderate-risk AI SaMD are Class II (most image analysis, diagnostic decision support, etc.), while trivial or supporting software may fall into Class I, much of which is exempt from premarket review ([2]) ([1]). Only if an AI/ML function were life-sustaining (e.g. controlling a critical intervention) would Class III apply ([11]). The sketch below illustrates this decision flow.
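
As a rough illustration, the hypothetical helper below encodes the class-to-pathway flow described above. This is a simplified sketch: real determinations turn on intended use, product codes, and exemption status under the FDA's classification regulations, not a three-branch lookup.

```python
def premarket_pathway(device_class: int, exempt: bool = False,
                      has_predicate: bool = True) -> str:
    """Simplified sketch of the FDA premarket pathway by device class.

    Mirrors only the general flow described in the text; actual pathway
    depends on the device's regulation, product code, and intended use.
    """
    if device_class == 1:
        # Class I is low risk and often exempt from premarket review.
        return "Exempt (register and list)" if exempt else "510(k)"
    if device_class == 2:
        # Most AI/ML SaMD land here: 510(k) with a predicate, else De Novo.
        return "510(k)" if has_predicate else "De Novo"
    if device_class == 3:
        # High-risk, life-sustaining functions require full PMA.
        return "PMA"
    raise ValueError("FDA device classes are I, II, or III")

print(premarket_pathway(2))                      # -> "510(k)"
print(premarket_pathway(2, has_predicate=False)) # -> "De Novo" (e.g. IDx-DR)
```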

Differences for SaMD: The FDA has emphasized that SaMD must be evaluated "like any other device" ([1]). However, SaMD pose unique issues. Unlike hardware, software can be updated remotely, and ML models can drift as they are exposed to new data. This raises two classification-related concerns. (1) Software updates: the class framework must accommodate iterative changes. Traditionally, manufacturers must submit a new 510(k) for "major" modifications affecting safety (21 CFR 807.81(a)(3)). The FDA's 2019 discussion paper and 2024 draft guidance on "Predetermined Change Control Plans" (PCCPs) explicitly acknowledge that FDA will allow pre-specified algorithm updates without new submissions ([18]) ([7]); thus, an AI SaMD may be cleared for future learning as long as changes occur under a reviewed PCCP (illustrated in the sketch below). (2) Transparency and bias: risk classification also influences required evidence and labeling. For Class II AI SaMD, FDA may demand clinical validation studies, algorithmic bias evaluations, and clear performance claims. Past summaries show that only around 30% of AI devices mention sensitivity/specificity in labeling, and only ~16–17% include a PCCP ([19]), though FDA encourages broader transparency.
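
The conceptual sketch below checks whether a proposed model update falls inside a pre-authorized change envelope. It is purely illustrative: the field names, change types, and thresholds are invented for this example and do not come from FDA guidance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChangeEnvelope:
    """Hypothetical PCCP 'envelope' pre-specified in a marketing submission.
    All fields here are invented for illustration only."""
    allowed_change_types: frozenset
    min_sensitivity: float
    min_specificity: float

def needs_new_submission(change_type: str, sensitivity: float,
                         specificity: float, pccp: ChangeEnvelope) -> bool:
    """True if a proposed update falls outside the reviewed PCCP, i.e. the
    kind of 'major' change that would traditionally trigger a new 510(k)
    under 21 CFR 807.81(a)(3)."""
    in_scope = change_type in pccp.allowed_change_types
    meets_floors = (sensitivity >= pccp.min_sensitivity
                    and specificity >= pccp.min_specificity)
    return not (in_scope and meets_floors)

pccp = ChangeEnvelope(frozenset({"retrain_same_architecture"}), 0.90, 0.85)
print(needs_new_submission("retrain_same_architecture", 0.93, 0.88, pccp))  # False
print(needs_new_submission("new_input_modality", 0.95, 0.95, pccp))         # True
```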

IMDRF vs. FDA Classes: It is important to note that the IMDRF risk categories (I–IV) do not map directly to FDA device classes (I–III). For example, an AI SaMD in IMDRF Category II (e.g. "drive diagnosis for a non-critical condition") would likely be FDA Class II (510(k)). A Category IV device (highest risk, "treat/diagnose critical condition") would almost certainly be FDA Class III under current rules, although no such pure AI product has been cleared yet ([11]). In essence, both frameworks are risk-based: the higher the potential clinical impact of the AI function, the higher the FDA class and the more stringent the review ([2]) ([14]).

Regulatory Pathways for AI/ML SaMD

510(k) and De Novo: Because most AI/ML SaMD carry moderate risk, they typically enter the market via 510(k) clearance. The predicate device may or may not itself be AI-enabled, and many recent AI SaMD have used other AI devices as predicates. In 2024, 97.5% of AI SaMD 510(k)s cited at least one predicate (81% cited a single predicate) ([19]); only about 2.5% had no clear predicate, reflecting FDA's firm reliance on substantial equivalence. When no predicate exists (a truly novel AI mechanism), the manufacturer may pursue De Novo classification to create a new device type. The first FDA-authorized autonomous AI diagnostic, IDx-DR, was classified via De Novo in 2018 ([8]). More recently, the Sepsis ImmunoScore was granted De Novo authorization in 2024 as a completely novel AI diagnostic ([12]). According to analyses, approximately 5–10% of AI SaMD use the De Novo pathway, with the vast majority cleared via 510(k) ([4]) ([19]). Table 1 above shows examples.

Premarket Approval (PMA): Class III devices must undergo the PMA process. To date, no standalone AI/ML SaMD has been approved via PMA. This likely reflects the fact that algorithms have been applied so far to tasks that, while important (e.g. imaging triage, disease screening), are not deemed immediately life-or-death on their own. If, in the future, an AI algorithm directly controlled a life-support machine or administered therapy, it would likely need PMA. The FDA has not signaled any special carve-outs: Class III remains reserved for the most critical functions ([11]).

Enforcement Discretion and Exemptions: The FDA has issued guidance listing categories of software functions that are not medical devices (e.g. online medical textbooks, administrative health IT, certain wellness apps) ([20]). AI software used purely for administrative purposes, or software that simply re-displays data (like basic image viewers), may fall outside regulation entirely. Even when a function technically meets the device definition, very low-risk AI applications may benefit from enforcement discretion. For example, if an AI app only provides health education or lifestyle advice without specific medical recommendations, the FDA likely would not enforce premarket requirements ([1]). In contrast, any AI that makes clinical recommendations, diagnoses, or treatment decisions is almost always a regulated SaMD.

FDA’s Current Oversight Framework for AI/ML SaMD

Total Product Lifecycle Approach: In January 2021, the FDA published its Artificial Intelligence/Machine Learning (AI/ML)-Based SaMD Action Plan ([18]). The plan commits to a lifecycle approach: rather than only premarket review, the FDA will continue to monitor real-world performance and allow planned modifications. Key elements include: pre-submission of an AI “Algorithm Change Protocol” (later termed Predetermined Change Control Plan) describing how the device will learn post-clearance ([18]) ([7]); development of Good Machine Learning Practice (GMLP) standards; greater transparency to patients (algorithmic disclosure); and better methods to assess bias and real-world performance ([18]).

Recent Guidance and Drafts: Building on the Action Plan, FDA issued several guidance documents in 2021–2024. For example, FDA guidance on clinical decision support (CDS) clarified which algorithms are not devices ([1]), and related guidance addressed when modifications to AI SaMD require new submissions. In August 2024, FDA released a draft guidance on "Predetermined Change Control Plans for Medical Devices" ([7]). Although not yet finalized, the draft recommends that manufacturers specify, in a premarket submission, the types of algorithmic changes (e.g. retraining methods, update frequency) that may be implemented postmarket without a new 510(k). This approach officially acknowledges the mutable nature of AI algorithms and streamlines updates.

Postmarket Monitoring: The FDA also emphasizes postmarket data collection for AI devices, especially ones that “learn” from use. The Action Plan calls for performance monitoring to detect algorithm drift, bias in new populations, or cybersecurity events. In practice, FDA’s summaries show relatively few AI SaMD currently include postmarket surveillance plans explicitly, but this is an area of active development ([19]). The agency has signaled that future requirements may include routine reporting of real-world performance and even the use of FDA’s soon-to-be-expanded National Evaluation System for health Technology (NEST) for SaMD.

International Harmonization: The FDA participates in IMDRF and is moving toward global harmonization for AI SaMD. Notably, in January 2026 the FDA withdrew its own 2017 guidance "Software as a Medical Device (SaMD): Clinical Evaluation" ([21]), hinting at plans to adopt newer international standards. Similarly, FDA's approach to AI/ML SaMD shares similarities with frameworks in Europe and other jurisdictions (e.g. risk-based regulation under the EU MDR, and China's recent AI SaMD classification guidelines ([22])). For now, however, FDA continues to require each device sponsor to meet U.S. requirements and to map its product to the relevant U.S. classifications and standards.

Trends and Data on AI/ML SaMD Authorizations

A number of recent studies have quantified the FDA-authorized AI device landscape, providing insight into how FDA classification and pathways have been applied. For example, an analysis of FDA summaries found that in 2024 all 168 AI/ML-enabled devices were Class II ([5]). Radiology devices dominated (74.4%), with cardiovascular imaging next (6.5%) ([5]). Of the 168, 159 (94.6%) were cleared via 510(k) ([4]). Most of these were "traditional" 510(k) submissions; only ~13% used the Special 510(k) program, reflecting that AI SaMD typically involve novel technology requiring comprehensive review ([23]). De Novo grants accounted for 5.4% (nine devices in 2024) ([4]). No Class I or Class III authorizations were reported.
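
The pathway shares are easy to sanity-check from the counts reported above (a trivial check, assuming the cited totals of 168 devices, 159 510(k) clearances, and nine De Novo grants):

```python
total, k510, de_novo = 168, 159, 9  # 2024 counts cited above

assert k510 + de_novo == total        # the two pathways account for all devices
print(f"510(k): {k510 / total:.1%}")     # -> 94.6%
print(f"De Novo: {de_novo / total:.1%}") # -> 5.4%
```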

Earlier data corroborate these trends. By late 2024, over 1,000 AI/ML-enabled devices had FDA authorization ([16]) ([17]). These devices span multiple clinical domains (imaging, cardiology, neurology, etc.) ([24]). A large majority are software-only (SaMD); some incorporate sensors or wearables but still focus on AI analysis. The historical review by Singh et al. (2025) of 1,016 FDA authorizations noted that the FDA’s conventional class and product codes provide only a broad categorization, not capturing nuances of AI use ([25]). However, they reaffirm that the FDA continues to use traditional Class I/II/III categories.

Predicates and Innovation Lineage: Analysis of predicate networks shows that most AI SaMD cleared via 510(k) had at least one prior device as reference ([19]). Many used predicates that themselves were AI-capable (in 2024, 64.5% of predicates cited were AI-based ([26])). Interestingly, about one-third of new AI devices still relied on conventional (non-AI) predicates, suggesting that “AI upgrades” to existing tools are common. Median review times were shorter for 510(k) (~151 days) than for De Novo (~372 days) ([27]). This gap may incentivize using 510(k) when possible, even for novel algorithms (by finding clever predicate justifications).

Quality Metrics: Published reviews have also highlighted gaps in transparency. In 2024, only ~30% of AI SaMD summaries reported key performance metrics (sensitivity/specificity) ([19]), and Predetermined Change Control Plans were explicitly mentioned in only about 16–17% of summaries ([19]), suggesting adoption is still emerging. Where reported, aggregate performance (e.g. accuracy) often falls in the 70–90% range for imaging tasks, with variation by indication. Demographic representation in training/validation data is rarely fully disclosed (fewer than 20% of summaries provide complete data) ([28]). Meanwhile, the academic literature has begun mapping AI tools to FDA classifications: Lee et al. (npj Digital Medicine, 2025) classified each device by intended function (e.g. triage vs. diagnosis) and noted that FDA's class system does not easily convey these details ([24]) ([29]).
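
For readers unfamiliar with the metrics named above, sensitivity and specificity are simple confusion-matrix ratios. The snippet below shows the standard definitions; the counts are made up purely for illustration.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: share of actual positives the AI correctly flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: share of actual negatives the AI correctly clears."""
    return tn / (tn + fp)

# Illustrative counts for a hypothetical screening study
print(f"sensitivity = {sensitivity(tp=87, fn=13):.2f}")   # -> 0.87
print(f"specificity = {specificity(tn=450, fp=50):.2f}")  # -> 0.90
```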

Case Studies and Examples

  • IDx-DR (Diabetic Retinopathy Detection): In 2018, IDx, LLC received FDA De Novo authorization for IDx-DR, the first autonomous AI diagnostic system authorized to operate without clinician interpretation ([8]). It analyzes retinal images and issues a binary refer/no-refer output for diabetic retinopathy. IDx-DR was classified under a new device type, "Diabetic Retinopathy Detection Device" (Product Code PIB) ([30]), resulting in a Class II designation with special controls defined in the De Novo order. Its authorization was based on a 900-patient pivotal study demonstrating safety and accuracy ([8]). The case highlighted FDA's willingness to accept reference standards established by trained human graders as "ground truth" for AI performance.

  • Viz.ai Intracranial Hemorrhage (ICH) Algorithm: In early 2024, Viz.ai announced FDA 510(k) clearance of Viz ICH Plus, an AI algorithm that automatically identifies, labels, and quantifies bleeds on non-contrast head CT scans ([9]). The clearance (under the radiological image-processing product code LLZ, with a predicate radiology device) confirmed FDA's Class II designation for such computer-assisted quantification. Viz.ai's platform uses deep learning and provides volumetric measurements of hemorrhages and midline shift to support stroke teams. This example illustrates a modern trend: computer-vision algorithms for image quantification fit squarely within the FDA's typical moderate-risk (Class II) profile.

  • Sepsis ImmunoScore (Prenosis): In April 2024, Prenosis received FDA De Novo authorization for the Sepsis ImmunoScore, an AI/ML SaMD that predicts sepsis risk by integrating labs, vitals, and demographics ([12]). Press coverage highlighted it as "the first-ever AI diagnostic tool for sepsis" ([31]) ([12]). Although the company's announcement did not state the device class, a De Novo grant implies Class II with special controls. This novel analytic tool addresses an extremely complex condition; its authorization underscores FDA's openness to AI decision support outside of imaging. Prenosis's success required demonstrating that the algorithm's outputs (risk scores) were meaningfully predictive of actual sepsis outcomes.

  • Lunit INSIGHT CXR Triage: In 2021, Lunit (a Korean AI vendor) received FDA 510(k) clearance for its INSIGHT CXR product, an AI-powered chest X-ray triage system ([10]). Specifically, FDA cleared it to flag emergent findings (e.g. pulmonary nodules, edema) and prioritize cases for radiologists. As a traditional radiology support tool, it was classified under conventional radiological device codes; the clearance pathway (510(k)) and press release confirm Class II status. Lunit's case illustrates how existing predicates in imaging enable AI entrants: the company claimed the first clearance "from its AI software lineup," building on decades of digital radiography precedent ([10]).

These examples demonstrate that the FDA views AI/ML SaMD through familiar lenses. In each case, the device’s risk was judged similar to established diagnostics in the same category. No AI SaMD has been placed into an unprecedented high-risk class solely due to its “AI-ness.” Instead, classification decisions hinge on clinical context and possible impact, consistent with Class definitions ([2]) ([1]).

Implications and Future Directions

Adaptation to Rapid Innovation: As AI technologies evolve (e.g. large language models, continual-learning systems), FDA classification will likely remain risk-based but with new nuances. Developers can expect FDA to demand robust validation across diverse patient populations and to monitor for algorithmic drift. The trend toward requiring upfront change protocols (PCCPs) means that future SaMD submissions must articulate not only current capabilities but also plans for safe adaptation. FDA review periods may lengthen if novel AI applications fall outside easy predicates, but historically the 510(k) route has dominated due to its comparative speed ([5]).

Quality and Transparency: The data suggest that many AI SaMD still lack complete performance transparency – only a minority report broad metrics or demographic breakdown ([19]). Under pressure (from regulators, payers, and clinicians), companies may increasingly publish full validation results. The FDA has signaled greater scrutiny of bias; new guidelines may require bias analyses for AI devices impacting health equity. Postmarket surveillance will become more formalized: FDA and manufacturers are expected to harness real-world data to ensure devices continue to “perform as expected” ([18]) ([7]).

Global Regulatory Alignment: International harmonization is likely. The FDA’s withdrawal of older guidance ([21]) and engagement with IMDRF-led SaMD standards indicate that U.S. policy may converge with the EU’s software regulations and emerging norms (e.g. IEC 82304-1, IMDRF SaMD documents). Companies should monitor global requirements: for instance, Europe’s Medical Device Regulation (MDR) classifies all non-wellness SaMD as at least Class IIa, and China and Japan have been issuing specific AI device guidelines. While this report focuses on the FDA, modern AI/ML SaMD developers often seek multi-region approval, requiring harmonized classification strategies.

Potential Shifts: New guidance expected in 2025–2026 may refine classification criteria. For example, if real-time learning (continuous adaptation) becomes widespread, FDA might introduce a separate subclass or modified requirements for that category. The Agency’s stated goal is not to impede innovation: recent Axios reporting confirms FDA proposals to streamline AI device review, including allowing certain updates without new review ([32]). If implemented, such policies would maintain devices as Class II but reduce regulatory burden. Conversely, high-profile failures (e.g. algorithmic errors harming patients) could prompt tighter controls. Overall, the future implication is clear: FDA classification of AI/ML SaMD will remain squarely risk-based, but the evidentiary bar (especially for transparency and postmarket monitoring) will rise.

Conclusion

The FDA classifies AI/ML-based SaMD according to the same risk-based framework as other medical devices: essentially all authorized AI/ML SaMD to date have been Class II (moderate risk) and reviewed via 510(k) or De Novo pathways ([5]) ([4]). Historical context shows this approach evolving: initial AI devices (like IDx-DR in 2018) set precedents, and continued FDA initiatives (Action Plan, digital health center, PCCP guidance) have refined how AI SaMD fit into regulatory classes. Data from recent reviews (2024) confirm the trend – imaging and diagnostic tools, not therapeutic closed-loop systems, dominate the AI SaMD landscape, with no Class III devices reported. Looking forward, FDA’s decision processes will likely remain anchored in traditional classification, but new policies (like formalizing learning protocols) will tailor the framework to AI’s unique challenges. In essence, the FDA’s classification of AI/ML SaMD emphasizes patient safety through risk-based controls, while seeking to accommodate the agility and innovation of AI technology.

References: Peer-reviewed analyses, FDA guidance and announcements, and industry data support all claims above ([6]) ([33]) ([5]) ([2]) ([17]) ([8]) ([9]) ([12]) ([7]), ensuring this overview reflects current, evidence-based understanding of FDA AI/ML SaMD regulation.
