By Adrien Laurent

FDA's AI Medical Device List: Stats, Trends & Regulation

[Revised March 12, 2026]

Executive Summary

The FDA’s AI/ML-Enabled Medical Devices tracker (a public listing of FDA-authorized AI/ML-equipped devices) reflects a rapid, recent surge in such technologies under regulatory review. Only a handful of AI-based devices existed at the start of the 2010s, but approvals have escalated dramatically in the last five years: for example, the FDA cleared 6 AI/ML devices in 2015 versus 295 in 2025 ([1]). By end-2025 the cumulative total reached 1,451 AI/ML devices ([2]) (driven largely by approvals in imaging and signal analysis). The majority of these devices reside in Radiology (approximately 76% of listings, or 1,104 devices) ([2]) ([3]); cardiology accounts for about 9%, with neurology, hematology, and other specialties making up the rest. Nearly all cleared AI devices have entered via the 510(k) pathway ([4]) ([5]), reflecting reliance on substantial-equivalence rather than costly clinical trials.

Despite growth in device count, multiple analyses caution that evidence gaps remain. For example, a 2025 study found that <2% of FDA-cleared AI/ML devices were supported by randomized clinical trials and most 510(k) summaries lack details on study design, sample sizes, and demographics ([6]). Only about 5% of AI devices experienced any post-market adverse event report, and 5–6% were ever recalled (primarily for software bugs) ([7]) ([8]). A separate 2025 npj Digital Medicine study analyzing 1,016 authorizations found that transparency around AI/ML model characteristics remains limited, with many submissions lacking detail on training data composition and algorithm type ([9]). These findings underscore the need for stronger life-cycle oversight (such as FDA’s Predetermined Change Control Plans for algorithm updates, finalized in December 2024) and improved transparency.

This report delves into the history, current state, and future implications of FDA’s AI/ML device tracker: reviewing the regulatory framework, analyzing authorization trends and statistics (with tables), examining exemplar devices, and discussing the challenges ahead — all supported by extensive references. We cover both optimistic industry/innovation perspectives and critical safety/ethical viewpoints. The FDA’s AI/ML devices list has become an essential resource, helping manufacturers, clinicians, and patients understand how AI is entering medical practice while highlighting where oversight must continue to evolve.

Introduction and Background

Artificial intelligence (AI) – broadly, software systems that mimic human intelligence – has been applied in medicine for decades, but recent advances in machine learning (ML) and data availability have accelerated its adoption in clinical devices ([10]). The FDA explicitly encourages development of innovative AI-enabled medical devices, provided they remain “safe and effective” ([11]). To foster both innovation and transparency, in 2019–21 the FDA launched a curated AI-Enabled Medical Device list, publicly cataloguing authorized devices that incorporate AI/ML ([12]). This tracker is updated periodically with each new clearance. Its purpose is to help stakeholders (manufacturers, researchers, clinicians, and patients) see which products use AI, understand regulatory expectations, and ensure that approved algorithms have undergone safety and effectiveness review ([13]).

Historical context: The very first FDA-cleared AI system was PAPNET (a Pap-smear rescreening tool) in 1995 ([14]), but few followed until the 2010s. As late as 2015, only half a dozen FDA devices used AI ([15]). Rapid growth began after about 2016: from 2016–2023 AI/ML device authorizations grew at an ~49% annual rate ([16]), reflecting both technological maturity and regulatory focus. By end-2023 the FDA’s tracker listed over 690 devices ([14]), growing to 950 by mid-2024 ([17]). Annual authorizations have surged year over year: 221 in 2023, 253 in 2024, and a record 295 in 2025 ([1]) ([2]). By the end of 2025, the cumulative total stood at 1,451 authorized AI/ML devices ([2]). Thus, AI/ML is no longer niche: these algorithms now assist in diagnostic imaging, ECG interpretation, laboratory analysis, and more.
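To make the growth figures concrete, the annualized growth implied by the article's own counts can be checked with a few lines of Python. This is an illustrative back-of-the-envelope calculation using the 2015 and 2025 annual counts cited above, not the methodology behind the cited ~49% figure (which covers 2016–2023):

```python
# Back-of-the-envelope annualized growth from the counts cited in this article.
# Illustrative only; the published ~49% figure covers 2016-2023 specifically.

def annualized_growth(start_count: float, end_count: float, years: int) -> float:
    """Compound annual growth rate between two yearly authorization counts."""
    return (end_count / start_count) ** (1 / years) - 1

# 6 AI/ML device authorizations in 2015 vs. 295 in 2025 (10-year span)
rate = annualized_growth(6, 295, 2025 - 2015)
print(f"Implied annual growth 2015-2025: {rate:.1%}")  # roughly 48% per year
```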

Definition of AI/ML devices: The FDA uses a broad definition: devices on the list must use AI or ML “for one or more functions integral to clinical care,” as standalone software or embedded in hardware ([18]). In practice, identification relies on keywords. The FDA notes that the list is not comprehensive – rather, it includes products whose FDA summary descriptions (or classification codes) contain AI-related terms ([19]). For each device on the list, the tracker provides a link to the FDA’s database entry, which includes releasable information like safety/effectiveness summaries ([13]). (These summaries, however, often omit detailed study data ([6]).) The list encompasses both early “Guidance” apps (flagging images or data for clinician review) and more autonomous AI tools. To promote future transparency, the FDA has signaled plans to tag devices built on modern “foundation models” (e.g. large language models) once they appear, so users will know if, for example, an image reader uses LLM-driven components ([20]) ([21]).
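Because the FDA identifies list entries by searching device summaries for AI-related terms, a rough analogue of that screen is easy to sketch. The term list and the example summaries below are illustrative assumptions, not the FDA's actual search criteria:

```python
import re

# Illustrative AI/ML-related terms; the FDA's full keyword set is not published.
AI_TERMS = [
    "artificial intelligence", "machine learning", "deep learning",
    "neural network", "convolutional", "natural language processing",
]
AI_PATTERN = re.compile("|".join(re.escape(t) for t in AI_TERMS), re.IGNORECASE)

def looks_ai_enabled(summary_text: str) -> bool:
    """Return True if a 510(k)/De Novo summary mentions an AI-related term."""
    return bool(AI_PATTERN.search(summary_text))

# Hypothetical usage with two made-up summary snippets
examples = [
    "The software uses a deep learning algorithm to flag suspected pneumothorax.",
    "The device measures blood pressure using an oscillometric cuff.",
]
print([looks_ai_enabled(s) for s in examples])  # [True, False]
```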

Regulatory oversight: AI/ML devices generally fall under the Software as a Medical Device (SaMD) paradigm. Most have been Class II devices cleared via the 510(k) pathway ([4]) ([5]), meaning the manufacturer demonstrated “substantial equivalence” to a predicate (pre-existing) device. Unlike PMA (premarket approval), 510(k) does not require new clinical trials, which has allowed faster entry of digital tools. The FDA’s Digital Health Center of Excellence (established 2020) has helped build expertise in evaluating these submissions. The FDA has issued a series of guidances specific to AI/ML devices. A Final Guidance on Predetermined Change Control Plans (PCCPs) for AI-enabled device software functions was published in December 2024, allowing manufacturers to pre-specify how algorithms will be updated post-market without requiring a full resubmission for each change ([22]). In January 2025 the FDA released a comprehensive draft guidance on AI-enabled device software functions applying a Total Product Life Cycle (TPLC) approach, covering submission recommendations, transparency labeling, and lifecycle management ([23]). In August 2025, the FDA collaborated with Health Canada and the UK’s MHRA to issue five guiding principles for PCCPs in ML-enabled devices ([24]). In 2025, 10% of AI/ML device clearances included PCCPs for iterative updates ([1]), signaling growing industry adoption. These steps reflect the FDA’s effort to balance faster innovation with patient protection, recognizing that AI tools may evolve over their lifecycle.

Regulatory Framework and the FDA AI/ML Devices Tracker

The FDA regulates AI/ML software under its existing medical device framework. In practice, many AI algorithms are deemed Class II (moderate-risk) and cleared via 510(k): indeed, by late 2023 roughly 97% of cleared AI/ML devices used 510(k) (vs. 2–3% via De Novo or PMA) ([4]). This mirrors general trends in medical software. The FDA’s stated requirement is simply that the device “meets applicable premarket requirements,” including a focused safety/effectiveness review ([13]). As an example, the FDA publicly notes that listed devices have passed a “focused review of the device’s overall safety and effectiveness, including evaluation of study appropriateness for the device’s intended use” ([13]). (However, FDA decision summaries often lack patient-outcome data ([6]).)

The influx of submissions has been brisk. Industry surveys and FDA reports indicate record volumes: 221 new clearances in 2023, 253 in 2024, and a record 295 in 2025 ([1]). As of December 2025, the FDA had authorized a cumulative 1,451 AI/ML devices since 1995 ([2]). These products span many categories, from “AI-assisted” imaging adjuncts to automated analyze-and-report systems. Radiology has traditionally dominated: 1,104 devices (76% of all FDA AI authorizations) address image analysis (X-ray, CT, MRI) ([2]). Cardiology (arrhythmia/ECG analysis) and neurology follow behind. In 2025, radiology accounted for 75% of new clearances, cardiovascular 8.8%, and neurology 4.7% ([1]) (Table 1). The remaining specialties are spread thinly across pathology, ophthalmology, gastroenterology, and others.

Specialty | Cumulative Devices (end-2025) | 2025 Share
Radiology | 1,104 (76%) ([2]) | 75%
Cardiovascular | ~130 (9%) ([1]) | 8.8%
Neurology | ~68 (5%) ([1]) | 4.7%
Hematology | ~29 (2%) | ~2%
Other (misc.)* | ~120 (8%) – various (incl. pathology, ophthalmology, gastroenterology) | ~9%

*Table 1: Distribution of FDA-cleared AI/ML medical devices by medical specialty (cumulative through end-2025, and 2025 annual share) ([2]) ([1]). “Other” covers all remaining specialties.

This skew toward imaging reflects data availability and early digital adoption in radiology ([25]). In contrast, fields like ophthalmology or dentistry have very few AI devices listed. Among companies, the largest medtech firms lead adoption. As of end-2025, GE HealthCare leads with 120 cumulative AI device authorizations, followed by Siemens Healthineers at 89, Philips at 50, Canon at 45, and United Imaging at 38 ([2]). Among AI-focused startups, Aidoc has 31 authorizations and DeepHealth 28. Notably, in 2025 alone, Shanghai United Imaging Healthcare led all manufacturers with 10 new clearances, while 183 manufacturers had only a single clearance, reflecting a thriving startup ecosystem ([1]). Table 2 summarizes cumulative device counts for top companies:

Company | Cumulative Devices (end-2025)
GE HealthCare | 120 ([2])
Siemens Healthineers | 89 ([2])
Philips | 50 ([2])
Canon | 45 ([2])
United Imaging | 38 ([2])
Aidoc | 31 ([2])
DeepHealth | 28 ([2])

Table 2: Top companies by cumulative FDA-cleared AI/ML devices through end-2025 (source: The Imaging Wire / FDA data) ([2]).

Data Trends and Analysis

The raw data behind the FDA tracker reveal clear trends. From only 6 devices in 2015, the annual count grew to 91 in 2022, then 221 in 2023, 253 in 2024, and a record 295 in 2025 ([1]). By July 2025 cumulative authorizations exceeded 1,200 ([26]), and by end-2025 stood at 1,451 ([2]). The 2025 cohort came from 221 unique manufacturers, with 62% classified as Software as a Medical Device (SaMD) and 63% diagnostic in nature ([1]). The FDA updates the list roughly quarterly, with the most recent entry dated December 30, 2025.
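Tallies like these can be reproduced from the FDA's downloadable device list. A minimal sketch follows, assuming the list has been exported locally as a CSV named ai_ml_devices.csv with a decision-date column called "Date of Final Decision" (both the filename and the column name are assumptions; adjust them to match the actual export):

```python
import pandas as pd

# Assumed local export of the FDA AI/ML-enabled device list.
# Filename and column name are assumptions; check the real download.
df = pd.read_csv("ai_ml_devices.csv", parse_dates=["Date of Final Decision"])

per_year = (
    df["Date of Final Decision"]
    .dt.year
    .value_counts()
    .sort_index()
)
print(per_year.tail(5))              # annual authorization counts, e.g. 2021-2025
print("Cumulative:", per_year.sum())  # should approach the ~1,451 cited above
```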

Almost all FDA-cleared AI devices are decision-support tools rather than fully autonomous diagnosticians. They typically flag images or data for clinician review. Consequently, safety signals in the tracker are subtle. Published analyses indicate that pre-market evidence is often limited: a 2025 JAMA study of 691 devices (through July 2023) found that 46.7% of FDA summaries did not even describe the study design, and 53.3% omitted the sample size ([6]). Astoundingly, only 6 devices (1.6%) cited a randomized clinical trial, and only 3 devices (<1%) reported actual patient health outcomes ([6]). Most decision summaries focus on analytical performance (e.g. sensitivity/specificity) rather than clinical benefit ([6]). In short, while device counts have soared, the quality of reported evidence remains spotty.

Post-market data are similarly sparse. Lin et al. report that only 28.2% of device summaries mentioned any safety assessment, and 5.2% noted adverse events after clearance (mostly malfunctions) ([6]). Over a follow-up period, 40 devices (5.8% of all AI devices) were ever recalled (totaling 113 recall actions) – typically due to software bugs or failures ([7]). There was even one reported patient death associated with an AI device in this dataset ([7]). Thus the official tracker – while counting cleared products – highlights that rigorous benefit-risk evaluation and post-market surveillance are still developing areas. The FDA itself acknowledges this in guidance, encouraging “Good Machine Learning Practices” (GMLP) such as representative training data, thorough validation, and ongoing monitoring ([27]).

The authorization pathway statistics reflect these realities. The vast majority of AI/ML devices — about 97% — are 510(k) clearances ([4]) ([5]), taking advantage of predicates to avoid new trials. Only 2–3% were cleared via De Novo (novel Class II without predicate), and ~0.4% via PMA (high-risk trials) ([4]). However, 2025 data show review timelines are stabilizing: the median clearance time was 142 days, with 24% of submissions cleared in under 90 days ([1]). Notably, novel devices without predicates have started to appear but remain rare. (For example, Aidoc’s lung CT algorithm and the IDx-DR retinal system initially required De Novo authorization). In sum, the tracker data show a massive influx of AI tools into the FDA pipeline – but also underline that most rely on equivalence to prior devices and do not bring new clinical trial evidence.
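Pathway shares like the ~97% 510(k) figure can be approximated from the submission numbers in the list, since in the FDA's public databases 510(k) clearances are generally prefixed with "K", De Novo authorizations with "DEN", and PMAs with "P". The sketch below builds on the hypothetical DataFrame from the earlier example and assumes a "Submission Number" column (an assumption to verify against the actual export):

```python
def pathway(submission_number: str) -> str:
    """Rough pathway classification from the FDA submission-number prefix.
    Edge cases (e.g. 'BK'-prefixed CBER clearances) are lumped into 'Other'."""
    sn = submission_number.strip().upper()
    if sn.startswith("DEN"):
        return "De Novo"
    if sn.startswith("K"):
        return "510(k)"
    if sn.startswith("P"):
        return "PMA"
    return "Other"

# Assumes the DataFrame from the previous sketch has a 'Submission Number' column.
df["Pathway"] = df["Submission Number"].map(pathway)
print(df["Pathway"].value_counts(normalize=True).round(3))  # e.g. 510(k) ~0.97
```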

Representative Case Studies

To illustrate how AI/ML enters the FDA framework, we highlight several notable devices from different domains:

  • IDx-DR (diabetic retinopathy screening): In April 2018, the FDA granted de novo authorization to IDx’s AI algorithm (IDx-DR) as the first autonomous AI to detect diabetic retinopathy from retinal images ([28]). Approval was based on a clinical study of ~900 patients; the software correctly identified retinopathy ~90% of the time, with low false rates ([29]). As IDx’s founder observed, “as the first of its kind…the system provides a roadmap for the safe and responsible use of AI in medicine” ([30]). This device exemplifies how companies can demonstrate real-world impact: by enabling primary-care offices to screen for eye disease, IDx-DR aims to extend specialist-level diagnosis to wider populations.

  • Caption Guidance (AI Ultrasound): Bay Labs (now Caption Health, acquired by GE HealthCare in 2023) built an AI assistant for cardiac ultrasound. Using deep learning to guide image capture, the Caption Guidance system was cleared via De Novo (DEN190040, Feb 2020 ([31])), after pivotal trials showing non-expert users could acquire cardiac images as effectively as sonographers. This illustrates AI being embedded in medical equipment, not just software: it augmented ultrasound hardware for cardiology and is now integrated into GE HealthCare's broader imaging portfolio.

  • Consumer Wearables (ECG Monitoring): The FDA has also approved AI-driven algorithms for consumer devices. For example, Apple’s Watch “AFib History” feature – which analyzes irregular pulse data to notify wearers of potential atrial fibrillation – received FDA clearance in 2022 alongside devices like AliveCor’s Kardia Mobile. These algorithms do not provide a diagnosis but alert users (and clinicians) to arrhythmias. The FDA list notes Apple’s AFib feature and Aidoc’s X-ray lung triage tool among 2022 clearances ([32]).

  • AI for Heart Sounds: Eko Devices received 510(k) clearance in January 2020 for its AI-enhanced stethoscope software (“Eko Analysis Software”) that analyzes heart sounds for murmurs and AFib ([33]). This represents AI applied at the bedside: Eko’s device collects audio data and uses ML to flag abnormal heart rhythms, extending this technology beyond imaging to auscultation.

  • MRI and CT Imaging (Radiology AI): Field leaders have introduced many such tools. For example, Philips’ “MRCAT brain” – an AI-based MRI tool for brain imaging – was FDA-cleared in Jan 2020 ([34]). Similarly, companies like GE, Canon, and Siemens routinely clear dozens of AI modules for CT/MRI machines (noise reduction, lesion detection, etc.) each year, as seen by their high device counts.

  • Aidoc CARE1™ Foundation Model (Feb 2025): In a landmark development, Aidoc received FDA clearance in February 2025 for a rib fracture triage solution built on its CARE1™ Foundation Model — the first FDA clearance of a foundation model-powered clinical AI device ([35]). Unlike narrow single-task algorithms, foundation models can be adapted to multiple clinical use cases, potentially shrinking development timelines “from years to weeks.” Aidoc is already developing CARE2™ and has received FDA Breakthrough Device Designation for a multi-condition CT triage tool spanning numerous acute conditions ([36]).

  • RecovryAI (Generative AI Chatbot): In late 2025, the FDA granted Breakthrough Device Designation to RecovryAI, an LLM-powered chatbot for patients recovering from joint replacement surgery — a first for generative AI in the device space ([37]). While not yet cleared for marketing, this signals the FDA's willingness to evaluate patient-facing generative AI tools.

These case examples show the spectrum of AI medical tools “in the wild”. They range from specialized image-analysis apps to patient-facing monitors. Across cases, the common thread is demonstration of safety/effectiveness (usually against expert review) and meeting specific clinical needs (screening, triage, documentation).

The emergence of foundation models in FDA-cleared devices marks a turning point. While most authorized AI devices still use narrow machine-learned algorithms trained on medical data, the FDA plans to tag devices that incorporate foundation-model technology (LLMs, generative systems) in future updates to the tracker ([38]) ([21]).

Discussion: Implications and Future Directions

The rapid growth of the FDA’s AI/ML tracker reflects both innovation opportunities and regulatory challenges. Transparency and Trust: By making the AI/ML device list public, the FDA signals its commitment to openness. Clinicians and patients can see when products use AI and find links to safety summaries ([13]). However, research shows that actual benefit-risk information is often poorly reported. For example, recent audits found very limited published evidence underpinning cleared devices ([6]). A significant implication is that the FDA (and developers) must improve standardized reporting. Calls have emerged for post-market surveillance systems akin to those used for drugs. Notably, Lin et al. conclude that “dedicated regulatory pathways and postmarket surveillance” are needed given current evidence gaps ([39]). The FDA is beginning to address this: new draft guidelines (2024–2025) emphasize performance metrics and continuous monitoring for adaptive AI ([40]) ([41]), while a recent FDA Health IT Strategy (2023) proposes using real-world performance data and registries for oversight.

Predetermined Change Control Plans (PCCPs): A major response is PCCPs, which the FDA finalized in December 2024 specifically for AI-enabled device software functions ([22]). Under this framework, manufacturers specify ahead-of-time how an AI algorithm will be updated and monitored, avoiding a full resubmission for each covered change. PCCPs must include three components: a description of anticipated modifications, a modification protocol with validation steps, and an impact assessment of how changes affect safety and performance. Adoption is growing: in 2025, 10% of all AI/ML device clearances included PCCPs ([1]). In August 2025, the FDA joined Health Canada and the UK’s MHRA to publish five guiding principles for PCCPs in ML-enabled devices, establishing an international framework ([24]). This shift signals a maturation in regulation — from static premarket review to life-cycle management of learning algorithms.
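For teams drafting a PCCP, the three required components map naturally onto a checklist-style record. The dataclass below is an illustrative internal-documentation sketch under that assumption; it is not an FDA template or a required format:

```python
from dataclasses import dataclass, field

@dataclass
class PredeterminedChangeControlPlan:
    """Illustrative record of the three PCCP components described in the
    FDA's December 2024 final guidance (not an official template)."""
    description_of_modifications: list[str] = field(default_factory=list)
    modification_protocol: list[str] = field(default_factory=list)  # validation/verification steps
    impact_assessment: str = ""  # how changes affect safety and performance

# Hypothetical example for an imaging triage algorithm
pccp = PredeterminedChangeControlPlan(
    description_of_modifications=["Retrain detection model on additional labeled CT studies"],
    modification_protocol=[
        "Re-run standalone performance study on a held-out test set",
        "Compare sensitivity/specificity against the cleared baseline",
    ],
    impact_assessment="Updates must not reduce sensitivity below the cleared performance bar.",
)
```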

Global and Ethical Context: The FDA’s approach sits within a broader international environment. In the EU, the AI Act entered into force in August 2024 and classifies AI-enabled medical devices as “high-risk” under Article 6(1) and Annex I, demanding rigorous evidence and compliance. Most high-risk AI obligations take effect in August 2026, with full compliance for medical device AI required by August 2027 ([42]). In June 2025, the EU Medical Device Coordination Group published guidance (MDCG 2025-6) clarifying the interaction between the Medical Device Regulation and the AI Act ([43]). Meanwhile, the FDA’s January 2025 draft guidance on AI-enabled device software functions introduced new transparency and labeling requirements, recommending that manufacturers include a clear statement that the device uses AI, details on model inputs/outputs, performance measures, and known sources of bias. The guidance also encourages use of “model cards” — structured reports of AI model characteristics ([44]). Ethicists continue to raise concerns about AI autonomy and bias: Arnold (2021) argued that AI in healthcare poses “libertarian paternalism” issues, challenging traditional autonomy and requiring active physician involvement in the discourse ([45]). Issues like data privacy, algorithmic fairness, and explainability remain center-stage.

Market and Reimbursement: From an industry perspective, AI devices represent a large and growing market. Analysts project a multi-billion-dollar market (one firm projects growth from $13.7B in 2024 to $255B by 2033) ([46]). However, meaningful clinical adoption hinges on reimbursement. Progress is being made: the CPT 2026 code set includes 288 new codes covering digital health and AI services, and CMS has expanded payment policies for digital mental health treatment devices ([47]). Congress has also proposed a dedicated Medicare reimbursement pathway for AI diagnostic devices ([48]). CMS has acknowledged that current practice expense valuation does not adequately reflect the cost of SaaS and AI-driven technologies. Hospitals are also forming AI oversight committees, as experts recommend ([49]). To succeed, AI device makers must not only clear FDA but also demonstrate cost-effectiveness and integration with electronic health records (EHRs).
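As a sanity check on the projection cited above, the implied compound annual growth rate from $13.7B (2024) to $255B (2033) works out to roughly 38% per year. A two-line calculation using only the figures quoted in this article:

```python
# Implied CAGR for the cited market projection ($13.7B in 2024 -> $255B in 2033).
cagr = (255 / 13.7) ** (1 / (2033 - 2024)) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 38% per year
```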

Foundation Models and Generative AI: The boundary between traditional narrow AI and generative AI in medical devices is now being actively tested. Aidoc's CARE1™ foundation model received FDA clearance in February 2025 — the first foundation-model-powered clinical AI to do so ([35]). Meanwhile, RecovryAI's LLM-powered surgical recovery chatbot received FDA Breakthrough Device Designation in late 2025, marking the first such designation for a generative AI medical device ([37]). The FDA itself acknowledges the challenge: as one STAT News report noted, "large language models' wide-ranging applications evade simple measures of safety and efficacy," challenging traditional device validation approaches. The FDA's tracker page explicitly mentions plans to "identify and tag medical devices that incorporate foundation models encompassing a wide range of AI systems, from large language models (LLMs) to multimodal architectures" ([38]). How the FDA will evaluate more autonomous generative AI devices — under existing medical device law or new AI-specific rules — remains to be seen, but early signals suggest the agency is engaging rather than waiting.

Continued Monitoring & Adaptation: In summary, the FDA AI/ML tracker shows a dynamic landscape. With 1,451 cumulative authorized devices through end-2025 ([2]), radiology imaging remains the dominant field (76% of all devices), but cardiovascular and neurology applications are growing steadily ([1]). The tracker also reveals a proliferation of devices without robust clinical evidence, underscoring the need for stronger governance. Going forward, the FDA is refining its approaches through finalized PCCP guidance, TPLC-based submission recommendations, international collaboration on AI principles, and new transparency/labeling requirements. The EU AI Act's high-risk obligations (effective August 2026–2027) will add another regulatory layer for companies marketing AI devices globally. Stakeholders should watch for how regulators tag emerging technologies (such as foundation-model and LLM-based devices) and adapt to the convergence of medical device regulation with broader AI governance frameworks.

Conclusion

The FDA AI/ML in Medical Devices Tracker encapsulates the state of AI-device innovation under U.S. regulation. It shows that AI is no longer a curiosity in medtech but a mainstream tool, with nearly 1,500 cleared devices now assisting diagnosis and workflow across specialties ([2]). The tracker’s transparency is valuable: by aggregating device names, clearance dates, and sponsors, it provides a data-rich resource for analysis. From this analysis, several lessons emerge. First, AI adoption is accelerating — 295 new authorizations in 2025 alone, with three of every four entries in imaging — yet the pace of clinical trial evidence has not kept up ([6]). Second, current regulation relies heavily on established pathways (510(k)), but is evolving towards life-cycle oversight (PCCPs, now included in 10% of 2025 clearances) to match AI’s adaptive nature ([1]). Third, ethical and safety experts warn that transparent reporting and robust monitoring are still lagging behind the hype ([39]) ([45]). Fourth, the emergence of foundation models (Aidoc CARE1™) and generative AI (RecovryAI) in the FDA pipeline marks a new chapter, as the agency develops frameworks to evaluate these more complex, adaptive systems.

As hospitals and patients increasingly encounter AI-powered tools, understanding the FDA’s AI/ML tracker is key. It not only lists approved products but also signals broader trends: where regulators are applying scrutiny and where gaps remain. Continued research – like the present report – must monitor this evolving landscape. Upcoming regulatory milestones — including finalization of the FDA’s TPLC guidance for AI devices, the EU AI Act’s high-risk obligations (August 2026), and the FDA’s planned tagging of foundation-model devices — will reshape the tracker and the compliance landscape. Stakeholders should use this resource as both a snapshot of what AI is already in use, and a guide pointing to where stronger evidence and governance will be required to ensure that these promising technologies truly benefit patient care.

References: Authoritative data and opinions in this report are drawn from FDA publications, peer-reviewed studies, industry analyses, and news sources. FDA guidance and device listings provide official definitions ([38]), while journals like JAMA Network Open, npj Digital Medicine, and Bioethics highlight evidence gaps and ethical issues ([6]) ([9]) ([45]). Industry analyses from The Imaging Wire, Innolitics, and MedTech Dive quantify trends ([2]) ([1]) ([3]). All claims above are supported by these sources.


