By Adrien Laurent

FDA's AI Medical Device List: Stats, Trends & Regulation

Executive Summary

The FDA’s AI/ML-Enabled Medical Devices tracker (a public listing of FDA-authorized AI/ML-equipped devices) reflects a rapid, recent surge in such technologies under regulatory review. Only a handful of AI-based devices existed at the start of the 2010s, but approvals have escalated dramatically in the last five years: for example, the FDA cleared 6 AI/ML devices in 2015 versus 221 in 2023 ([1]). By mid-2025 the cumulative total exceeded roughly 1,200 AI/ML devices ([2]) (driven largely by approvals in imaging and signal analysis). The majority of these devices reside in Radiology (on the order of 75–80% of listings) ([3]) ([4]); cardiology accounts for about 10%, with neurology, hematology, and other specialties making up the rest. Nearly all cleared AI devices have entered via the 510(k) pathway ([5]) ([6]), reflecting reliance on substantial-equivalence rather than costly clinical trials.

Despite growth in device count, multiple analyses caution that evidence gaps remain. For example, a 2025 study found that <2% of FDA-cleared AI/ML devices were supported by randomized clinical trials and most 510(k) summaries lack details on study design, sample sizes, and demographics ([7]). Only about 5% of AI devices experienced any post-market adverse event report, and 5–6% were ever recalled (primarily for software bugs) ([8]) ([9]). These findings underscore the need for stronger life-cycle oversight (such as FDA’s new Predetermined Change Control Plans for algorithm updates) and improved transparency.

This report delves into the history, current state, and future implications of FDA’s AI/ML device tracker: reviewing the regulatory framework, analyzing authorization trends and statistics (with tables), examining exemplar devices, and discussing the challenges ahead — all supported by extensive references. We cover both optimistic industry/innovation perspectives and critical safety/ethical viewpoints. The FDA’s AI/ML devices list has become an essential resource, helping manufacturers, clinicians, and patients understand how AI is entering medical practice while highlighting where oversight must continue to evolve.

Introduction and Background

Artificial intelligence (AI) – broadly, software systems that mimic human intelligence – has been applied in medicine for decades, but recent advances in machine learning (ML) and data availability have accelerated its adoption in clinical devices ([10]). The FDA explicitly encourages development of innovative AI-enabled medical devices, provided they remain “safe and effective” ([11]). To foster both innovation and transparency, in 2019–21 the FDA launched a curated AI-Enabled Medical Device list, publicly cataloguing authorized devices that incorporate AI/ML ([12]). This tracker is updated periodically with each new clearance. Its purpose is to help stakeholders (manufacturers, researchers, clinicians, and patients) see which products use AI, understand regulatory expectations, and ensure that approved algorithms have undergone safety and effectiveness review ([13]).

Historical context: The very first FDA-cleared AI system was PAPNET (a Pap-smear rescreening tool) in 1995 ([14]), but few followed until the 2010s. As late as 2015, only half a dozen FDA devices used AI ([1]). Rapid growth began after about 2016: from 2016–2023, AI/ML device authorizations grew at a ~49% annual rate ([15]), reflecting both technological maturity and regulatory focus. By end-2023 the FDA’s tracker listed over 690 devices ([14]) ([4]), and projections suggested ~950 by mid-2024 ([16]). MedTech Dive confirmed this trend, reporting 221 new AI devices in 2023 alone (up from 91 in 2022) ([17]) ([1]). Thus, AI/ML is no longer niche: these algorithms now assist in diagnostic imaging, ECG interpretation, laboratory analysis, and more.

Definition of AI/ML devices: The FDA uses a broad definition: devices on the list must use AI or ML “for one or more functions integral to clinical care,” as standalone software or embedded in hardware ([18]). In practice, identification relies on keywords. The FDA notes that the list is not comprehensive – rather, it includes products whose FDA summary descriptions (or classification codes) contain AI-related terms ([19]). For each device on the list, the tracker provides a link to the FDA’s database entry, which includes releasable information like safety/effectiveness summaries ([13]). (These summaries, however, often omit detailed study data ([7]).) The list encompasses both early “Guidance” apps (flagging images or data for clinician review) and more autonomous AI tools. To promote future transparency, the FDA has signaled plans to tag devices built on modern “foundation models” (e.g. large language models) once they appear, so users will know if e.g. an image-reader uses LLM-driven components ([20]) ([21]).
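The keyword-based inclusion approach described above can be sketched as a simple text filter. This is an illustrative approximation only: the term list, device records, and K-numbers below are hypothetical, not the FDA's actual screening logic or data.

```python
# Hypothetical sketch of keyword-based screening: device summaries whose
# text contains AI-related terms are flagged for possible list inclusion.
# Term list and records are illustrative assumptions, not FDA criteria.
AI_TERMS = {"artificial intelligence", "machine learning",
            "deep learning", "neural network"}

def mentions_ai(summary_text: str) -> bool:
    """Return True if the summary text contains any AI-related term."""
    text = summary_text.lower()
    return any(term in text for term in AI_TERMS)

# Made-up example records: (submission number, summary excerpt)
summaries = [
    ("K201234", "Software uses a deep learning model to triage chest X-rays."),
    ("K205678", "Manual caliper tool for orthopedic measurements."),
]

flagged = [k for k, text in summaries if mentions_ai(text)]
print(flagged)  # → ['K201234']
```

A real pipeline would also have to handle synonyms and context (e.g. "AI" as part of another word), which is one reason the FDA cautions that the list is not comprehensive.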

Regulatory oversight: AI/ML devices generally fall under the Software as a Medical Device (SaMD) paradigm. Most have been Class II devices cleared via the 510(k) pathway ([5]) ([6]), meaning the manufacturer demonstrated “substantial equivalence” to a predicate (pre-existing) device. Unlike PMA (premarket approval), 510(k) does not require new clinical trials, which has allowed faster entry of digital tools. The FDA’s Digital Health Center of Excellence (established 2017) has helped build expertise in evaluating these submissions. In 2022–2023 the FDA issued new guidance specific to AI/ML devices – for example, a Final Guidance on Predetermined Change Control Plans (PCCPs) (Dec 2022) allows pre-authorized algorithm update protocols ([22]). In early 2023 the FDA released a draft guidance on submission content for AI devices, encouraging manufacturers to include robust performance data and consider PCCPs ([23]). These steps reflect the FDA’s effort to balance faster innovation with patient protection, recognizing that AI tools may evolve over their lifecycle.

Regulatory Framework and the FDA AI/ML Devices Tracker

The FDA regulates AI/ML software under its existing medical device framework. In practice, many AI algorithms are deemed Class II (moderate-risk) and cleared via 510(k): indeed, by late 2023 roughly 97% of cleared AI/ML devices used 510(k) (vs. 2–3% via De Novo or PMA) ([5]). This mirrors general trends in medical software. The FDA’s stated requirement is simply that the device “meets applicable premarket requirements,” including a focused safety/effectiveness review ([13]). As an example, the FDA publicly notes that listed devices have passed a “focused review of the device’s overall safety and effectiveness, including evaluation of study appropriateness for the device’s intended use” ([13]). (However, FDA decision summaries often lack patient-outcome data ([7]).)

The influx of submissions has been brisk. Industry surveys and FDA reports indicate record volumes: e.g. 91 new clearances in 2022 ([17]) and 221 in 2023 ([1]). As of July 2025, the FDA’s Device Center had logged over 1,200 AI/ML devices since 1995 ([2]). These products span many categories, from “AI-assisted” imaging adjuncts to automated analyze-and-report systems. For instance, Radiology has traditionally dominated: over 75% of FDA AI approvals address image analysis (X-ray, CT, MRI) ([3]) ([4]). Cardiology (arrhythmia/ECG analysis) and neurology follow. The tracker’s contents confirm this: as of October 2023, 531 of 692 devices (77%) on the FDA’s list were radiology tools ([3]) ([4]) (Table 1). By specialty, authorized devices (2023 data) include roughly 71 in cardiology (10%), 20 in neurology (3%), 15 in hematology (2%), and the rest spread thinly across other fields.

| Specialty | Authorized AI/ML Devices (2023) |
|---|---|
| Radiology | 531 (77%) ([4]) |
| Cardiology | 71 (10%) ([24]) |
| Neurology | 20 (3%) ([25]) |
| Hematology | 15 (2%) ([25]) |
| Other (misc.)* | 55 (8%) – various (incl. pathology, ophthalmology) |

*Table 1: Distribution of FDA-cleared AI/ML medical devices by medical specialty (data from FDA lists) ([4]). “Other” covers all remaining specialties.
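As a quick sanity check, the Table 1 shares can be recomputed from the raw counts cited in the text (531 + 71 + 20 + 15 + 55 = 692 devices as of October 2023):

```python
# Sketch: recomputing specialty shares from the Table 1 counts.
# Counts come from the text above; rounding to whole percent
# reproduces the percentages shown in the table.
counts = {"Radiology": 531, "Cardiology": 71, "Neurology": 20,
          "Hematology": 15, "Other": 55}

total = sum(counts.values())
shares = {name: round(100 * n / total) for name, n in counts.items()}

print(total)                # → 692
print(shares["Radiology"])  # → 77
```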

This skew toward imaging reflects data availability and early digital adoption in radiology ([26]). In contrast, fields like ophthalmology or dentistry have very few AI devices listed. Among companies, the largest medtech firms lead adoption: e.g. GE Healthcare had 58 AI devices authorized in 2023 (up from 42 in 2022), Siemens 40 (vs. 29), Canon 22 (vs. 17), and Philips 20 (vs. 10) ([27]). This large-company presence is consistent with established imaging portfolios. Table 2 summarizes top companies and year-over-year growth:

| Company | Devices (2022) | Devices (2023) | Change |
|---|---|---|---|
| GE Healthcare | 42 ([27]) | 58 ([27]) | +16 |
| Siemens Healthineers | 29 ([28]) | 40 ([28]) | +11 |
| Canon | 17 ([29]) | 22 ([29]) | +5 |
| Philips Healthcare | 10 ([30]) | 20 ([30]) | +10 |
| Aidoc (startup) | 13 ([31]) | 19 ([31]) | +6 |
| United Imaging | 6 ([32]) | 12 ([32]) | +6 |
| Viz.ai (startup) | 6 ([32]) | 9 ([32]) | +3 |

Table 2: Selected companies and number of FDA-cleared AI/ML devices in 2022 vs. 2023 (source: Kinahan/Univ. Washington) ([27]).

Data Trends and Analysis

The raw data behind the FDA tracker reveal clear trends. Figure 1 (not shown) would illustrate that FDA approvals of AI devices have roughly doubled every 2–3 years since 2016 ([15]) ([1]). Indeed, from only 6 clearances in 2015, the annual count grew to 91 in 2022 ([17]), with the cumulative total projected at ~1,100 by end-2024 ([15]). By May 2025, MedTech Dive reported roughly 950 total AI/ML clearances ([16]), and by July 2025 over 1,200 ([2]). The FDA itself updates the list roughly quarterly.
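The growth arithmetic behind these headline figures is straightforward. A minimal sketch, using the yearly counts cited above (6 clearances in 2015, 221 in 2023); note the ~49% annual rate cited earlier uses a different 2016–2023 baseline, so this computation will differ:

```python
# Compound annual growth rate (CAGR) of yearly authorizations,
# using counts cited in the text: 6 in 2015 -> 221 in 2023.
def cagr(start: float, end: float, years: int) -> float:
    """Annualized growth rate over `years` periods."""
    return (end / start) ** (1 / years) - 1

rate = cagr(6, 221, 2023 - 2015)
print(f"{rate:.1%}")  # → 57.0%
```

The point is not the exact percentage but that any baseline in this range implies roughly a doubling every two years, consistent with the trend described above.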

Almost all FDA-cleared AI devices are decision-support tools rather than fully autonomous diagnosticians. They typically flag images or data for clinician review. Consequently, safety signals in the tracker are subtle. Published analyses indicate that pre-market evidence is often limited: a 2025 JAMA study of 691 devices (through July 2023) found that 46.7% of FDA summaries did not even describe the study design, and 53.3% omitted the sample size ([7]). Astoundingly, only 6 devices (1.6%) cited a randomized clinical trial, and only 3 devices (<1%) reported actual patient health outcomes ([7]). Most decision summaries focus on analytical performance (e.g. sensitivity/specificity) rather than clinical benefit ([7]). In short, while device counts have soared, the quality of reported evidence remains spotty.
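The analytical performance metrics that decision summaries typically do report (sensitivity, specificity) are derived from a confusion matrix. A minimal sketch with illustrative counts — these numbers are hypothetical, not from any actual FDA filing:

```python
# Sensitivity and specificity from confusion-matrix counts, the kind of
# analytical (not clinical-outcome) metrics FDA summaries tend to report.
# tp/fn/tn/fp counts below are hypothetical reader-study values.
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: share of diseased cases the algorithm flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: share of healthy cases correctly passed."""
    return tn / (tn + fp)

tp, fn, tn, fp = 87, 13, 180, 20
print(f"sensitivity={sensitivity(tp, fn):.2f}, "
      f"specificity={specificity(tn, fp):.2f}")
# → sensitivity=0.87, specificity=0.90
```

High values on such metrics say nothing about downstream patient outcomes, which is precisely the gap the cited JAMA analysis highlights.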

Post-market data are similarly sparse. Lin et al. report that only 28.2% of device summaries mentioned any safety assessment, and 5.2% noted adverse events after clearance (mostly malfunctions) ([7]). Over a follow-up period, 40 devices (5.8% of all AI devices) were ever recalled (totaling 113 recall actions) – typically due to software bugs or failures ([8]). There was even one reported patient death associated with an AI device in this dataset ([8]). Thus the official tracker – while counting cleared products – highlights that rigorous benefit-risk evaluation and post-market surveillance are still developing areas. The FDA itself acknowledges this in guidance, encouraging “Good Machine Learning Practices” (GMLP) such as representative training data, thorough validation, and ongoing monitoring ([33]).

The authorization pathway statistics reflect these realities. As of late 2023, about 96–97% of AI/ML devices were 510(k) clearances ([5]) ([6]), taking advantage of predicates to avoid new trials. Only 2–3% were cleared via De Novo (novel Class II without predicate), and ~0.4% via PMA (high-risk trials) ([5]). Notably, novel devices without predicates have started to appear but remain rare. (For example, Aidoc’s lung CT algorithm and the IDx-DR retinal system initially required De Novo authorization). In sum, the tracker data show a massive influx of AI tools into the FDA pipeline – but also underline that most rely on equivalence to prior devices and do not bring new clinical trial evidence.

Representative Case Studies

To illustrate how AI/ML enters the FDA framework, we highlight several notable devices from different domains:

  • IDx-DR (diabetic retinopathy screening): In April 2018, the FDA granted de novo authorization to IDx’s AI algorithm (IDx-DR) as the first autonomous AI to detect diabetic retinopathy from retinal images ([34]). Approval was based on a clinical study of ~900 patients; the software correctly identified retinopathy ~90% of the time, with low false rates ([35]). As IDx’s founder observed, “as the first of its kind…the system provides a roadmap for the safe and responsible use of AI in medicine” ([36]). This device exemplifies how companies can demonstrate real-world impact: by enabling primary-care offices to screen for eye disease, IDx-DR aims to extend specialist-level diagnosis to wider populations.

  • Caption Guidance (AI Ultrasound): Bay Labs (now Caption Health) built an AI assistant for cardiac ultrasound. Using deep learning to guide image capture, the Caption Guidance system was cleared via De Novo (DEN190040, Feb 2020 ([37])), after pivotal trials showing non-expert users could acquire cardiac images as effectively as sonographers. This illustrates AI being embedded in medical equipment, not just software: it augmented ultrasound hardware for cardiology.

  • Consumer Wearables (ECG Monitoring): The FDA has also approved AI-driven algorithms for consumer devices. For example, Apple’s Watch “AFib History” feature – which analyzes irregular pulse data to notify wearers of potential atrial fibrillation – received FDA clearance in 2022 alongside devices like AliveCor’s Kardia Mobile. These algorithms do not provide a diagnosis but alert users (and clinicians) to arrhythmias. The FDA list notes Apple’s AFib feature and Aidoc’s X-ray lung triage tool among 2022 clearances ([17]).

  • AI for Heart Sounds: Eko Devices received 510(k) clearance in January 2020 for its AI-enhanced stethoscope software (“Eko Analysis Software”) that analyzes heart sounds for murmurs and AFib ([38]). This represents AI applied at the bedside: Eko’s device collects audio data and uses ML to flag abnormal heart rhythms, extending this technology beyond imaging to auscultation.

  • MRI and CT Imaging (Radiology AI): Field leaders have introduced many such tools. For example, Philips’ “MRCAT brain” – an AI-based MRI tool for brain imaging – was FDA-cleared in Jan 2020 ([39]). Similarly, companies like GE, Canon, and Siemens routinely clear dozens of AI modules for CT/MRI machines (noise reduction, lesion detection, etc.) each year, as seen by their high device counts.

These case examples show the spectrum of AI medical tools “in the wild”. They range from specialized image-analysis apps to patient-facing monitors. Across cases, the common thread is demonstration of safety/effectiveness (usually against expert review) and meeting specific clinical needs (screening, triage, documentation). Notably, none of the FDA-cleared devices to date rely on modern generative AI or LLMs ([40]). Current AI devices use narrow machine-learned algorithms trained on medical data. The FDA and industry are watching developments in large models: the agency plans to label any future devices that embed foundation-model technology ([21]), but as of now that frontier remains untapped in clinical devices.

Discussion: Implications and Future Directions

The rapid growth of the FDA’s AI/ML tracker reflects both innovation opportunities and regulatory challenges. Transparency and Trust: By making the AI/ML device list public, the FDA signals its commitment to openness. Clinicians and patients can see when products use AI and find links to safety summaries ([13]). However, research shows that actual benefit-risk information is often poorly reported. For example, recent audits found very limited published evidence underpinning cleared devices ([7]). A significant implication is that the FDA (and developers) must improve standardized reporting. Calls have emerged for post-market surveillance systems akin to those used for drugs. Notably, Lin et al. conclude that “dedicated regulatory pathways and postmarket surveillance” are needed given current evidence gaps ([41]). The FDA is beginning to address this: new draft guidelines (2024–2025) emphasize performance metrics and continuous monitoring for adaptive AI ([22]) ([23]), while a recent FDA Health IT Strategy (2023) proposes using real-world performance data and registries for oversight.

Predetermined Change Control Plans (PCCPs): A major response is PCCPs, which the FDA introduced to manage AI’s iterative nature ([22]). Under this framework, manufacturers specify ahead-of-time how an AI algorithm will be updated and monitored, potentially avoiding a full resubmission. PCCP pilot submissions are already underway. This shift signals a maturation in regulation—from static premarket review to life-cycle management of learning algorithms.

Global and Ethical Context: The FDA’s approach sits within a broader international environment. In the EU, the forthcoming AI Act (and Medical Device Regulation) will classify many AI-based tools as “high-risk,” demanding rigorous evidence and compliance. In the US, policy priorities (e.g. President Biden’s AI executive order, later rescinded by new administrations) highlight tensions between innovation and safety. Ethicists note concerns about AI autonomy and bias: e.g. Arnold (2021) argued that AI in healthcare poses “libertarian paternalism” issues, challenging traditional autonomy and requiring active physician involvement in the discourse ([42]). Indeed, issues like data privacy, algorithmic fairness, and explainability are now center-stage. Future device tracking may need to include metadata (e.g. demographic performance) to ensure equity.

Market and Reimbursement: From an industry perspective, AI devices represent a large and growing market. Analysts project multi-billion-dollar value (one firm projects $13.7B in 2024 to $255B by 2033) ([43]). However, meaningful clinical adoption hinges on reimbursement. Currently, there are few specific insurance payment codes for AI diagnostics. Congress and CMS are exploring new payment pathways for algorithmic care. Hospitals are also forming AI oversight committees, as experts recommend ([44]). To succeed, AI device makers must not only clear FDA but also demonstrate cost-effectiveness and integration with electronic health records (EHRs).

Case Study – Generative AI Horizon: A parallel future concern is generative AI. ChatGPT-style tools can interpret and generate medical text or imagery. While none are FDA-cleared for clinical use yet, they loom on the horizon. The FDA recognizes this: its tracker page explicitly mentions plans to flag devices using foundation models (LLMs, generative systems) ([20]). We anticipate that future updates may show entries for devices that incorporate generative algorithms (e.g. language-based diagnostic assistants). Such devices would straddle the line between medical device and general AI. How the FDA will evaluate these – under medical device law or new AI-specific rules – remains to be seen.

Continued Monitoring & Adaptation: In summary, the FDA AI/ML tracker shows a dynamic landscape. Its contents (over 800–1200 devices, depending on source ([2]) ([45])) highlight that radiology imaging is currently the “#1 field” for AI devices, but recent data suggest growth in cardiology, neurology, and others ([46]) ([47]). The tracker also reveals a proliferation of devices without robust clinical evidence, underscoring the need for stronger governance. Going forward, the FDA is likely to refine its approaches (via guidance on evidence and monitoring, PCCPs, and new performance-based requirements) while continuing to update the list. Stakeholders should watch for how regulators tag emerging technologies (such as LLM-based devices) and adapt to shifting policy (e.g. updated software guidance).

Conclusion

The FDA AI/ML in Medical Devices Tracker encapsulates the state of AI-device innovation under U.S. regulation. It shows that AI is no longer a curiosity in medtech but a mainstream tool, with hundreds of cleared devices now assisting diagnosis and workflow across specialties ([4]) ([1]). The tracker’s transparency is valuable: by aggregating device names, clearance dates, and sponsors, it provides a data-rich resource for analysis. From this analysis, several lessons emerge. First, AI adoption is accelerating – three of every four new entries are in imaging – yet the pace of clinical trial evidence has not kept up ([7]). Second, current regulation relies heavily on established pathways (510(k)), but is evolving towards life-cycle oversight (PCCPs) to match AI’s adaptive nature ([22]) ([23]). Third, ethical and safety experts warn that transparent reporting and robust monitoring are still lagging behind the hype ([41]) ([42]).

As hospitals and patients increasingly encounter AI-powered tools, understanding the FDA’s AI/ML tracker is key. It not only lists approved products but also signals broader trends: where regulators are applying scrutiny and where gaps remain. Continued research – like the present report – must monitor this evolving landscape. Future regulatory updates (e.g. special notifications for LLM-based devices, stronger performance guidelines) will be reflected in the tracker. Stakeholders should use this resource as both a snapshot of what AI is already in use, and a guide pointing to where stronger evidence and governance will be required to ensure that these promising technologies truly benefit patient care.

References: Authoritative data and opinions in this report are drawn from FDA publications, peer-reviewed studies, industry analyses, and news sources. For example, FDA guidance and device listings provide official definitions ([11]) ([13]), while journals like JAMA Network and Bioethics highlight evidence gaps and ethical issues ([7]) ([42]). Industry analyses and MedTech journalism help quantify trends ([15]) ([4]). All claims above are supported by these sources.

DISCLAIMER

The information contained in this document is provided for educational and informational purposes only. We make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability, or availability of the information contained herein. Any reliance you place on such information is strictly at your own risk. In no event will IntuitionLabs.ai or its representatives be liable for any loss or damage including without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from the use of information presented in this document. This document may contain content generated with the assistance of artificial intelligence technologies. AI-generated content may contain errors, omissions, or inaccuracies. Readers are advised to independently verify any critical information before acting upon it. All product names, logos, brands, trademarks, and registered trademarks mentioned in this document are the property of their respective owners. All company, product, and service names used in this document are for identification purposes only. Use of these names, logos, trademarks, and brands does not imply endorsement by the respective trademark holders. IntuitionLabs.ai is an AI software development company specializing in helping life-science companies implement and leverage artificial intelligence solutions. Founded in 2023 by Adrien Laurent and based in San Jose, California. This document does not constitute professional or legal advice. For specific guidance related to your business needs, please consult with appropriate qualified professionals.