
FDA First AI cGMP Warning Letter: Manufacturing Lessons

Executive Summary

On April 2, 2026, the U.S. Food and Drug Administration (FDA) issued its first warning letter explicitly identifying misuse of artificial intelligence (AI) as a current Good Manufacturing Practice (cGMP) enforcement issue ([1]) ([2]). In the Purolea Cosmetics Lab warning letter (FDA 320-26-58), investigators found that the firm had used AI “agents” to draft drug specifications, procedures, and master batch records but failed to have qualified personnel review or approve those outputs ([2]). When confronted about fundamental cGMP violations – notably distributing products without required process validation (21 CFR §211.100) – the company’s personnel responded that “the AI agent … never told us [the validation] was required” ([2]). The FDA unequivocally rejected this defense. The warning letter reaffirms that existing law governs AI just as any other tool: all quality-affecting documents and decisions must be reviewed and approved by competent humans in the Quality Unit (21 CFR §211.22), and statutory requirements (like process validation) cannot be bypassed by blaming an AI system ([2]) ([3]).

This enforcement action was built on years of groundwork. FDA’s Center for Drug Evaluation and Research (CDER) had already signaled its focus on AI in manufacturing through discussion papers (March 2023) and workshops, and through draft guidances (January 2025 on AI model credibility) ([4]) ([5]). Parallel actions – notably a February 2025 FDA warning letter to Exer Labs (maker of an AI-based medical device) – also emphasized that AI increases the Agency’s scrutiny of quality systems rather than providing regulatory cover ([6]) ([7]). In short, the FDA’s message is clear: AI is a tool, not a substitute for the quality system. If a firm uses AI in any cGMP-governed process, every AI-generated output must enter the quality system only after human evaluation and sign-off ([2]) ([3]).

The lessons for the industry are profound. The Purolea case illustrates that delegating QA or regulatory knowledge to a black-box algorithm is unacceptable. Pharma and biotech companies – including all contract manufacturers, testing labs, and packagers – must immediately identify (“inventory”) all points where AI tools touch GxP processes ([8]). They must implement strict human-in-the-loop controls: every AI-derived protocol, specification, or log must be checked and approved by qualified personnel before use ([2]) ([3]). Quality agreements and supplier audits must now explicitly address AI usage (defining permitted uses, requiring disclosure, verifying compliance) ([9]) ([10]). Training programs must highlight AI’s pitfalls (outdated data, hallucinations, missing knowledge) so that staff do not assume these tools replace cGMP expertise ([11]). Companies should build auditable trails of AI outputs and human reviews, update standard operating procedures to incorporate AI governance, and include AI checks in all quality and management reviews ([8]) ([12]).

In summary, the “first AI cGMP warning letter” firmly establishes that fundamental enforcement standards have not changed – only the Agency’s lens has been expanded. This report provides a thorough analysis of that warning letter and its implications. We begin by reviewing the regulatory and technological background. We then dissect the Purolea enforcement action in detail, mapping each FDA finding to the underlying requirements (Table 1). We compare it with related cases (e.g. Exer Labs) and summarize expert commentary ([13]) ([6]). We present data and case studies on AI’s potential and risks in manufacturing, and discuss best practices and future directions. We conclude that AI can indeed benefit pharma manufacturing – but only under a robust quality system – and that the Purolea letter serves as a clear blueprint for FDA’s expectations going forward ([2]) ([14]).

Introduction and Background

Pharmaceutical manufacturing in the United States is governed by current Good Manufacturing Practice (cGMP) regulations (21 CFR Parts 210 and 211), which establish minimum standards to ensure that drug products are consistently produced and controlled for safety, identity, strength, quality, and purity ([15]). Under the Federal Food, Drug, and Cosmetic Act, failure to follow these rules renders drug products adulterated ([15]). Central to cGMP is the requirement that every quality-related activity be under the oversight of a qualified Quality Unit (QU). For example, 21 CFR §211.22(c) mandates that “the quality control unit shall have the responsibility and authority to approve or reject … all procedures or specifications … affecting the identity, strength, quality, or purity of a drug product.” Similarly, §211.100 requires written production and process control procedures, including validation protocols, to ensure that each batch meets its quality attributes and performance claims. In short, cGMP imposes a strict human-centric control framework: all manufacturing processes and records must be defined, implemented, and reviewed by authorized personnel in accordance with documentation and validation requirements.

Over the decades, the pharmaceutical industry has increasingly automated its operations (from paper records to computerized systems to advanced sensors) in line with initiatives like Process Analytical Technology (PAT) and Quality by Design (QbD). These programs encourage real-time monitoring, multivariate analytics, and risk-based control strategies. As a recent review notes, “a thorough understanding of GMP requirements combined with practical predictive performance are the foundations for successful [AI/ML] implementation” in regulated production ([16]). In practice, AI and machine learning have begun to find roles in manufacturing. Common pilot projects include advanced process control, soft sensors (e.g. using spectroscopy plus AI to estimate content in real time), anomaly detection in equipment, and even automated visual inspection of packaging ([17]) ([18]). AI is also used off-line for tasks such as yield optimization, formulation development, or quality risk analysis. Industry surveys and case studies show growing engagement: at FDA’s September 2023 workshop, about one-third of attendees reported actively using AI for process monitoring or quality analysis, and many planned to submit regulatory applications for products or processes involving AI ([19]). Moreover, consultancy reports emphasize the promise of AI: for example, the McKinsey Global Institute estimated that generative AI could unlock $60–110 billion per year in value for pharma and medical products through faster discovery, development, and operations ([20]).

These opportunities have prompted regulators to act. In March 2023, FDA’s CDER published a Discussion Paper on AI in Drug Manufacturing and invited feedback from industry ([4]). Later that year, FDA and industry groups hosted workshops on AI in pharmaceutical production. By early 2025, FDA had issued draft guidance on AI model credibility (for drug submissions) ([5]) and even collaborated with the European Medicines Agency to publish “Guiding Principles of Good AI Practice in Drug Development” (January 2026) ([21]). Similarly, EMA in July–December 2023 sought comments on a Reflection Paper on AI in the Medicinal Product Lifecycle ([22]). Professional bodies have also responded: in July 2025 the International Society for Pharmaceutical Engineering (ISPE) issued a new GAMP® Guide on Artificial Intelligence, providing a holistic framework for AI in GxP-regulated environments and emphasizing patient safety, product quality, and data integrity ([23]) ([24]).

Despite these policy and guidance efforts, FDA had not until 2026 publicly penalized any firm explicitly for AI misuse in manufacturing. The April 2026 warning letter to Purolea Cosmetics Lab is therefore precedential: it is the first time the agency carved out an enforcement deficiency section specifically on “Inappropriate Use of Artificial Intelligence” ([1]) ([25]). This report examines that letter in depth, placing it in the context of FDA’s evolving AI oversight and its potential impact on pharmaceutical quality systems. We will analyze the legal issues, the case facts, and the broader lessons for manufacturers, contract organizations, and regulators alike.

Regulatory and AI Guidance Landscape

Current cGMP Requirements

FDA’s cGMP regulations for finished pharmaceuticals (21 CFR 210/211) are technology-neutral: they do not specifically mention AI or machine learning. Instead, they articulate general principles – e.g. that equipment must operate within established limits, procedures must ensure product identity/strength/purity, and personnel must maintain complete and accurate records. For example, 21 CFR §211.100 requires written production and process control procedures; §211.22(d) requires that the quality unit’s responsibilities and procedures be in writing and followed; and §211.192 requires that batch production records be reviewed and approved by the quality unit. 21 CFR §§211.180–211.188 govern records and reports, insisting on legibility, contemporaneous entries, and a clear audit trail for any changes. The overarching theme is summed up in FDA’s longstanding guidance: “‘QUALITY SYSTEMS’ should be applied to every element of the manufacturing process” ([26]).

In the context of computerized or automated processes, FDA has long emphasized validation and control. For example, 21 CFR §211.68 requires that equipment used in manufacturing be routinely calibrated and inspected. Software used for production or testing must be validated as part of the quality system, and any electronic records or signatures it produces must comply with 21 CFR Part 11. Change control, training, and risk management elements (paralleling ICH Q9 and Q10) are woven throughout the laws and guidance ([27]). Importantly, regulations make no carve-outs for AI: if an AI-derived model or document is part of manufacturing, it is subject to the same validation, documentation, and review requirements as any other system.

FDA’s AI-Specific Initiatives

Recognizing the transformative potential of AI, FDA has taken a phased approach: first educating stakeholders, then issuing guidance, and now beginning enforcement. Key milestones include:

  • March 2023: FDA/CDER released the Discussion Paper: Artificial Intelligence in Drug Manufacturing ([4]). This paper outlined possible AI/machine-learning applications across manufacturing (from predictive maintenance to digital twins) and solicited comments on data governance, model validation, lifecycle management, and integration into pharmaceutical quality systems ([4]) ([28]). This signaled that FDA was studying AI issues but had not yet set strict rules.

  • September 2023: FDA and the Product Quality Research Institute hosted a public workshop on “The Regulatory Framework for the Utilization of AI in Pharmaceutical Manufacturing” (Sept 26–27) ([4]). Attendees included industry, regulators, and technologists. Surveys showed that ~30% of participants were already using AI in some capacity, and nearly half planned to engage FDA on future AI manufacturing plans ([19]).

  • September 30, 2024: Europe’s EMA published its final Reflection Paper on AI in the Medicinal Product Lifecycle ([32]), following a public consultation earlier in 2023. The paper articulates principles applicable to AI at all stages (R&D through post-market) and seeks international harmonization.

  • January 6, 2025: The Commissioner’s Office issued a press release on draft guidances (CDER’s AI Model Credibility and device-oriented AI guidances) ([5]). The proposed “Considerations for the Use of AI to Support Regulatory Decision-Making” featured a risk-based framework for model validation and encouraged industry to engage early. It underscored FDA’s expectation that AI models used to generate regulated information must be shown credible and well-controlled ([3]).

  • February 10, 2025: FDA’s Center for Devices and Radiological Health (CDRH) issued a warning letter to Exer Labs, Inc. (the “Exer Scan” AI device) ([29]) ([6]). This letter is noteworthy because it was one of the first FDA enforcement actions involving an AI-enabled product. While its primary focus was device regulatory scope (FDA found the Exer Scan was marketed beyond the limits of its 510(k) exemption), the letter also enumerated major quality system deficiencies – lack of design control procedures, no CAPA system, no complaint-handling SOPs, no auditing processes, etc. ([30]) ([31]). In other words, even for a medical device using AI, FDA emphasized that general quality system requirements still apply fully. Legal analyses of Exer note that “where classification is incorrect and quality compliance infrastructure is not adequately developed,” FDA will cite GMP violations ([6]).

  • July 2025: The International Society for Pharmaceutical Engineering announced its GAMP® Guide: Artificial Intelligence ([23]). This industry-led guidance provides a comprehensive framework for AI in GxP-regulated areas, stressing that innovation must be balanced with patient safety and data integrity ([23]) ([24]). It advocates “data as the backbone of AI systems” and alignment of stakeholders on best practices ([23]). While not a regulator, ISPE’s guide shows the industry’s recognition of the need to codify AI controls.

By January 2026, FDA and EMA jointly released ten “Guiding Principles of Good AI Practice in Drug Development” for industry ([21]). These principles are high-level (human-centric design, risk-based approach, data governance, etc.), aligning with earlier notions in ICH Q9/Q10 ([21]) ([27]). The arrival of these principles signaled FDA’s readiness to move from advice to action. Indeed, the Purolea warning letter came less than three months later, on April 2, 2026 ([2]).

Table 1 summarizes these and other key milestones in FDA’s engagement with AI in pharma manufacturing. It shows how FDA’s stance has progressed from exploratory discussion (2022–2024) to concrete enforcement (2025–2026). Throughout, the theme is clear: existing regulations apply fully to AI use, and quality systems must be equipped to govern new technologies.

Date | Action | Authority / Topic | Key Points / Citations
Jan 2023 | NIST publishes AI Risk Management Framework (RMF 1.0) | U.S. Dept. of Commerce (technology sector) | Voluntary framework for trustworthy AI; emphasizes governance, documentation, and accountability.
Mar 1, 2023 | CDER discussion paper: “AI in Drug Manufacturing” | FDA CDER (public comments) | Outlined AI use cases and challenges; invited industry feedback on data, governance, model validation, lifecycle, etc. ([4]) ([28]).
Sept 26–27, 2023 | FDA–PQRI Workshop on AI in Manufacturing | FDA & PQRI (stakeholder workshop) | Industry and regulators discussed AI applications; survey showed many firms exploring AI in QC and process monitoring ([19]).
July–Dec 2023 | EMA draft Reflection Paper on AI in the Lifecycle | European Medicines Agency (public consultation) | Proposed principles for AI/ML in human/veterinary medicinal products (workshop Nov 2023) ([22]).
Jan 6, 2025 | FDA draft guidance: AI model credibility (drug products) | FDA press release (CDER draft guidance) | Risk-based framework for AI in regulatory submissions; emphasized need for model validation and context-specific credibility ([5]).
Feb 10, 2025 | Warning Letter: Exer Labs, Inc. (AI medical device) | FDA CDRH (misbranding and QS violations) | Device marketed beyond its 510(k) exemption; extensive quality system failures cited (design control, CAPA, training, etc.) ([30]) ([31]).
July 29, 2025 | ISPE GAMP® Guide: Artificial Intelligence | Industry (ISPE publication) | Framework for AI in GxP: “holistic,” patient-safety centric; aligns AI with data integrity and existing GAMP/QMS practices ([23]) ([24]).
Jan 2026 | Guiding Principles of Good AI Practice (joint FDA/EMA) | FDA/EMA (white paper) | Ten principles for AI in drug development (human-in-the-loop, risk-based, data governance, etc.) ([21]).
Apr 2, 2026 | Warning Letter: Purolea Cosmetics Lab (AI in cGMP) | FDA CDER (first AI-specific cGMP enforcement) | Cited misuse of AI to draft GMP documents without review (21 CFR 211.22(c)); distribution without process validation (21 CFR 211.100) ([2]).

Table 1: Timeline of key AI-related regulatory guidance and enforcement (2023–2026). The progression from discussion papers to formal guidances to warning letters demonstrates FDA’s evolution from education to enforcement regarding AI in manufacturing.

AI Applications in Pharma Manufacturing

Given this backdrop, it is instructive to understand how AI is actually being applied in pharmaceutical manufacturing and quality systems. AI (including machine learning and generative models) can serve multiple roles:

  • Predictive analytics and maintenance: AI can analyze historical equipment data (via IIoT sensors) to predict failures. For example, companies report that AI-based predictive maintenance has reduced production downtime (and product loss) substantially ([33]). In one case study, integrating AI-driven monitoring of pumps and HVAC yielded a 25–30% reduction in unplanned downtime in a multi-facility pharma plant ([33]). Such systems must still be integrated with the Quality System: sensor inputs must be validated for accuracy, alarms and maintenance actions must be documented, and any model-derived recommendations become part of the production system under 21 CFR.

  • Process monitoring and control: Continuous manufacturing lines often use Process Analytical Technology (PAT) to adjust processes in real time. AI/ML can augment PAT by identifying subtle patterns (e.g. a drift in raw material quality or equipment behavior) that trigger alerts or adjustments. Stakeholders envision AI monitoring of complex biotech processes (e.g. cell culture bioreactors) to ensure they stay within design space ([18]). In closed-loop control (advanced process control), an AI module might recommend new set-points mid-batch. Here, cGMP demands that any such system be thoroughly qualified: the model must be validated at its domain of use (ICH Q2), changes controlled (21 CFR 211.100), and real-time data auditable (21 CFR 211.68, and Part 11 if computerized). Any instance where an AI model “interprets data” must still result in human-reviewed actions or overrides, unless the system itself is validated as a computerized control. For instance, one review notes that FDA has emphasized the need to translate AI model lifecycles into controlled lifecycles consistent with validation, data integrity, and change control ([27]).

  • Quality control and testing: In analytical labs, machine learning can expedite data interpretation (peak integration in chromatography, pattern recognition in spectra). AI may assist in method development by predicting formulation stability or assay limits. However, cGMP requires that any analytical method (AI-generated or not) be validated for specificity, accuracy, and precision. If AI tools are used to analyze results (e.g. flag out-of-spec data), the algorithm’s logic must be qualified and its outputs documented. Likewise, if generative AI drafts laboratory SOPs or deviation narratives, the quality unit must carefully vet their content (see Section 4).

  • Document drafting and data management: Perhaps the most direct analogy to Purolea’s scenario is generative AI for documentation. AI tools like large language models (LLMs) can rapidly produce text: draft SOPs, form templates, batch record entries, or even query regulatory databases. Users might also employ AI chatbots to answer compliance questions. While these tools can boost efficiency, they carry risks: “hallucinations” or outdated knowledge can introduce errors. FDA’s Purolea letter shows that without rigorous human oversight, using AI-generated documents can violate 21 CFR §211.22(c) (since the QU did not approve them) ([2]). (A minimal review-gate sketch appears at the end of this section.)

  • Supply chain and inventory: Some companies use AI to forecast demand or optimize inventory of raw materials and packaging. These decisions indirectly affect GMP (since raw material shortages can tempt firms to use non-qualified materials). Any AI-driven supply tool should integrate with supplier qualification procedures and be validated for accuracy (per 21 CFR 211.84 on components, for example).

In all these cases, the regulatory principle remains unchanged: the quality system owner is responsible for ensuring that data, models, and software outputs comply with GMP. If an AI model assists a function that would otherwise be done by humans, it must fit into the firm’s established procedures and validation regime. As one industry review puts it:

“Implementing AI in GMP environments requires translating model development practices into a controlled lifecycle that is compatible with validation, data integrity, and change control” ([27]).

Any AI tool used “for GxP” must meet a level of validation similar to traditional software and statistical tools. The FDA’s draft AI guidance notes that appropriate AI validation documentation should be integrated into quality system records and SOPs ([3]). In short, AI is not exempt from computer system and validation requirements; it merely introduces new technology into them.
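
To make the human-in-the-loop gate concrete, here is a minimal Python sketch of how AI-drafted documents might be held outside the controlled document system until a qualified reviewer disposes of them (see the document-drafting bullet above). Everything in it – the AIDraft record, the status vocabulary, the qu_disposition function – is a hypothetical construct of ours, not anything FDA or the warning letter prescribes; it simply illustrates the control being described.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIDraft:
    """An AI-generated draft awaiting Quality Unit disposition."""
    doc_id: str
    doc_type: str           # e.g. "SOP", "specification", "master batch record"
    tool: str               # AI tool name and version used to draft it
    prompt: str             # the input given to the tool
    content: str            # the raw AI output
    status: str = "DRAFT"   # DRAFT -> APPROVED / REJECTED
    reviewer: str | None = None
    reviewed_at: str | None = None

def qu_disposition(draft: AIDraft, reviewer: str, approved: bool) -> AIDraft:
    """Record a qualified reviewer's decision. Only an APPROVED draft
    may be promoted into the controlled document system."""
    draft.reviewer = reviewer
    draft.reviewed_at = datetime.now(timezone.utc).isoformat()
    draft.status = "APPROVED" if approved else "REJECTED"
    return draft

draft = AIDraft("SOP-014", "SOP", "LLM assistant v2", "Draft a cleaning SOP", "...")
qu_disposition(draft, reviewer="J. Smith, QU", approved=False)  # sent back for rework
```

The design point is simply that approval is an explicit, attributable state change: the AI output carries its provenance (tool, prompt) with it, and nothing downstream consumes it while it remains in DRAFT.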

Quality Agreements and Outsourced Manufacturing

A special concern arises in contract manufacturing and testing: FDA regards contractors as extensions of the original manufacturer ([34]). Under Agency policy, the Application Holder (drug sponsor) is ultimately responsible for cGMP compliance, even at a CDMO or CMO site. The Purolea letter reiterates this: firms retain quality accountability regardless of outsourcing ([34]). Adding AI on top of this dual-accountability creates new questions. For example: if a contract lab uses an AI tool to draft a release report or specification, the sponsor remains liable for that document’s compliance, but the contractor must also have cGMP systems to oversee it. If the AI generates a flawed test procedure that goes uncorrected and product quality suffers, which party is accountable?

Industry experts are already advising stakeholders to revise quality agreements accordingly. Any agreement between sponsor and CDMO/test lab should explicitly disclose AI usage in regulated activities and set clear controls. For instance, agreements might stipulate that AI is only used for “drafting assistance” and that all AI outputs will be reviewed and approved under the contract lab’s QU before release. Audit rights should cover examination of AI tools and logs. If AI-derived analytics are used in stability or OOS investigations, both parties should define who will validate the algorithms and sign off on conclusions. To our knowledge, no FDA guidance specifically addresses AI in quality agreements yet, so companies must rely on general GMP expectations and legal counsel.

Table 2 below lists the main CFR provisions cited in the Purolea warning letter and their AI-related implications. It highlights how fundamental requirements were at issue:

FDA Finding (Purolea) | CFR Requirement | Requirement Description | AI-Related Lesson/Implication
AI-generated documents not reviewed by Quality Unit. | 21 CFR 211.22(c) | QU must review and approve all procedures/specifications affecting quality. | Reliance on AI to draft SOPs, specifications, or records does not remove the QU’s responsibility. Every AI-generated quality-affecting document must undergo human review and approval ([2]).
Drug distributed without process validation. | 21 CFR 211.100 | Written production/process controls (including validation) required to ensure product quality. | AI does not “know” legal requirements. FDA held that lack of validation is a cGMP violation regardless of AI involvement ([2]). Manufacturers cannot presume an AI tool will enforce every regulation; they must ensure all fundamental controls are in place.
Quality Unit failed to establish/follow procedures; no QU oversight. | 21 CFR 211.22(a), (d); 211.100(a) | QU must exercise authority to ensure compliance, establish procedures, and enforce process controls. | These violations (e.g. no SOPs, no batch record review) reflect a systemic GMP lapse. The AI use was a symptom of an absent quality system. As one analyst noted, “the FDA didn’t write new rules for AI – [Purolea] failed because no quality system existed for the AI’s output to enter” ([35]).
Table 2: Key cGMP provisions cited in the Purolea warning letter and the lessons for AI usage. Even though AI was involved, FDA applied existing rules: the QU retains accountability under §211.22, and fundamental controls like process validation (§211.100) are mandatory regardless of any tools used ([2]).

The Purolea Cosmetics Lab Warning Letter in Detail

Company and Inspection Context

Purolea Cosmetics Lab, LLC is a California-based private-label manufacturer of skincare and “homeopathic” products. At the time of a routine inspection (Oct 28–30, 2025) by FDA’s Detroit District, the facility was producing over-the-counter homeopathic remedies marketed under labels such as “Dermveda Extra Strength Shingles Relief” and “Dermveda Extra Strength Ultra Genital Herpes Relief.” FDA investigators found alarming conditions: evidence of filth and pests in the manufacturing area, lack of contamination controls, and products intended to treat shingles/genital herpes sold without any approved New Drug Application ([36]) ([37]). The letter makes clear that these products were illegal new drugs (making disease-mitigation claims without safety/efficacy review), violating both the adulteration and misbranding provisions ([37]).

However, what gained wide attention in the letter was a separate “Inappropriate Use of Artificial Intelligence” section. According to the letter, Purolea’s management told inspectors that they had used one or more AI software agents to automatically create certain quality documents – specifically drug product specifications, manufacturing procedures, and master batch (production) records ([38]). The stated intention was to “help [the] firm comply with FDA regulations” by drafting these complex documents. But critically, the firm’s Quality Unit apparently did not adequately review these AI-generated outputs for correctness or compliance ([2]). Instead, when FDA asked why Purolea had not validated its manufacturing process prior to commercial distribution (a bedrock requirement under §211.100), the staff responded: “we were not aware of the validation requirement, because the AI agent we used never told us it was required.” ([2]).

This exchange, as recounted by FDA, is striking: it amounts to the firm admitting that it had delegated its regulatory knowledge to the AI. FDA sharply rejected this. The agency restated fundamental law: “You must conduct process validation… you are not excused just because an AI did not identify the requirement.” The warning letter then emphasized that if Purolea intends to resume manufacturing (it had already ceased production), any AI output must be reviewed and cleared by an authorized human in the Quality Unit ([39]). The letter explicitly invoked the FD&C Act (section 501(a)(2)(B)) and 21 CFR §§211.22 and 211.100 to warn: AI is a tool under the quality system umbrella, not a regulator or automaton.

In summary, the Purolea inspection uncovered two interrelated problems: (1) classical cGMP failures – e.g. insanitary conditions, lack of microbial testing, absent procedures and batch record review (discussed in secs. IV and V of the letter) ([36]) – and (2) misuse of generative AI as a crutch for a deficient quality system ([2]). The former made Purolea’s products outright adulterated; the latter drove home the regulatory message that technology cannot replace qualified oversight.

Violations Cited in the Warning Letter

The Purolea warning letter contains a laundry list of cGMP deficiencies (insanitary conditions, failure to test for microbes and identity/purity, missing validations, etc.). For brevity, we focus here on the violations pertinent to AI use and overall quality responsibility:

  • Failure of Quality Unit (21 CFR 211.22): FDA found that Purolea’s Quality Unit (QU) did not ensure basic GMP compliance. The QU had not established or followed procedures (211.22(d)), had not reviewed batch records prior to release (211.22(a)), and had failed to implement adequate production and process controls (211.100(a)). In the AI context, the letter specifically notes that using AI to draft SOPs, specifications, and master records did not absolve the QU of its duty under §211.22(c) to verify all such documents.

  • Inadmissible Delegation to AI: The letter’s “Inappropriate Use of AI” section plainly states: “If you use AI as an aid in document creation, you must review the AI generated documents to ensure they were accurate and actually compliant with CGMP. Your failure to do so is a violation of 21 CFR 211.22(c).” ([2]). This recalls the fundamental rule: the QU alone has the authority to approve procedures/specifications affecting quality, irrespective of whether an algorithm produced them.

  • Failure to Conduct Process Validation (21 CFR 211.100): FDA noted that Purolea shipped products without the required validation of its manufacturing processes. When FDA informed the firm of this lapse, the respondents cited the AI’s omission – that “the AI agent you used… never told you [validation] was required.” The warning letter flatly rejects this excuse. FDA’s position is unequivocal: process validation, “one of the most foundational requirements,” is required by law, and ignorance of the law (or of one’s AI tool) is no defense ([2]).

  • Adulteration and Misbranding: As mentioned, FDA also determined Purolea’s products to be adulterated (insanitary conditions and GMP failures) and marketed as unapproved new drugs (homeopathic products making disease claims) ([37]) ([36]). These findings, while not AI-specific, underscore the public health gravity of Purolea’s misconduct.

In essence, the violations cited (see Table 2) show that FDA treated the AI misuse as an extension of long-standing regulatory concepts. No new regulations were created; rather, FDA applied existing cGMP rules to the novel circumstance of generative AI. All cited violations (Quality Unit oversight, process control) have analogues in previous warning letters. The innovation in this letter is simply highlighting AI as the source of the problem, to make it clear that firms cannot hide regulatory lapses behind automation.

Analysis of the AI Enforcement Findings

FDA’s treatment of AI in the Purolea letter provides several key lessons for manufacturers:

1. The Quality Unit Remains Fully Responsible

As FDA reminded Purolea (and now the industry), the Quality Unit’s responsibility “doesn’t go away because an algorithm wrote [a document] instead of a person.” ([40]) ([2]). In practical terms, this means:

  • Human-in-the-Loop: Every AI-generated piece of work that affects quality (SOP, specification, investigation report, etc.) must be examined and signed off by a qualified person in the QU ([2]) ([13]). The letter’s language is plain: “any output or recommendation from an AI agent must be reviewed and cleared by an authorized human representative of your firm’s QU” ([39]).

  • No Novel Dispensation: There are no special exemptions for AI in the regulations. The FDA did not create an “AI rule”; it applied 21 CFR 211.22(c) (requiring QU oversight) to this case just as effectively as to any other. Thus, for compliance purposes, a company must treat generative AI like any other computer tool – with validation, access control, audit trails, and documented approval workflows.

  • Audit Trail and Accountability: FDA’s expectation is that the QU must be able to trace control of AI outputs. This implicitly includes maintaining records of which AI tool was used, the inputs/outputs, who reviewed them, what changes were made, and when they were authorized. In other words, the use of AI must itself be integrated into the firm’s record-keeping (consistent with 21 CFR 211.188 on batch production records). The warning letter suggests building such an “audit trail” for AI contributions to records ([12]).

If a firm does not currently document AI usage in its cGMP records, FDA may cite that as a deficiency in future inspections. Indeed, Hotha’s recommended step 6 is to “Build the audit trail”: list each AI tool used, its purpose, outputs, reviewers, and modifications ([12]).
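
As an illustration of what such an audit trail could look like, the sketch below appends one structured entry per AI interaction to an append-only log. The field names mirror the elements Hotha lists (tool, purpose, inputs/outputs, reviewer, modifications); the function name and JSON-lines format are our own assumptions, not an FDA-specified record layout.

```python
import json
from datetime import datetime, timezone

def log_ai_usage(logfile: str, *, tool: str, version: str, purpose: str,
                 prompt: str, output_ref: str, reviewer: str,
                 modifications: str, approved: bool) -> None:
    """Append one entry per AI interaction to a JSON-lines log.
    output_ref points to the archived raw output (stored separately),
    keeping the log compact while the full record stays traceable."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "version": version,
        "purpose": purpose,
        "prompt": prompt,
        "output_ref": output_ref,
        "reviewer": reviewer,
        "modifications": modifications,
        "approved": approved,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_usage("ai_usage_log.jsonl", tool="LLM assistant", version="2.1",
             purpose="draft deviation narrative", prompt="Summarize DEV-0042 ...",
             output_ref="records/DEV-0042-draft1.txt", reviewer="A. Chen, QU",
             modifications="corrected root-cause section", approved=True)
```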

2. AI Does Not “Know” cGMP Requirements

Purolea’s trouble was aggravated by an unfortunate assumption: they treated their AI agent as an infallible source of regulatory knowledge. This proved false. Large language models and other AI tools are trained on existing text/data and can be outdated or incomplete. FDA’s investigator noted bluntly that Purolea’s AI “never told [them]” about the process validation requirement ([2]), which shows the danger of outsourcing compliance awareness to an algorithm.

By placing such an argument on the record, FDA made a cautionary example: AI ignorance is not an excuse. Manufacturers must ensure that all legal requirements (validation, testing, etc.) are recognized through internal procedures and training, independent of whether an AI tool flags them. In practice, companies using AI should explicitly train their teams on the limitations of these models – e.g., that models may hallucinate, may be based on outdated regulations, or may not capture nuanced FDA expectations ([41]). FDA clearly expects firms to not rely on AI for legal/regulatory completeness.

3. Validation and Data Integrity

FDA’s approach to AI use is consistent with its existing stance on computerized systems: validation and data integrity are paramount. The DLA Piper analysis of the letter notes that FDA reiterated two principles: (a) AI used for GxP must be validated, and (b) validation documentation should be part of quality records ([3]). These principles echo earlier draft guidance on AI (Jan 2025) and general software guidance.

What does this mean? If a company uses an AI algorithm to, say, predict dissolution rates or flag deviations, the model must be qualified for that task. The vendor must show (per risk-based evaluation) that the model’s outputs are accurate and reproducible. Any changes to the model (e.g. re-training on new data) must go through change control. Data fed into the AI must come from reliable, auditable sources. This aligns with ICH Q10 Figure 2 (Pharmaceutical Quality System) and with the expectation that computer systems be validated (21 CFR 211.68) ([42]).

In practice, firms should treat AI software similarly to any other validated system. If an LLM is used for SOP drafting, one could require a formal verification step (e.g. cross-check with official regulations). If an AI tool is used for predictive analytics, firms could establish performance qualification and ongoing monitoring of the model’s accuracy. FDA’s strict view in the Purolea case – that AI enrichment does not replace validation – suggests that any AI implementation must be documented in protocols or SOPs, with evidence of testing and review archived in the quality system.
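
As one possible form of the formal verification step mentioned above, a firm could run an automated pre-check on every LLM-drafted SOP before routing it to the Quality Unit. The sketch below is purely illustrative: the required-section list would come from the firm’s own document-control SOP, and passing the check never substitutes for the human review itself.

```python
# Illustrative set of mandatory SOP sections; a real list would come from
# the firm's own document-control SOP, not from this example.
REQUIRED_SECTIONS = ["Purpose", "Scope", "Responsibilities",
                     "Procedure", "References", "Revision History"]

def precheck_sop(draft_text: str) -> list[str]:
    """Return the mandatory sections missing from an AI-drafted SOP.
    A non-empty result blocks routing to the Quality Unit; an empty
    result still requires full human review, never auto-approval."""
    lowered = draft_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]

draft = "1. Purpose ...\n2. Scope ...\n3. Procedure ..."
missing = precheck_sop(draft)
if missing:
    print("Draft blocked; missing sections:", ", ".join(missing))
```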

4. Outsourced Partners Must Comply

As noted, when a CDMO, testing lab, or contract packager uses AI, the drug sponsor cannot simply abdicate responsibility. FDA guidance reaffirms that “contractors [are] regarded as extensions of the manufacturer” ([43]). If a contracted facility employs AI (e.g. to draft client SOPs or analyze samples), the sponsor must ensure that this usage complies with cGMP just as it would if done in-house ([44]).

Hence: Quality Agreements and Audits need updating. Clients should require disclosure of AI tools in use. Auditors may begin asking, “Are you using any AI software for GMP work? How do you control and review its outputs?” A vendor that hides AI usage from customers risks later noncompliance findings. Both parties should clarify, in their Quality Agreements, whether AI is permitted, for what purposes, and under what controls (e.g. human review steps, data integrity records) ([10]).

In short, the regulatory lesson is one of control and transparency: integrating AI does not diminish the sponsor’s liability. It simply adds an extra layer that the contract must manage.

5. Industry Perspectives and Expert Commentary

Industry commentators have broadly echoed FDA’s enforcement stance. Dr. Kishore Hotha, a pharmaceutical quality consultant, summarized the letter’s import concisely: “AI is a tool. The quality unit retains accountability… Every AI-generated output used in cGMP activities must undergo human review before it enters the quality system.” ([45]). Similarly, DLA Piper’s lawyers emphasize that this warning letter “demonstrates the risks of overreliance on AI,” reminding companies that “AI adoption does not minimize… the manufacturer’s ultimate responsibility” under cGMP ([46]).

Some observers have noted that FDA’s message was not an AI “ban” but a reinforcement of existing GMP doctrine. Etienne Nichols (Greenlight Guru) aptly wrote that Purolea “wasn’t a story about AI gone wrong. It was a story about a company with no functioning quality unit, trying to use AI to fill the gap” ([47]). In other words, AI simply exposed deep procedural failures. Nichols points out that 21 CFR 211.22 has always required a qualified reviewer, and that the FDA “didn’t write a new rule for AI” – it just applied the same rule that has governed pharma and devices for decades ([13]). The implication is that companies should treat this letter as a clear guide to compliant AI use: “human review, qualified reviewer, documented approval, traceable record” is now explicitly FDA’s expectation for AI workflows ([48]).

At least one commentator cautions against a purely technical fix. Some in the industry suggest building “smarter AI architectures” (with electronic signatures, audit logs, etc.) to solve Purolea’s problem. But FDA’s action shows that the core issue was not the absence of digital controls, but the absence of any quality system scaffolding. As Nichols states: “Purolea didn’t fail because their AI architecture was wrong. They failed because no quality system existed for the AI’s output to enter.” ([35]). In practical terms, this means companies should not focus only on tech solutions; they must first ensure their document controls, SOPs, and CAPA processes are sound. Only then can AI be meaningfully layered on top.

From a legal perspective, the enforcement also clarifies for compliance officers and counsel that FDA will scrutinize AI use as part of routine inspections. The DLA Piper note advises companies to: document all human reviews of AI outputs, validate AI tools before deployment, update vendor contracts to address AI compliance, and train staff that “AI tools should never be treated as compliance shortcuts.” ([49]) ([50]). In short, multiple expert sources concur: the strategy must be proactive governance not reactive justification.

Case Studies and Real-World Examples

While the Purolea case is the first of its kind for drug GMP, similar issues have arisen elsewhere in life sciences:

  • Exer Labs (Device) – As noted, the February 2025 FDA warning letter to Exer Labs concerning its AI diagnostic scanner provides a complementary lesson. There, FDA emphasized that any AI-enabled device must still comply with basic Quality System Regulation (21 CFR Part 820) – design controls, CAPA, purchasing controls, audits, etc. ([30]). Although Exer’s case was principally about premarket approval, the quality findings are instructive: FDA expects every regulated entity, device or drug, to have core quality infrastructure in place if it uses AI. Exer Labs’ shift to broader AI claims (beyond its exemption) triggered enforcement, but the deeper message was the same as Purolea’s: you cannot market a life-critical AI product while neglecting quality fundamentals ([30]) ([31]).

  • Predictive Maintenance at “Company X” – Many pharmaceutical companies quietly use AI for equipment health. For instance, one plant reduced unplanned stoppages by ~30% using an AI system that predicted pump and HVAC failures ([33]). Importantly, the vendor case study notes that the solution was integrated in a GMP-compliant way: real-time sensor data fed into a validated machine-learning model, which then generated maintenance alerts logged into the company’s maintenance and quality records ([33]). This illustrates a positive use of AI: it improved quality (by preventing lost batches) while adhering to GMP (the actions were documented and reviewed).

  • AI in Supply Chains – Some firms leverage AI for supply/demand forecasting. A study reported that AI-driven demand planning helped a pharmaceutical distributor optimize inventory, reducing expiries and shortages. In that case, the AI recommendations were checked by supply-chain experts and then incorporated into formal procurement procedures and Quality Risk Management assessments, in line with GMP’s demand for material controls. While details are proprietary, such cases underscore that AI can support but not replace qualified decision-making in regulated supply chains. (These examples, while not publicly documented, reflect typical practices in life sciences.)

From these examples, a few practical principles emerge:

  • Validate Before Deploying: Any AI model used in a quality-critical function (process control, QC test, document drafting, etc.) should be evaluated before use, similar to validating a statistical tool. This may involve retrospective comparison with known outputs, or side-by-side testing with existing processes, to build confidence and documentation. (A minimal sketch follows this list.)

  • Integration into Quality Systems: Successful AI implementations build the AI workflow into the company’s existing systems. For example, AI predictions might generate a document or alert that then triggers a predefined SOP or investigation form. The chain of custody is recorded in audit trails as it would be for any other key decision.

  • Human Oversight and Expertise: In all cases, the AI augments rather than replaces human expertise. For instance, if an AI highlights a potential root cause in a deviation, a trained investigator still reviews and determines corrective actions. The technology may expedite tasks, but the final decisions – especially those affecting patient safety – are held by humans.
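
The “validate before deploying” principle above can be made concrete as a retrospective comparison: run the candidate model against historical cases with known outcomes and apply a predefined acceptance criterion before go-live. In the sketch below, the tolerance and pass-rate threshold are placeholders chosen for illustration; a real protocol would justify both on a risk basis.

```python
def retrospective_pass_rate(pairs: list[tuple[float, float]],
                            tolerance: float = 0.05) -> float:
    """Fraction of historical cases where the model prediction fell
    within the relative tolerance of the known, measured result."""
    hits = sum(1 for pred, actual in pairs
               if actual != 0 and abs(pred - actual) / abs(actual) <= tolerance)
    return hits / len(pairs)

# (predicted, measured) assay values from previously released batches
history = [(99.1, 99.4), (98.7, 98.5), (101.2, 100.8), (100.1, 99.9)]
if retrospective_pass_rate(history) < 0.95:  # protocol-defined acceptance criterion
    raise SystemExit("Model fails retrospective qualification; do not deploy")
print("Retrospective qualification passed; proceed to documented approval")
```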

Overall, real-world AI adoption in pharma is still in early stages, but growing. Surveys (e.g. FDA workshop participants and industry trend reports) show significant interest: one report found that 80% of young healthcare stakeholders are willing to use generative AI in life sciences, and companies are investing in AI-driven QMS modules ([51]). While detailed public case studies are limited, the breadth of pilot projects (from digital twins to AI chat assistants) confirms that pharma sees AI as a frontier of innovation ([28]) ([16]). The Purolea enforcement letter serves as a proof point that innovation now comes with heightened regulatory expectations.

Implications and Future Directions

The consequences of FDA’s actions and guidance on AI in manufacturing will unfold over time. Key implications include:

  • Corporate Strategy and Resource Allocation: The Purolea letter is likely to prompt boardroom and C-suite attention. Companies should evaluate their AI governance as part of overall quality risk management and may need to allocate resources (staff training, audits, data infrastructure) accordingly. In practical terms, quality departments may consider forming an “AI governance team” or incorporating AI checks into the internal audit calendar.

  • Regulatory Ripple Effects: Other regulators may follow suit. The EMA’s guidance agenda suggests that European regulators will hold firms to similar standards for AI in manufacturing. Globally accepted frameworks (e.g. WHO’s AI guidelines, or ISO/IEC AI standards under development) will likely echo the theme of human oversight and risk management. For example, ISO/IEC’s proposed standard on AI (still evolving) emphasizes accountability and transparency, which dovetail with FDA’s stance.

  • AI in Non-Mfg Areas: Although this warning letter was in a drug manufacturing context, the logic applies to any regulated activity where AI is used – clinical trials, labeling, pharmacovigilance, etc. Indeed, FDA’s 2025 draft guidance on AI for submissions covers models used anywhere in product development. Firms using AI in clinical data or regulatory filings should likewise ensure that each AI-generated finding is traceable and validated.

  • Standard-Setting and Guidance: Companies can expect new guidance from FDA and other bodies. FDA may issue more explicit AI recommendations (e.g. an annex to guidance on computerized systems, or updates to the Pharmaceutical CGMPs guidance). Industry groups (like ISPE and Parenteral Drug Association) will likely publish best practices. One possible development is a checklist for FDA inspections concerning AI (e.g. “Are any systems or sensors relying on AI? Show us the validation records”). Firms should not wait for regulators to demand disclosures; proactive adoption of AI governance is prudent.

  • Litigation and Liability: Although speculative, one could foresee legal dimensions. If an AI-assistance leads to a product quality defect, plaintiffs might claim negligence in relying on an “imperfect algorithm.” FDA’s stance provides a strong defense: the regulation itself (and its warning letter) emphasizes that the company – not the AI – is liable for the outcome. Nonetheless, legal teams will need to audit third-party AI vendors and define warranty/indemnity terms in contracts, as DLA Piper recommends ([52]).

  • Technology Evolution: As AI technology matures, new tools (e.g. domain-specific generative models trained on pharmaceutical texts) will emerge. FDA’s requirement of accountability is likely to persist. Even if future models claim built-in compliance (e.g. an “FDA-knowledgeable” LLM), firms must still independently verify such claims. It is conceivable that AI vendors in the regulated space will pre-build features like “audit logs” or “SOP validation modules” into their products, responding to this regulatory trend.

Checklist: Best Practices Going Forward

Based on the enforcement letter and expert recommendations, we summarize actionable steps (many echoed by FDA enforcement specialists and consultants) that manufacturers and their contractors should take immediately:

  • Inventory AI Use Cases: Identify every point in your quality or manufacturing workflow where an AI tool (including LLMs, neural networks, predictive analytics, etc.) is used or even considered. Review laboratory analytics, document generation, process controls, maintenance systems, supply chain tools, etc. (See Hotha’s Step 1 ([8]); a minimal inventory sketch appears after this checklist.)

  • Human Review Procedures: For each AI touchpoint, establish documented procedures requiring that all AI outputs are reviewed, modified if necessary, and formally approved by qualified personnel before they enter the controlled documentation or decision record. The reviewer should be knowledgeable enough to spot AI errors (e.g. an out-of-date spec or mis-modeled parameter). (Align these procedures with 21 CFR 211.22. Recall: a signature alone is not adequate review ([53]).)

  • Document AI Limitations: Train your staff to understand AI constraints. Highlight that models can produce plausible-sounding but incorrect information, and that they may not access up-to-date regulations or company-specific standards. This training should emphasize that it is the user’s responsibility to verify compliance requirements, not the AI’s.

  • Quality Agreement Updates: Immediately review existing quality agreements with CDMOs, CROs, labs and ask: “Do we know if you use AI in performing your contracted services?” Update those agreements to include: mandatory disclosure of AI use, permissible AI applications, required human oversight, and right to audit the partner’s AI governance procedures ([10]). Clearly assign who records and retains evidence of AI use and review. (If a contractor is currently using AI without disclosure, consider it an audit risk.)

  • Validation and Governance: Integrate AI into your existing validation/qualification framework. Before deploying any AI tool for a GxP purpose, perform a risk-based validation (data qualification, model performance testing, user acceptance). Treat model updates as changes requiring evaluation. Ensure that your change control and CAPA processes catch any issues arising from AI (e.g. a model drift that triggers unexpected deviations).

  • Audit Programs: Incorporate AI into internal and supplier audits. During inspections, investigators may explicitly ask about AI uses. Prepare by auditing how your teams use AI and whether controls are effective. Check that all AI-assisted documents are annotated and searchable (so that you can answer “Who reviewed this AI draft?” fast).

  • Build Evidence for FDA: FDA will expect documentation if AI is involved. Keep a log (electronic or paper) of AI usage: tool name/version, date, purpose, input data or prompt, output, reviewer name, and approval decision. If using an LLM like ChatGPT, consider saving the chat transcript plus any edits made manually. This “paper trail” will serve as evidence of control if FDA inspects.

  • Engage Consultants as Needed: FDA recommended hiring a qualified consultant to audit Purolea’s systems (21 CFR 211.34). Companies that see major AI integration might similarly seek outside expertise. Consultants with both quality and AI knowledge can perform a “six-system audit” under 21 CFR 211.34 (or a comparable review of process, documentation, controls, etc.) focusing on your AI use cases.
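
To make the inventory step at the top of this checklist tangible, below is a minimal sketch of an AI use-case inventory in code. The processes, tools, impact labels, and control names are invented for illustration; the point is that every touchpoint carries an explicit GxP-impact rating and a named human-review control, so gaps surface mechanically.

```python
# Hypothetical inventory: where AI touches a GxP process, its impact, controls.
ai_inventory = [
    {"process": "SOP drafting", "tool": "LLM assistant",
     "gxp_impact": "high", "controls": ["QU review and approval", "usage log"]},
    {"process": "chromatogram peak review", "tool": "ML classifier",
     "gxp_impact": "high", "controls": ["validated model", "analyst confirmation"]},
    {"process": "demand forecasting", "tool": "time-series model",
     "gxp_impact": "indirect", "controls": ["planner sign-off"]},
]

# Flag high-impact touchpoints that lack an explicit human-review control.
for item in ai_inventory:
    has_review = any(word in c for c in item["controls"]
                     for word in ("review", "approval", "confirmation"))
    if item["gxp_impact"] == "high" and not has_review:
        print("Gap found:", item["process"])
```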

These steps reflect existing quality management principles applied to new tools. They are not optional shortcuts but necessary precautions. As DLA Piper summed up: “It is an important reminder that incorporating a human-in-the-loop to oversee, review, and approve all AI-generated outputs is key; the failure to do so constitutes a cGMP violation” ([54]). Any company that treats AI as “a compliance shortcut” rather than a subject of compliance control will risk FDA citations.

Conclusion

FDA’s April 2026 warning letter to Purolea Cosmetics Lab marks a turning point: the agency has moved from discussing AI in manufacturing to penalizing its misuse. Fortunately, the letter itself is unambiguous and educational. It applies existing laws – 21 CFR §§211.22(c) and 211.100 – to the novel twist of generative AI, and in doing so reinforces age-old regulatory precepts: the quality unit cannot outsource its judgment, and fundamental process controls (like validation) cannot be delegated to a computer. The letter thus provides a clear “design specification” for compliant AI use: human review with qualified sign-off, rigorous documentation, integration into the quality system and audits.

Industry stakeholders should take this warning to heart. AI holds promise for pharmaceutical manufacturing – in fact, it is already delivering efficiency and insight ([17]) ([33]) – but it is not a panacea. The Purolea case teaches that “value without oversight is risk” ([55]). No AI algorithm (even a large language model trained on vast text corpora) can substitute for a living quality system staffed by skilled personnel, who understand the FDA regulations and the company’s own procedures. As one expert noted, the firm essentially inherited the AI’s “knowledge gap” wholesale, leaving a critical control absent ([56]).

Looking ahead, we anticipate FDA and other regulators will increase scrutiny of AI as part of routine inspections. Quality teams should be proactive: update processes now rather than wait for an inspectional finding. The FDA’s message is stark but constructive: keep using your brains. The technology may have advanced to generate draft procedures and spot patterns, but regulatory compliance still requires people thinking critically about every output. If organizations implement the governance steps outlined here (and in the letter’s guidance-like language), they can safely harness AI’s benefits while avoiding the pitfalls laid bare by this enforcement action.

In conclusion, the first AI cGMP warning letter underscores the immutable lesson that the tool never excuses the user. For pharmaceutical manufacturers and their partners, the takeaway is unequivocal: use AI wisely, but always keep the quality system at the wheel. Future FDA actions will likely bring more detail – perhaps formal AI guidances or Q&A – but the core principles will remain. By embracing those principles now, companies will be prepared to innovate under FDA’s watchful (but not hostile) eye.

References: All statements above are supported by cited sources, including the FDA warning letter ([2]) ([15]), industry analyses ([1]) ([3]), and research publications ([16]) ([4]), among others.
