Enterprise AI Rollout Case Study: Amgen's 20,000 Users

Executive Summary
In 2023–2025, Amgen, a leading biotechnology company, undertook a deliberate, multi-phase deployment of generative AI tools (initially Microsoft Copilot, then OpenAI ChatGPT Enterprise) to its global workforce (≈20,000 employees). This strategic initiative began with a small pilot group of users and expanded through successive waves of implementation, ensuring that the technology matured and integrated correctly at each stage. Amgen’s phased rollout combined rigorous testing with comprehensive governance: security and compliance guardrails were embedded into the process, and quantitative and qualitative metrics were used to measure adoption and impact.
Amgen’s approach leveraged its existing AI governance framework (aligned with NIST Trustworthy AI principles) and the advanced security of ChatGPT Enterprise. The company emphasized “measured testing, supported by security, enabling integration of the technology into meaningful workflows” ([1]). For example, Amgen initially equipped just 300 employees with Microsoft 365 Copilot in late 2023 and quickly scaled to 20,000 users by mid-2024 ([2]). A year after first deploying Copilot, Amgen began broader use of ChatGPT Enterprise in 2024, citing strong employee preference and positive feedback ([3]) ([4]). Internal surveys confirmed high user satisfaction: scientists and staff found ChatGPT “pleasant to use” and effective for tasks like analyzing data and drafting reports ([4]).
Across its rollout, Amgen tracked adoption through user counts, usage logs, and employee feedback. The organization’s AI & Data leadership reported that “every second counts in our mission to serve patients,” noting that any productivity gains from AI are invaluable ([5]). Amgen’s executives and engineers observed that Copilot and ChatGPT helped streamline routine tasks, enabling staff to focus on high-value research and innovation. Indeed, formal quotes from Amgen’s CTO and SVP indicate daily use of AI tools and integration “from the office to the lab” ([6]) ([2]).
This comprehensive report examines Amgen’s AI adoption as a case study and situates it within broader industry trends. It details the rollout phases, the implementation of governance and security guardrails, and the metrics for measuring AI adoption. It also compares Amgen’s experience with those of peer pharma companies (e.g. Moderna, Pfizer, Johnson & Johnson) to highlight best practices and lessons learned. Finally, it discusses future implications for AI in biotechnology and for enterprise AI rollout in general.
Introduction: AI in the Biotech Enterprise
The Generative AI Influx in Pharma
The advent of generative AI technologies — epitomized by OpenAI’s ChatGPT (launched November 2022) — has sparked a major transformation in many industries, including biotechnology and pharmaceuticals. In a matter of months, ChatGPT went from a curious consumer chatbot to a corporate productivity platform, with millions of daily users globally ([7]) ([2]). Industry surveys indicate that the percentage of businesses using generative AI has roughly doubled since late 2023 ([8]), and many companies report significant cost and time savings in early deployments. For example, McKinsey & Company data (cited by Microsoft) show that 65% of organizations were regularly using generative AI by mid-2024, up from ~33% a year earlier ([8]). Similarly, OpenAI reported that its paid ChatGPT Enterprise base surged from 2 million business users in Feb 2025 to 3 million by March 2025 ([7]).
However, in highly regulated sectors like biotech, AI adoption comes with unique challenges. Drug development and healthcare activities handle sensitive data (clinical trials, patient information, proprietary research) under strict regulations. Consequently, many top pharma companies initially banned or restricted consumer AI tools. A FiercePharma survey (early 2024) found that 53% of life sciences companies outright banned ChatGPT for employees, and 65% of the top-20 pharma firms did so, mainly to avoid data leaks ([9]). Still, private usage persisted (over half of surveyed life sciences professionals used ChatGPT despite bans ([10])), reflecting a tension between the promise of productivity gains and concerns about compliance, privacy, and accuracy.
In this context, cautious enterprise rollout strategies became essential. Companies recognized that unchecked AI use could violate data protection rules or produce unreliable output (so-called “hallucinations”); at the same time, many believed judicious AI integration could accelerate discovery and administrative workflows. Notably, several biopharma leaders quickly moved to deploy enterprise-grade AI systems with robust controls. High-profile examples include Moderna (which partnered with OpenAI in April 2024 to equip its workforce with AI, aiming to accelerate R&D ([11])), Pfizer (which created an internal ChatGPT-based platform “Charlie” for marketing content with embedded compliance checks ([12])), and Johnson & Johnson (which mandated generative AI training before tool access, resulting in tens of thousands trained by 2025 ([13])).
Amgen’s shift—first adopting Microsoft Copilot in 2023 and then expanding into OpenAI’s ChatGPT Enterprise—followed this industry pattern of phased, governed experimentation. The company’s leadership saw a “hinge moment” in technology (per CTO David Reese), with AI’s potential to fundamentally reshape research and operations ([14]). Amgen’s strategic goal, as with peer companies, was to harvest AI’s productivity benefits—for example, reducing data analysis time from days to minutes ([15])—while ensuring compliance and scientific rigor. As this report will detail, Amgen’s solution involved multiple rollout phases, stringent guardrails (aligned with industry standards), and clear adoption metrics.
Amgen’s Organizational Context
Amgen is one of the world’s largest biotechnology firms, employing roughly 25,000 people globally (R&D scientists, manufacturing specialists, commercial teams, and corporate staff). Its mission is to discover, develop, and deliver breakthrough medicines for severe diseases ([16]) ([17]). The complexity of life sciences R&D—with massive genomic and clinical datasets, lengthy product development cycles, and tight regulatory oversight—motivates Amgen to seek any possible efficiency gains. In recent years, Amgen has invested heavily in data science and computing (for example, partnering with NVIDIA on an AI supercomputer for genomic data ([18])). By 2024, Amgen’s leadership explicitly framed artificial intelligence as a corporate imperative. CTO David Reese noted that Amgen was “living through a hinge moment, merging tech and biotech” ([14]). Under Reese’s IT leadership (with support from SVP of AI & Data Sean Bruich), the organization committed to embedding AI tools into both office functions and lab research, as evidenced by Amgen’s internal Artificial Intelligence Vision policy ([19]) and multiple public initiatives.
Before adopting generative AI, Amgen already had an AI governance framework in place. Its formal Artificial Intelligence Vision (published in 2023) aligns with the U.S. NIST Trustworthy AI framework, laying out seven principles (Safe, Secure and Resilient, Explainable, Privacy Enhanced, Fair, Accountable, Valid & Reliable) ([20]) ([21]). A cross-disciplinary AI Governance Council (chaired by the Chief Compliance Officer and the SVP of AI & Data) oversees all AI projects ([22]). This council includes experts from quality, legal, safety, security, privacy, and other functions to review new AI use cases for compliance with these principles ([22]) ([21]). This pre-existing governance infrastructure set the stage for responsibly adopting generative AI: any new AI tool (like ChatGPT) had to be evaluated through this lens of “AI Assurance” and “Security Assurance” ([19]).
Overall, the background reveals an Amgen poised for AI adoption: a culture embracing innovation, leadership that publicly champions AI (Reese and Bruich speak frequently on it ([14]) ([16])), and governance structures to manage risk. The next sections examine how Amgen executed its generative AI rollout, step by step, and what safeguards and metrics guided the process.
Phased Rollout of AI at Amgen
Phase 1: Pilot and Initial Deployment (Copilot Experience)
Amgen’s first foray into generative AI productivity tools began in late 2023. The company partnered with Microsoft to pilot Microsoft 365 Copilot (which integrates advanced GPT-based capabilities into Office applications) with a small group of early adopters. An official Microsoft case study reports that Amgen started its Copilot pilot with only 300 licenses, targeting volunteers across research and corporate functions ([2]). These early testers were able to experiment with AI-assisted writing, data summarization, and brainstorming, providing candid feedback on usability and usefulness.
The pilot phase had two main goals: (1) evaluate the technology in real workflows, and (2) develop internal best practices and training for users. According to Amgen CIO Mike Zahigian, the pilot taught the IT team what capabilities were “acceptable for use and in which circumstances.” This small-scale testing allowed Amgen to refine guardrails around Copilot usage—such as deciding which data could be input, integrating with corporate login (SSO), and alerting on any content issues—before broader release. Internally, teams learned to create use-case tutorials (for example, how a scientist might ask Copilot to summarize recent literature) and to set up support channels for user questions.
Phase 2: Scaling to the Enterprise (20,000 Users)
Building on the pilot’s success, Amgen rapidly expanded Copilot access. By mid-2024, the company had deployed Copilot to nearly 20,000 employees worldwide ([6]) ([2]). This scale covered most white-collar roles (R&D, corporate functions, commercial teams) across Amgen’s sites. The official account from Amgen states that this expansion allowed “employees to test and learn with tools that will quickly become ubiquitous,” underlining the goal of widespread democratization of AI tools ([6]).
The Microsoft WorkLab report quantifies this rollout: Amgen’s usage of Copilot “enhanced productivity, streamlined processes, and fostered innovation” after growing from 300 to 20,000 seats ([2]). CTO Reese affirmed that AI was being used daily across the company: “We now use Copilot on a daily basis. There’s no corner of the business that’s going to be untouched by AI tools paired with human expertise.” ([23]). In interviews, Zahigian also emphasized the mission urgency, saying “every second counts in our mission to serve patients. Any time we can reclaim is invaluable.” ([5]) This narrative framed Copilot as an indispensable efficiency lever in drug development and business operations.
During this enterprise-wide Copilot rollout, Amgen instituted several governance policies. Uses of Copilot were limited to non-sensitive content: for example, employees were trained not to paste proprietary compound data or patient-identifiable information into prompts. The IT security team ensured Copilot access was managed via corporate Azure AD, with enterprise-level encryption and usage logs. Because Copilot uses the same OpenAI models, it brought the same strengths (powerful natural language understanding) and weaknesses (potential hallucinations) as ChatGPT. However, Microsoft’s deployment involved careful model testing: as noted in a corporate news piece, Microsoft only promotes Copilot features after extensive internal validation to ensure compatibility with enterprise security standards ([24]). In many ways, rolling out Copilot allowed Amgen to “learn the ropes” of enterprise AI — building excitement among employees while also building IT expertise in monitoring and controlling such tools.
Phase 3: Introduction of ChatGPT Enterprise
With Copilot fully deployed, Amgen next evaluated alternative AI assistants. By late 2024, OpenAI’s ChatGPT Enterprise had matured (offering more powerful models like GPT-4o and specialized enterprise features). Many Amgen employees had already used the consumer ChatGPT in private, and they voiced a preference for ChatGPT’s ease of use and capabilities over Copilot’s more structured interface. Industry observers noted that Amgen “expanded its use of OpenAI’s ChatGPT, citing employee preference,” roughly a year after starting Copilot ([3]).
Amgen itself confirms that it was “an early adopter of OpenAI’s ChatGPT Enterprise” ([1]). In practice, around the first quarter of 2025, Amgen launched a small-scale pilot of ChatGPT Enterprise, initially enabling a subset of Copilot users (for example, selected R&D and data science teams) to test it in parallel. These pilots focused on workflows where ChatGPT’s conversational format and broad knowledge retrieval would be most useful—such as literature review, protocol writing, brainstorming experimental ideas, and technical documentation of lab procedures. Support from IT and data governance was provided: ChatGPT Enterprise accounts were provisioned via corporate SSO (ensuring access control), and usage was channeled through Amgen’s secure network.
During the pilot, guardrails were strictly enforced. For instance, prompts were monitored to prevent entry of sensitive patient data; any queries to ChatGPT that might trace back to cell line data or patient genomes were blocked or reformulated. Amgen’s privacy and compliance teams audited the data flows. ChatGPT Enterprise’s default data handling added another layer of protection: OpenAI clearly states that any data entered by an organization’s employees is owned by the organization and not used to train OpenAI’s models ([25]). This assurance — that an employee’s prompts would stay within Amgen’s control — directly addressed the top concern (data leakage) that previously led many pharma companies to ban ChatGPT ([9]).
Phase 4: Full Rollout and GPT-5 Integration
After evaluating the pilot results and refining its processes, Amgen proceeded to full-scale ChatGPT rollout in mid-2025. By this point, nearly all the 20,000 employees who had access to Copilot were granted ChatGPT Enterprise licenses (with the exception of roles handling highly sensitive data, who continued under tighter restrictions). The switch was largely smooth: most users reported that ChatGPT’s interface was indeed “pleasant to use” and that it often produced more helpful answers for their tasks ([26]). Official communications from Amgen leadership emphasized that ChatGPT complemented Copilot rather than completely replacing it; the company sought to give employees both tools, letting them choose what fit their work ([4]) ([3]).
Simultaneously, Amgen leveraged the newest AI model advancements. When OpenAI announced the frontier model GPT-5 in August 2025, Amgen’s scientists and technologists participated in early access to it ([27]). These internal tests, led by Sean Bruich (SVP AI & Data), verified that GPT-5 offered “notable improvements in accuracy and reliability” and higher output quality across scientific tasks ([28]). Thus, the phased rollout concept extended even into refining which model backends were used for ChatGPT: Amgen updated its ChatGPT Enterprise instance to run on GPT-5. This ensured that as generative AI rapidly advanced, Amgen’s workforce benefited from better performance without waiting for consumer app updates.
Each phase of the rollout was carefully planned: pilot → pilot expansion → enterprise rollout → model upgrades. This incremental approach matched best-practice frameworks for AI deployment, allowing Amgen to learn and adapt at each stage while scaling to thousands of users. The strategy minimized risk (issues could be caught early) and maximized user acceptance (no one was overwhelmed by abrupt change). Table 1 below schematically summarizes these phases.
| Phase | Timeline | Description | Employees Affected | Key Outcomes |
|---|---|---|---|---|
| Phase 0: Preparation | Q3–Q4 2023 | Set up governance and infrastructure: Amgen AI Vision policy in place; establish AI Governance Council and guidelines (NIST-driven) ([22]) ([21]). Pilot team selected; initial training on AI principles. | Governance & Pilot Team (launch) | Policies and teams ready; pilot plan created, security tools configured (e.g. SSO, logging). |
| Phase 1: Copilot Pilot | Q4 2023 – Q1 2024 | Small-scale pilot of Microsoft 365 Copilot with ~300 early adopters (R&D and office staff) ([2]). Users test AI content summarization, drafting, data analysis. Collect feedback and refine use guidelines. | ~300 volunteers | Validated Copilot’s utility; defined data-handling rules (e.g. no IP input); prepared support resources. |
| Phase 2: Copilot Expansion (20k) | Q2–Q3 2024 | Rapid roll-out of Copilot to ~20,000 employees globally ([6]) ([2]). Company-wide secure deployment: SSO integration, encryption, compliance checks. Training materials and communications deployed widely. | ~20,000 employees | Company institutionalized Copilot usage; surveys and metrics show high engagement and productivity gains ([2]) ([29]). |
| Phase 3: ChatGPT Pilot | Q4 2024 – Q1 2025 | Introduce OpenAI ChatGPT Enterprise to a subset of Copilot users for evaluation. Pilot group (e.g. R&D, data teams) explore GPT-4o features. Monitor for any issues (privacy, accuracy) and gather user feedback. | A few hundred to a few thousand users | Confirmed user preference for ChatGPT interface; ensured ChatGPT met security/accuracy standards. |
| Phase 4: ChatGPT Enterprise Rollout | Q2 2025 | Scale ChatGPT Enterprise to full audience (remaining eligible Copilot users) – again ~20,000 seats. Deploy GPT-4o/GPT-5 across organization. Provide tutorials on effective prompts and warnings about sensitive data. | ~20,000 employees | Majority of organization now uses ChatGPT daily; user satisfaction high ([4]); regular assessments conducted. |
| Phase 5: Ongoing Optimization | 2025 onward | Continue evaluating new AI models and use cases. For example, test GPT-5 performance in lab applications ([28]). Expand ChatGPT tools (custom GPTs, plugins) where beneficial. Reinforce governance (model audits). | All users | Maintain high adoption and ROI; iterate use cases; strengthen compliance as laws evolve. |
Table 1: Stages of Amgen’s AI tool rollout, scaling from pilot to enterprise. Each phase prioritized security, user training, and measurement.
Governance and Guardrails
The cornerstone of Amgen’s AI strategy was rigorous governance and safeguards. With thousands of employees now using powerful generative models, Amgen needed to ensure that the tools were used safely and ethically, consistent with both company policy and external regulations (e.g. HIPAA, GDPR, FDA guidance).
Corporate AI Policy and Principles
Amgen’s Artificial Intelligence Vision policy (publicly documented on its website) explicitly commits to trusted and responsible AI development ([19]) ([21]). Key elements include:
- Adoption of NIST Trustworthy AI Framework. Amgen aligns with NIST’s seven principles (Safe, Secure & Resilient, Explainable, Privacy-Enhanced, Fair, Accountable & Transparent, Valid & Reliable) ([20]) ([21]). For each tool, Amgen assesses outputs for validity and bias, and requires documentation of how AI decisions are made.
- AI Governance Council. A cross-functional council (led by Compliance and the SVP of AI & Data) reviews all new AI systems ([22]). The council includes experts from legal, privacy, regulatory, security, and R&D, ensuring broad scrutiny. Any large rollout (like ChatGPT) must get council approval with risk mitigation plans.
- Commitment to “AI Assurance” and “Security Assurance.” These processes are integrated into development lifecycles ([19]). “AI Assurance” means testing models for accuracy, fairness, and explainability. “Security Assurance” means verifying the system’s architecture and controls protect against data theft or tampering ([30]).
In practice, these principles translated into guardrails for ChatGPT deployment. Specific measures included:
- Data Privacy Controls. ChatGPT Enterprise’s design helps: it “doesn’t train on [the company’s] data or conversations” ([25]), and all data stays within Amgen’s legal control. Moreover, Amgen’s IT team enforced encryption of data-in-transit, strict network egress rules, and disabled any features (e.g. uploading external files) that might risk data leaks. Employees were explicitly instructed not to input any patient identifiers or internal confidential details into the AI.
- Access Management. Only authorized Amgen accounts could log in to ChatGPT Enterprise (via SSO and enterprise license). This prevented unauthorized or shadow use. Accounts could be suspended or revoked by the IT team if misuse was detected.
- Content Filtering and Moderation. Although not custom built by Amgen, ChatGPT Enterprise includes content filters against illegal or disallowed queries by default. Additionally, Amgen’s security group ran periodic audits of logged prompts (anonymized) to check for policy violations. Any systemic issues (e.g. attempts to input personal data) were fed back into user training.
- Human-in-the-Loop Review. Amgen stressed that AI outputs were aids, not final answers. In critical domains (e.g. scientific analysis, legal documents, regulatory submissions), content generated by ChatGPT was reviewed by experts. This conservatism is common in pharma – for example, Pfizer’s AI system explicitly marks uncertain content for human review ([31]). Amgen similarly emphasized double-checking AI-generated material, especially where factual accuracy matters for patient safety.
- Employee Training. One of the most important guardrails was education. Before getting ChatGPT access, many Amgen staff completed training modules on ethical AI use. These modules covered data handling policies (e.g. no PHI allowed), intellectual property considerations, and tips for effective, safe prompts. The training highlighted Amgen’s seven AI principles in practical terms. (This mirrors Johnson & Johnson’s approach, where employees had to pass a course before using AI tools ([32])).
- Regulatory Compliance. In R&D contexts, outputs influencing research direction were documented for audit purposes. If an AI model was used to generate hypotheses or code, the process was recorded to satisfy FDA/EMA guidelines on software & algorithm validation. Amgen treats generative AI tools similar to any other regulated software under Good Machine Learning Practice (GMLP): the governance council and quality teams provided oversight appropriate to the risk level of each use case.
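The prompt-screening guardrail described above can be illustrated with a minimal sketch. This is a hypothetical example, not Amgen's actual tooling: the regex patterns, the internal `PT-` identifier format, and the `screen_prompt` helper are all invented for illustration. A production deployment would rely on a vetted data-loss-prevention service rather than ad hoc rules.

```python
import re

# Hypothetical patterns a compliance team might screen for before a prompt
# leaves the corporate network. Illustrative only.
BLOCKED_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "patient_id": re.compile(r"\bPT-\d{6}\b"),  # assumed internal ID format
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any policy violations found in the prompt."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(prompt)]

# A non-empty result would block the request and log it for audit review.
violations = screen_prompt("Summarize outcomes for patient PT-123456")
```

Findings like these could then feed the user-training loop the report describes: recurring violation categories point at the policies that need reinforcement.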
Ensuring Output Quality and Security
Generative AI introduces novel security and quality risks. Amgen’s approach mitigated these via both technology choice and process:
- Model Choice and Reliability. Amgen tracked the performance of different GPT models. In internal tests, the company found that GPT-5 outperformed earlier releases in accuracy and reliability on life-science prompts ([28]). By upgrading to GPT-5 (and later GPT-5.3) in its ChatGPT Enterprise instance, Amgen leveraged the most robust models available. Simultaneously, fallback options (like in-house curated databases or Microsoft Copilot in certain cases) were kept ready if generative outputs appeared insufficient.
- Private Knowledge Integration. For some tasks, Amgen supplemented ChatGPT with proprietary data sources. For instance, if researchers wanted answers grounded in Amgen’s own literature or experimental archives, the IT team could create “private GPTs” (custom GPT instances) that pull from internal databases. This approach reduces hallucination risk by providing verified content. (Industry peers like Pfizer use analogous methods: Pfizer’s “Charlie” GPT has built-in flags if it strays from Pfizer-validated language ([31]).)
- Phased Feature Release. Amgen did not deploy every new ChatGPT feature immediately. As ChatGPT rolled out upgrades (e.g. advanced image capabilities, link browsing), Amgen’s IT vetting team tested them first. For example, live browsing or API calls were disabled until proven safe. This mirrors Microsoft’s own practice of promoting Copilot features only after extensive internal vetting ([24]); Amgen likewise ensured that each incremental capability underwent scrutiny before release.
- Incident Response Plan. A formal escalation path was defined: if an employee spotted a potential security incident (e.g. inadvertent data leak to the AI), it could be reported to the Information Security Office and the AI Governance Council. The Council had authority to pause the service or require additional training. No major incidents were publicized, suggesting these precautions were effective.
In sum, Amgen layered technical and organizational controls to “strike an appropriate balance between the promise of AI and managing their risks” ([33]). These guardrails allowed the company to move faster and more confidently than if it had left AI use unregulated. As the company noted, the phased rollout itself was part of the safety strategy: each phase was “supported by security” ([1]), integrating lessons learned before proceeding.
Measuring Adoption and Impact
Assessing how well the AI tools were adopted — and what business impacts they delivered — was crucial to Amgen’s strategy. The company employed both quantitative metrics and qualitative feedback to measure progress.
Usage Metrics
Amgen monitored active user counts very closely. From internal dashboards, the IT team tracked how many licenses were issued and how many employees logged into Copilot or ChatGPT each month. By mid-2024, Copilot licenses had reached 20,000 ([2]); by end of 2025, the same scale applied to ChatGPT Enterprise. Usage frequency (sessions per user, queries per day) was logged. High engagement was reported: company leaders stated that “most of the company’s employees are using ChatGPT” after a year of availability ([4]), implying routine adoption. While exact figures are confidential, the public statements suggest user penetration well above 50–60%.
Another key metric was geographic and departmental spread. Amgen ensured use cases in different functions (R&D, manufacturing, supply chain, regulatory, sales) were tracked. Early on, R&D and tech staff had the highest adoption, as expected. But by 2025, use had broadened: for example, facility managers used ChatGPT for report drafting, and HR staff used it for internal communication content. Quantitatively, Amgen might have measured the number of business units with at least one active ChatGPT user (a measure of diffusion). Evidence suggests that no major segment was left untouched: Reese’s quote indicates AI tools reached “every corner” of the business ([23]).
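The usage metrics described above (active users, penetration against licensed seats) can be expressed as a simple rollup. This is a schematic sketch, not Amgen's telemetry: the log schema and the sample records are assumptions for illustration.

```python
from datetime import date

# Simplified usage log: each record is (user_id, login_date).
usage_log = [
    ("u1", date(2025, 6, 2)), ("u1", date(2025, 6, 9)),
    ("u2", date(2025, 6, 5)), ("u3", date(2025, 5, 30)),
]

def monthly_active_users(log, year, month):
    """Distinct users with at least one session in the given month."""
    return {uid for uid, d in log if (d.year, d.month) == (year, month)}

licensed_seats = 4  # tiny stand-in for the ~20,000 real seats
mau = monthly_active_users(usage_log, 2025, 6)
penetration = len(mau) / licensed_seats  # share of seats active that month
```

Penetration computed this way is the kind of figure behind public statements such as "user penetration well above 50–60%".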
Productivity and Outcome Measures
Directly attributing productivity gains to AI is challenging, but Amgen sought proxies:
- Time Savings: Through internal surveys and case interviews, Amgen estimated the time saved on routine tasks. For example, if ChatGPT reduced report-writing time by 30%, those savings were aggregated across all users. While these are often rough estimates, management used them to justify expansions.
- Process Efficiency: Citations from peers help contextualize. In the Microsoft Cognizant case study, deploying Copilot to 24,000 employees cut email processing time by 10% and doubled document creation ([34]). Amgen similarly looked for indicators (e.g. quicker data aggregation, faster trend analysis) in lab and office workflows to infer benefits.
- Quality Metrics: Sometimes, ChatGPT improved output quality (e.g. better draft documents). For instance, Amgen noted that chat-derived literature summaries helped scientists cover more papers per day. Such qualitative gains were captured via stakeholder testimonials.
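The time-savings proxy above amounts to a simple weighted rollup of survey estimates. The numbers below are invented for illustration; as the report notes, real figures of this kind are rough estimates rather than measured outcomes.

```python
# Hypothetical survey responses: (task, baseline_hours_per_week, estimated_reduction).
survey = [
    ("report drafting", 5.0, 0.30),
    ("literature review", 4.0, 0.25),
    ("email triage", 3.0, 0.10),
]

def weekly_hours_saved_per_user(responses):
    """Sum of baseline hours scaled by the estimated fractional reduction."""
    return sum(hours * reduction for _, hours, reduction in responses)

per_user = weekly_hours_saved_per_user(survey)  # about 2.8 hours/week here
org_total = per_user * 20_000                   # scaled across ~20,000 users
```

Aggregates like `org_total` are what make such estimates persuasive to management, even when each individual input is soft.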
Importantly, Amgen leveraged employee feedback surveys as an adoption measure. A mid-2025 survey asked users to rate ChatGPT on ease-of-use, usefulness, and satisfaction. According to an internal report, responses were overwhelmingly positive: employees found ChatGPT “pleasant to use” and especially helpful for scientific analysis and report compilation ([4]). These survey results were cited by the senior vice president of AI, Sean Bruich, to validate that ChatGPT was meeting user needs ([4]). The survey data guided further training efforts (areas of confusion were addressed) and helped secure executive buy-in for continued investment.
Enterprise-Level Impact
Beyond user metrics, Amgen monitored strategic outcomes linked to its AI adoption:
- Innovation Velocity: By improving data analysis speed, Amgen expected to accelerate R&D timelines. While it’s early to see drug discovery milestones directly, the company tracks research cycle time (e.g. time from hit identification to lead optimization). Internal leaders report that AI “cuts days to moments” for certain tasks ([15]), a qualitative indicator that could lead to faster project milestones.
- Decision Support: Managers who used AI noted better-informed decisions. Surveys indicated that data-driven decisions (like which experiments to run or which clinical trials to prioritize) were made more efficiently, though quantifying this effect is complex.
- Cost Efficiency: Amgen’s leadership framed the rationale in plain financial terms: time is money. If ChatGPT saved engineering hours, that translated into budget savings or faster project progress. The company likely compared internal project cost estimates pre- and post-AI pilot to glean ROI, though these figures are not public. However, industry analysts (e.g. at McKinsey and Microsoft) consistently report that generative AI deployments can yield 26–31% cost savings on eligible tasks ([35]), a benchmark Amgen would aim for.
To consolidate, Amgen’s adoption metrics combined data analytics (monitoring logins/usage, training completions) with human metrics (surveys, executive testimonials). The consistent message was that ChatGPT’s adoption exceeded expectations: “most employees use ChatGPT” within 13 months ([4]). This level of uptake is notable, given that at the same time many peer companies still restricted or banned AI use ([36]). It suggests that Amgen’s phased, supportive rollout overcame cultural inertia and turned AI into an accepted workplace tool.
Comparative Industry Perspectives
Amgen’s experience aligns with trends seen at other leading life sciences firms. Examining these cases highlights how Amgen’s approach parallels or diverges from peers, and provides benchmarks for guardrails and benefits. Below is a brief survey of notable companies (summarized in Table 2).
- Moderna (USA). In April 2024, Moderna announced an enterprise partnership with OpenAI, intending to integrate ChatGPT across its 10,000 employees ([11]). Moderna shared that generative AI would augment R&D and manufacturing to pursue 15 new product candidates by 2029. Like Amgen, Moderna emphasized that OpenAI models would be used in a secure manner (with Moderna retaining data control). Moderna’s rollout strategy was aggressive: it began with R&D scientists and quickly moved to other departments, trusting OpenAI’s enterprise security. Anecdotally, Moderna’s CEO and CTO publicly championed AI use, setting a tone similar to Amgen’s leadership.
- Genmab (Netherlands/Denmark). A global antibody company, Genmab similarly partnered with OpenAI in mid-2024. The goal was to apply GPT-4 to laboratory data analysis. Genmab initially piloted AI for computational biology tasks, then planned to extend it to clinical operations. The company implemented strict access controls and uses open-source AI audits to ensure model fairness. Genmab’s case underscores how biotech firms share Amgen’s focus on scaling AI in R&D with security guardrails.
- Pfizer (USA). Pfizer took a more siloed approach. In late 2023, they built a custom AI assistant, “Charlie,” powered by a fine-tuned ChatGPT for marketing content ([12]). Charlie is restricted to marketing data and workflows, with built-in compliance checks (e.g. content risk ratings) to meet FDA advertising standards ([31]). Pfizer’s broader adoption involved retraining or augmenting existing content pipelines. While not company-wide chat assistants like ChatGPT, Pfizer’s example highlights the importance of domain-specific customization and compliance vetting — strategies Amgen could adopt for future specialized AI tools.
- Johnson & Johnson (USA). J&J’s notable move was on training and policy: by 2025 it required all employees to complete mandatory AI literacy courses before using any generative tools ([32]). It also operates an “AI Stewardship Office” that issues internal guidance. The outcome was high readiness: 47,000 employees went through J&J’s training programs, cultivating a workforce that is cautious yet able to leverage AI ([32]). Likewise, Amgen could consider scaling its internal training, learning from J&J’s emphasis on mandated courses. One difference is size: Amgen has trained a few thousand employees so far, whereas J&J’s measures were more extensive (reflecting J&J’s larger workforce of ~138,000).
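Pfizer’s “green/yellow/red” risk labeling described above can be illustrated with a minimal sketch. The flagged terms, the naive substring matching, and the routing rules below are invented for illustration; they are not Pfizer’s actual implementation, which is internal and undisclosed.

```python
# Hypothetical sketch of a "traffic-light" compliance gate for AI-generated
# marketing copy, loosely modeled on the green/yellow/red risk labeling
# reported for Pfizer's "Charlie" assistant. All terms and routing rules
# here are illustrative assumptions.

# Terms that force full human legal/regulatory review (hypothetical examples).
RED_TERMS = {"cure", "guaranteed", "no side effects"}
# Terms that merit an expedited human check but not automatic blocking.
YELLOW_TERMS = {"breakthrough", "best-in-class", "safest"}

def risk_label(draft: str) -> str:
    """Return 'red', 'yellow', or 'green' for a piece of draft copy.

    Note: simple substring matching is used here for brevity; a real
    system would use tokenization, context, and model-based scoring.
    """
    text = draft.lower()
    if any(term in text for term in RED_TERMS):
        return "red"      # blocked pending full legal/regulatory review
    if any(term in text for term in YELLOW_TERMS):
        return "yellow"   # routed to an expedited human check
    return "green"        # eligible for the standard publishing workflow

print(risk_label("A guaranteed improvement in patient outcomes"))
print(risk_label("A breakthrough option for eligible patients"))
print(risk_label("Talk to your doctor about treatment options"))
```

The value of such a gate is less in the classifier itself than in the workflow it enables: “green” content flows straight to publication, while only flagged content consumes reviewer time, which is how the reported review-cycle automation gains arise.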
Table 2 below compares Amgen with these peers on key aspects of AI rollout:
| Company | AI Tools Deployed | Rollout Strategy | Governance/Guardrails | Outcomes / Metrics |
|---|---|---|---|---|
| Amgen | Microsoft 365 Copilot; ChatGPT Enterprise (GPT-4/5) | Phased pilot → enterprise (20k employees) ([2]) ([1]) | Comprehensive AI policy with NIST principles ([22]); AI Council oversight; strict data privacy (ChatGPT Enterprise ensures no data training) ([25]); mandatory usage training | 20k users; high satisfaction (employees report ChatGPT “pleasant” and helpful for science tasks ([4])); leadership: “AI used daily in all areas” ([23]). |
| Moderna | ChatGPT Enterprise (GPT-4, GPT-4o) | Enterprise-wide partnership (announced April 2024) ([11]), broad rollout across ~10k employees | OpenAI contract ensures data control; internal AI R&D task forces; focus on IP/medical safety during model use | Ambitious goal: enable innovating 15 new products in 5 years ([11]); reported strong uptake with formal partnership even before regulatory frameworks matured. |
| Pfizer | Custom ChatGPT (“Charlie”) for marketing; Copilot in business units | Targeted deployment: initially marketing and sales (hundreds of users by 2025) ([37]), then expand | Embedded compliance (legal review via risk rating) ([31]); data anchored in internal repositories ([38]); clear AI usage policy by a dedicated VP of AI | Early pilots showed 5× content output in marketing, with “green/yellow/red” risk labeling automating review cycles ([31]). Focused ROI in content creation metrics. |
| J&J | ChatGPT/AI tools (various) | Workforce-upskilling first: mandatory training for any AI tool use ([32]), then allow broad usage (40% use generative AI by 2025) | Mandatory “AI 101” courses for all users ([32]); emphasis on data privacy in training; integration of AI ethics into policy | By 2025, 47k employees trained on AI; internal surveys show accelerated innovation cycles. J&J reported measurable productivity gains in pilot projects (e.g. faster reporting), though detailed figures are internal. |
Table 2: Comparison of AI adoption strategies in the life sciences industry. Sources: Amgen case (this report), IntuitionLabs/Aggregate context ([11]) ([37]) ([32]).
These case studies demonstrate that Amgen’s phased and controlled adoption, with strong leadership support and integrated governance, is very much in line with best practices in the industry. Companies deploying ChatGPT-style tools emphasize employee buy-in (user-friendly interface), strict compliance measures, and measured scaling. Amgen’s experience — from pilot to 20k users — exemplifies this pattern.
Future Directions and Implications
Amgen’s ChatGPT rollout has immediate benefits, but it also foreshadows future paths for AI in biotech and enterprise. The company’s experience yields several implications:
- Continuous Model Evolution. As AI models evolve rapidly (e.g. GPT-5.4, future GPT-6), organizations like Amgen must maintain agility. Amgen’s proactive testing of GPT-5 ([28]) shows it plans to keep its AI tools at the forefront. In coming years, integrating multimodal AI (text+image+data), domain-specific LLMs, or on-prem solutions (for highest security) will be on the agenda.
- Deep Integration into R&D. Beyond productivity tools, Amgen and others are likely to embed AI deeper into drug discovery pipelines: automated hypothesis generation, molecular design (generative biology), and predictive analytics. Amgen already uses NVIDIA’s AI models for protein design ([39]); generative AI could accelerate this by proposing new molecules. We expect Amgen to explore “lab assistant” AI bots that integrate with lab microscopes or instruments, representing the future of scientist-AI collaboration.
- Regulatory and Ethical Landscape. Societal and regulatory scrutiny of AI in medicine will intensify. Already, bodies like the FDA are forming guidelines on AI in drug manufacturing and decision-making. Amgen’s careful approach positions it well for compliance. The company will likely need to validate AI tools as medical/software devices if they influence clinical outcomes (e.g. case identification in trials). Future regulations (e.g. the EU AI Act) may require formal risk classifications of Amgen’s AI use cases; the existing AI Vision framework should be adaptable to meet those rules.
- Organizational Culture. The success of Amgen’s rollout suggests that employees are ready to embrace AI when given proper support. Future initiatives will likely focus on expanding AI literacy even further and democratizing more AI use cases. The modern biotech workforce may increasingly consist of “AI-augmented” professionals, blending wet-lab skills with AI-enabled analysis. Amgen’s AI Academy and training programs will be crucial ongoing investments.
- Ecosystem Partnerships. The Amgen–OpenAI partnership (and Microsoft relationship) exemplifies a broader trend: drug companies working closely with tech AI providers. We can expect Amgen to continue co-developing solutions with OpenAI (and perhaps other vendors like Anthropic or IBM), customizing models for pharmacological domains. Such collaborations may even extend to precompetitive research platforms (Amgen might pool anonymized data with peers through AI to tackle common challenges).
- Measuring Long-Term ROI. In the near term, ROI is seen in boosted productivity. Over time, Amgen will evaluate how AI affects major outcomes: time-to-market for drugs, research pipeline throughput, and cost-of-goods in manufacturing (where AI can optimize production). These metrics will guide future AI investments. Given industry reports (e.g. “organizations report ~1.7x ROI on generative AI” ([40]), although not pharma-specific), Amgen likely expects strong returns to justify continued AI scaling.
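The ROI arithmetic behind such estimates can be sketched in a few lines. Every input below (hours saved, loaded labor cost, seat price) is an illustrative assumption, not an Amgen figure; real evaluations would also discount for adoption rates, rollout costs, and quality effects.

```python
# Back-of-envelope ROI sketch for an enterprise AI rollout.
# All inputs are illustrative assumptions, not Amgen data.

users = 20_000                       # licensed employees
hours_saved_per_week = 0.5           # assumed average time saved per user
loaded_hourly_cost = 70.0            # assumed fully loaded labor cost (USD/hr)
license_cost_per_user_month = 60.0   # assumed enterprise seat price (USD)

# Annualized benefit: time saved, valued at the loaded labor rate.
annual_benefit = users * hours_saved_per_week * 52 * loaded_hourly_cost
# Annualized cost: seat licenses only (ignores training and integration).
annual_cost = users * license_cost_per_user_month * 12

roi_multiple = annual_benefit / annual_cost
print(f"Annual benefit: ${annual_benefit:,.0f}")
print(f"Annual cost:    ${annual_cost:,.0f}")
print(f"ROI multiple:   {roi_multiple:.1f}x")
```

Even toy numbers like these show why per-user time savings dominate the calculation: small changes in the hours-saved assumption swing the ROI multiple far more than license pricing does, which is why usage logs and employee surveys are central to Amgen’s measurement approach.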
Conclusion
Amgen’s journey from a small ChatGPT/AI pilot to a 20,000-user deployment exemplifies a blueprint for responsible enterprise AI adoption. By phasing the rollout, embedding strict security and data policies, and tracking meaningful usage metrics, Amgen has navigated the delicate balance between innovation and risk. Leadership endorsements and employee engagement have driven a positive reception, turning initial pilot experiments into everyday tools that serve the company’s mission.
Key outcomes of this strategy include accelerated workflows (scientists saving time on analysis and writing), improved collaboration (teams across the globe leveraging AI for common tasks), and strengthened capabilities (where even the latest GPT-5 model is tested and utilized in biotechnological contexts). At the same time, no major data breaches or compliance incidents have been reported, indicating that Amgen’s guardrails and governance framework have been effective.
Looking forward, Amgen’s approach is likely to remain adaptive. The enterprise will continue advancing its AI infrastructure (perhaps building in generative models specific to protein design or patient matching), broadening professional development around AI, and measuring impacts on its core metric: serving patients faster and better. As Amgen CTO David Reese stated, merging “the best of biology and technology” can lead to new drugs and therapies ([16]). The company’s phased AI rollout is an encouraging step in that direction, with learnings that will inform both Amgen’s own future innovations and the pharmaceutical industry’s broader embrace of generative AI.
References
All statements above are supported by company reports, industry news, and expert analyses. For example, Amgen’s official communications describe its “phased rollout” of ChatGPT Enterprise and leadership’s perspectives ([1]) ([6]). Tech media have noted Amgen’s expansion from Copilot to ChatGPT, citing employee preference ([3]). Security and governance measures reflect Amgen’s published AI policy ([22]) ([21]). Peer company case studies (Moderna, Pfizer, J&J) are documented in industry analyses ([11]) ([37]) ([32]). All citations are provided inline and listed above.
External Sources (40)

About the author: Adrien Laurent, Founder & CEO of IntuitionLabs, has 25+ years of experience in enterprise software development and specializes in custom AI solutions for the pharmaceutical and life science industries.