ChatGPT Adoption in the Life Sciences Industry

[Revised January 18, 2026]
Introduction
The advent of OpenAI’s ChatGPT has sparked significant interest in the pharmaceutical and biotech sectors. Within just two years of ChatGPT’s launch, generative AI has evolved from a novelty to a boardroom priority in healthcare and life sciences ([1]). Companies see enormous potential to accelerate research, streamline operations, and improve decision-making with these tools. At the same time, many firms are proceeding with caution due to data security and compliance concerns ([2]). In fact, a recent survey of 200+ life sciences professionals found that over half of their companies had banned employees from using ChatGPT, including 65% of the top 20 pharma companies, chiefly to prevent leakage of sensitive data ([2]). Yet despite official restrictions, individual scientists and staff often still experiment with ChatGPT – more than half of respondents use it at least a few times per month, and over a quarter use it weekly or daily ([3]). This gap between policy and practice underscores the careful balance pharma organizations must strike between leveraging AI’s benefits and managing its risks.
This report provides an in-depth look at U.S.-focused life sciences companies (pharmaceutical, biotech, and diagnostics) that have publicly acknowledged using ChatGPT in their operations. We identify these companies and cite sources confirming their use. We also examine how they are applying ChatGPT or similar generative AI – from drug R&D and clinical trials to internal knowledge management, marketing content, customer engagement, and regulatory affairs. Furthermore, we present key statistics on AI adoption in the industry (e.g. uptake rates, common use cases, value potential) to contextualize these case studies. A summary table is included for quick reference, and detailed sections follow with a professional analysis of each example. The goal is to inform IT and innovation leaders in pharma about real-world generative AI implementations among their peers, focusing on companies active in the U.S. market.
AI Adoption in Pharma: Trends and Statistics
Life sciences companies are unmistakably investing in AI, and the industry is now approaching a "tipping point" as rising costs, regulatory changes, and growing competitive urgency converge ([4]). Interestingly, 83% of surveyed life sciences professionals called AI "overrated" – yet only 8% said their company hadn't begun adopting AI in some form ([5]). By Q4 2025, KPMG's poll of more than 100 life sciences CEOs found that 76% felt their organizations were moving at the right pace to handle the speed of AI developments ([6]). This shows that behind cautious rhetoric, many organizations are actively implementing AI solutions.
Generative AI (like ChatGPT) is a major part of this trend. Industry analysts estimate generative AI could contribute $60–110 billion in annual value for pharma and medical product companies by improving productivity ([7]). The global market for AI in pharmaceuticals is estimated at $1.94 billion in 2025 and is forecast to reach around $16.49 billion by 2034, a CAGR of roughly 27% ([8]). According to Menlo Ventures' research, 22% of healthcare organizations have implemented domain-specific AI tools in 2025, a 7x increase over 2024 and 10x over 2023 ([9]). The primary motivations remain efficiency and cost savings – 64% of pharma professionals said they look to AI for cost reduction, versus only 17% who view it as a driver of revenue ([10]).
Early adoption data highlights which applications are gaining traction. According to ZoomRx survey results, the most common use cases of AI in biopharma so far are in drug discovery, followed by personalized medicine, copywriting/content generation, and clinical trial optimization ([10]). This aligns with the generative AI examples we see publicly: companies are using ChatGPT to sift scientific literature for new targets, draft or summarize documents, and assist in trial design and patient recruitment. Notably, generative AI is also viewed as a powerful tool for supporting regulatory compliance (e.g. drafting reports) and marketing. For instance, McKinsey research suggests AI could halve content creation costs in pharma marketing and substantially speed up review cycles ([7]).
At the same time, data privacy and security remain paramount concerns. The prevalence of ChatGPT bans at big pharma underscores fear of unintentionally exposing confidential data ([2]). However, companies are addressing these concerns through governance structures: about 80% of pharma leaders reported that their companies have already created a dedicated AI governance structure, and 20% are "in the process" of setting one up, with ethics and safety being the main focus for 80% of those structures ([4]). Some organizations have chosen a middle path – enabling use of ChatGPT but within controlled, internal platforms to safeguard information. The case studies below illustrate this approach. For example, one pharma developed a proprietary interface to allow 50,000+ employees to access ChatGPT and other models securely, ensuring prompts and outputs don't leak externally ([11]).
In summary, the life sciences industry recognizes generative AI as a potential game-changer to accelerate R&D and operations, evidenced by broad experimentation and several high-profile implementations. Still, adoption is uneven – while leaders like Moderna and Pfizer are openly embracing ChatGPT, others are holding back or restricting use. The next sections detail specific companies that have publicly confirmed using ChatGPT or GPT-based solutions, what they use it for, and how it fits into their digital strategy.
Notable Life Sciences Companies Using ChatGPT
Multiple pharmaceutical and biotech companies with U.S. operations have announced or acknowledged using ChatGPT (or custom versions of it) in their business. Table 1 provides a summary of key examples, including the use cases, departments involved, when it was first reported, and sources. These range from global pharma giants to innovative biotechs. Each of these cases is explored in more detail in the subsequent sections.
Table 1. Examples of Life Sciences Companies (U.S.-Focused) Using ChatGPT or Generative AI
| Company | ChatGPT Use Case(s) | Department / Function | First Reported | Source |
|---|---|---|---|---|
| Moderna | Internal ChatGPT Enterprise deployment (“mChat”) with 80% employee adoption; 750+ custom GPT assistants (e.g. DoseID for trial dose selection) ([12]) ([13]). | Company-wide (R&D, clinical, manufacturing, legal, commercial) | Apr 2024 (Press Release) ([14]) ([12]) | Moderna/OpenAI announcement ([12]) ([13]) |
| Pfizer | “Charlie” – a generative AI content platform powered by a custom ChatGPT version for marketing content creation, editing, fact-checking, and workflow integration ([15]) ([16]). Also exploring GPT for internal research queries and analytics ([15]). | Marketing and Sales (content supply chain); Internal Knowledge Queries | Feb 2024 (Media Report) ([17]) ([15]) | Digiday interview (Pfizer) ([17]) ([15]) |
| Johnson & Johnson | Generative AI for document summarization and productivity; launched mandatory training (56k+ employees trained) and a governance program to safely enable tools like ChatGPT ([18]) ([19]). Use cases span R&D, supply chain, finance, etc. after certification. | Enterprise-wide (with emphasis on employee upskilling for AI) | Mar 2025 (Business press) ([18]) | Business Insider (J&J CIO) ([18]) ([20]) |
| Merck & Co. (MSD) | Proprietary “GPTeal” platform gives ~50k employees secure access to ChatGPT, Meta Llama, Anthropic Claude, etc., to assist in writing emails, memos, and reports ([19]). Used to automate drafting of regulatory documents and other text, freeing scientists from copyediting chores ([21]). | Company-wide (Internal Communications, Regulatory, R&D support) | Mar 2025 (Business press) ([19]) | Business Insider / NAM (Merck CTO) ([19]) ([21]) |
| Eli Lilly | Broad encouragement of ChatGPT use – leadership told employees “you need to start bringing ChatGPT into your work” ([19]). Applied in research (small & large molecule design), generating clinical trial documentation and regulatory submission drafts ([22]). Also used for internal tasks like writing year-end performance reviews ([23]). | R&D (Drug Discovery), Regulatory Affairs, Internal Operations | Mar 2025 (Business press) ([22]) ([19]) | Business Insider (Lilly CIDO) ([22]) ([19]) |
| Novartis | Deployed an internal ChatGPT-powered assistant (branded “NovaGPT” per reports) for HR and communications. Used to draft HR policies, job descriptions and internal memos, saving considerable time ([24]) ([25]). Employees in HR use it to generate first drafts and improve the wording of communications. | Human Resources (People & Organization), Internal Comms | May 2024 (Interview) ([24]) ([25]) | Canadian HR Reporter (Novartis HR) ([24]) ([25]) |
| AstraZeneca | Developed “AZ ChatGPT”, an AI research assistant leveraging GPT models on AstraZeneca’s internal data. It answers scientists’ complex questions using proprietary biology/chemistry knowledge bases ([26]). Also piloting LLMs for executive reporting and competitive intelligence (drafting insights for leadership) ([26]) ([27]). | R&D (Research Knowledge Management); Executive/Analytics Reporting | Jul 2024 (Industry article) ([26]) ([28]) | Analytics India Magazine (AstraZeneca) ([26]) ([28]) |
| Sanofi | Ongoing collaboration with OpenAI to integrate generative AI in drug development. Launched an AI tool called “Muse” with OpenAI/Formation to speed up clinical trial patient recruitment ([29]). Sanofi’s CEO said LLMs offer “insane opportunity to…summarize and create” in R&D, e.g. designing candidate molecules or drafting FDA submission documents (first AI-drafted filings expected by end of 2024) ([30]) ([31]). | R&D (Drug Discovery), Clinical Development (Trials), Regulatory (Docs) | May 2024 (Press & CEO) ([30]) ([31]) | Fortune (CEO interview) ([30]) ([31]); Reuters ([32]) |
Sources: Company press releases and media reports as cited above.
Company Case Studies and Use Cases
Below, we delve deeper into how these organizations are leveraging ChatGPT or similar large language models (LLMs), and the business functions impacted. Each example illustrates distinct use cases – from research labs to corporate offices – highlighting the versatility of generative AI in life sciences.
Moderna: Enterprise-Wide AI Integration
Moderna, a Boston-based biotech, has emerged as a pioneer in ChatGPT adoption and continues to lead the industry in generative AI deployment. In early 2023, Moderna deployed its own internal instance of ChatGPT called "mChat", built on OpenAI's API ([33]). This tool quickly gained traction – over 80% of Moderna's employees began using mChat for day-to-day tasks, an adoption level that spurred a broader AI-first culture ([34]). By late 2023, Moderna upgraded to ChatGPT Enterprise and embedded 750+ AI assistants ("GPTs") across its business – remarkably, these 750 GPTs took only about two months to create ([35]). These assistants are custom-configured bots that function as virtual coworkers in different departments. Each user now averages 120 ChatGPT Enterprise conversations per week, demonstrating deep integration into daily workflows ([36]). For example, in R&D and clinical development, Moderna created a "Dose ID GPT" to help scientists analyze clinical trial data and choose optimal vaccine dosages. This GPT uses ChatGPT's Advanced Data Analysis capabilities to evaluate dose selection, provide rationale with citations, and even generate charts for the team. Such assistance accelerates trial decision-making while preserving human oversight for safety. Moderna's legal team notably boasts 100% adoption of ChatGPT Enterprise ([34]).
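The announcement does not describe how Dose ID is built, but the general pattern – handing an LLM summary-level trial data together with an analysis instruction and a requirement to show its rationale – is straightforward to sketch. The snippet below is a minimal, hypothetical illustration using the OpenAI Python SDK; the model name, prompt wording, and CSV schema are assumptions, not Moderna's actual configuration.

```python
# Hypothetical sketch of a Dose ID-style assistant: summary-level trial data plus
# an instruction to compare doses and justify a recommendation. Requires the
# OpenAI Python SDK and an OPENAI_API_KEY; model, prompt, and schema are illustrative.
from openai import OpenAI

client = OpenAI()

trial_summary = """dose_mg,participants,seroconversion_rate,grade3_adverse_event_rate
25,120,0.62,0.01
50,118,0.81,0.02
100,121,0.86,0.06"""

system_prompt = (
    "You are a clinical development assistant. Compare the candidate doses on "
    "efficacy and tolerability, recommend a dose with an explicit rationale, and "
    "state that a human reviewer must confirm the recommendation before it is used."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; the deployed configuration is not public
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Summary-level trial data (CSV):\n{trial_summary}"},
    ],
)
print(response.choices[0].message.content)
```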
Crucially, Moderna didn't limit ChatGPT to the labs – they rolled it out company-wide, including in legal, manufacturing, and commercial functions. The corporate brand team has GPTs that help prepare slides for quarterly earnings calls and another GPT that converts biotech terminology into approachable language for investor communications ([34]). For instance, lawyers use GPT assistants to summarize contracts or regulations, while the manufacturing team uses them to troubleshoot process documents. Moderna's CEO Stéphane Bancel described these AI tools as "extensions of our team" that augment employees' roles through personalized support, noting: "With a few thousand employees, we are scaling like a company of one hundred thousand, thanks to AI" ([34]). By empowering staff with generative AI, Moderna aims to bring up to 15 new products to market in the next 5 years – from a vaccine against RSV to individualized cancer treatments ([37]). This bold integration, done in partnership with OpenAI, was publicly announced in April 2024. Notably, other biopharma companies including Amgen, Genmab, and e-therapeutics have since partnered with OpenAI in a similar fashion, with Amgen named as an early ChatGPT Enterprise customer ([38]). Moderna's case exemplifies how a life sciences company can safely scale ChatGPT across the organization to boost productivity in research and beyond.
Pfizer: “Charlie” – GPT-Powered Marketing Engine
Global pharma leader Pfizer (headquartered in New York) has taken a slightly different route by focusing ChatGPT on a specific domain: marketing and content. Pfizer’s internal generative AI platform, nicknamed “Charlie,” is designed to revolutionize the company’s content creation and review processes in its marketing and sales operations ([17]) ([15]). Rolling out since late 2023, Charlie was built with the help of an agency (Publicis Groupe) and is powered on the backend by a customized version of ChatGPT ([15]). It acts as a copywriting and proofreading assistant that can generate draft promotional content, suggest edits, and even flag compliance issues.
Bill Worple, Pfizer’s VP of customer engagement technology, explained that one goal is to “5x our content creation” for both healthcare provider (HCP) and patient communications ([39]). Charlie helps marketing teams quickly produce materials like digital ads, emails, webpages, and sales brochures. More importantly, it has in-built fact-checking and legal/compliance guardrails. For example, the system labels AI-generated content with a risk rating (red/yellow/green) to indicate how much regulatory review it may need ([40]). Reused claims or previously approved language are marked green for faster approval, whereas any novel claims get a red flag for thorough human scrutiny ([40]). By triaging content in this way, Pfizer hopes to speed up the traditionally slow review cycles in pharma marketing, without compromising on accuracy.
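Pfizer has not published how Charlie computes its risk rating. One much-simplified way to picture the idea is a rule that compares each generated claim against a library of previously approved claims and routes anything unfamiliar to full medical/legal/regulatory review. The claim library, similarity measure, and thresholds below are invented for illustration.

```python
# Simplified, hypothetical red/yellow/green triage of AI-drafted promotional claims.
# Real systems would use approved-claim databases and far more robust matching.
from difflib import SequenceMatcher

APPROVED_CLAIMS = [
    "Product X reduced symptom scores versus placebo in a 12-week study.",
    "The most common adverse reactions were headache and nausea.",
]

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity between a draft claim and an approved claim."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def triage(draft_claim: str) -> str:
    """Return 'green', 'yellow', or 'red' based on closeness to approved language."""
    best = max(similarity(draft_claim, c) for c in APPROVED_CLAIMS)
    if best >= 0.9:   # near-verbatim reuse of approved language
        return "green"
    if best >= 0.6:   # close paraphrase -> lighter-touch review
        return "yellow"
    return "red"      # novel claim -> full human review

print(triage("Product X reduced symptom scores versus placebo in a 12-week study."))  # green
print(triage("Product X eliminates all symptoms overnight."))  # novel claim, routes to red
```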
Notably, Charlie's content generation is backed by Pfizer's own data and knowledge. It can answer internal questions by drawing on Pfizer's repository of research reports, case studies, and performance data, thanks to natural language query features. To prevent the AI from "hallucinating" (making inaccurate statements), Pfizer configured Charlie such that all answers must be grounded in Pfizer's validated content sources. Employees can use Charlie through integrations with everyday tools – it's built into Pfizer's Adobe content management system (including Adobe Workfront and Experience Manager) and Slack for easy access ([41]). By 2025, hundreds of Pfizer marketers and external agency partners including Publicis Groupe and IPG were actively using Charlie, with deployment expanding to thousands of staff across various brands.
Charlie's ability to rapidly generate compliant marketing materials is enabled by drawing upon structured data and automated document generation capabilities from AI tools used in clinical and regulatory phases, creating a connected content pipeline across Pfizer's operations ([42]). Pfizer has also established robust AI governance under Lucy Muzzy, VP of compliance for AI, digital health, medicines, and M&A business, who personally drafted and implemented Pfizer's first policy governing the use and creation of AI. Pfizer is now described as the only company with clear, evidence-backed, and deeply integrated AI initiatives across all major functions: R&D, clinical development, manufacturing, and commercialization through the Charlie platform ([42]). This initiative illustrates ChatGPT's role in customer-facing applications: while not directly chatting with customers, it supercharges the teams that create Pfizer's messaging to doctors and patients. Pfizer's careful approach – a custom GPT instance with internal data governance – highlights how pharma companies can harness ChatGPT for creative and analytical work while managing regulatory risk.
Johnson & Johnson: Upskilling Employees and Delivering Measurable AI Value
Healthcare giant Johnson & Johnson took a holistic, workforce-driven approach to ChatGPT adoption that has yielded impressive measurable results. J&J focused on building AI proficiency among its 138,000 employees so they could safely leverage tools like ChatGPT in various contexts ([43]). By 2025, J&J's Chief Information Officer, Jim Swanson, reported "there are so many ways we've been using AI" across the company – from R&D to supply chain – but doing so required a concerted upskilling program ([44]). J&J created a generative AI training course that was made mandatory for any employee to be authorized to use the technology ([43]). As of 2025, 47,000 employees have taken the Generative AI course required for tool access, and more than 30,000 completions of courses in their "digital boot camp" training have been recorded ([45]). A 2024 training program also equipped 10,000 employees to handle sensitive data when using AI tools.
This rigorous enablement effort reflects J&J’s stance that AI literacy is now as important as traditional skills for its workforce ([46]) ([47]). After training, employees can access approved generative AI tools for tasks like summarizing documents, analyzing data, and drafting content within their roles ([48]). For example, a scientist might use ChatGPT (in a controlled environment) to summarize recent journal articles, or a supply chain analyst might use it to outline a report. By educating staff about data privacy and proper use, J&J mitigates the risks of ChatGPT (e.g. they emphasize not to input any sensitive information) while unlocking productivity gains. Essentially, J&J did the groundwork so that generative AI can be tapped safely at scale – an approach validated by the survey that showed training is often lacking elsewhere ([3]).
This doesn't mean J&J lacks specific use cases – in fact, they've achieved remarkable measurable results. J&J has narrowed its AI strategy after discovering that just 10% to 15% of its use cases account for 80% of the value delivered – a key insight from a three-year internal push that resulted in nearly 900 AI use cases ([49]). Impressively, J&J generated nearly $500 million in measurable business value from AI implementation while launching 900 generative AI projects across every major business function ([50]). J&J has also developed internal tools like AskJIA, which draws on the company's curated, medically validated, and legally reviewed product content to help train sales reps and deepen their product knowledge ([45]).
In October 2025, Johnson & Johnson MedTech announced advancements in developing robotics systems with physical AI technologies that create simulated environments to accelerate future product innovation, optimize clinical workflows, and improve training for clinical teams ([51]). The MONARCH Platform for Urology will be commercially available in the U.S. in 2026 and uses AI-driven simulation where virtual operating room environments can be created to assist clinical teams in setting up robotic systems before starting procedures. J&J's experience underscores that cultural readiness and governance are key – they built a foundation so that tools like ChatGPT can be broadly used (for internal knowledge management, content generation, etc.) without compromising compliance. For IT professionals, this case highlights the importance of AI training programs alongside technology deployment.
Merck & Co.: GPTeal – Securing ChatGPT for Internal Use
Merck & Co. (known as MSD outside the U.S.) is a top-ten pharma based in New Jersey, and it adopted a strategy to enable ChatGPT within a gated, proprietary platform. Merck developed an internal tool called “GPTeal” – a playful nod to Merck’s signature teal brand color – which serves as a company-sanctioned ChatGPT interface ([19]). According to Merck’s CTO Ron Kim, GPTeal provides employees a safe way to use large language models like OpenAI’s ChatGPT, Meta’s LLaMA, and Anthropic’s Claude, “while keeping company data secure from external exposures.” ([19]) This means when a Merck staff member wants to ask a question or draft something with an AI assistant, they use GPTeal rather than the public ChatGPT website. GPTeal acts as a shield – it likely runs on a private cloud or uses OpenAI’s enterprise API, ensuring that no prompts or outputs are leaked to train the public model.
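Merck has not disclosed GPTeal's architecture. Conceptually, though, this class of tool is an internal gateway that sits between employees and several model providers, applying policy (redaction, logging, routing) before a prompt ever leaves the corporate boundary. The sketch below shows only that routing-plus-redaction idea; the provider stubs, redaction rules, and function names are invented, and a real deployment would call enterprise APIs with contractual no-training and data-retention guarantees.

```python
# Conceptual sketch of an internal LLM gateway: one entry point, basic redaction,
# and routing to a chosen provider. Provider calls are stubbed for illustration.
import re
from typing import Callable, Dict

def redact(text: str) -> str:
    """Mask obvious identifiers (emails, long numeric IDs) before the prompt leaves the gateway."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    return re.sub(r"\b\d{6,}\b", "[ID]", text)

# Stand-ins for enterprise OpenAI / Anthropic / Llama endpoints.
def call_openai(prompt: str) -> str:
    return f"(openai stub) received: {prompt}"

def call_claude(prompt: str) -> str:
    return f"(claude stub) received: {prompt}"

PROVIDERS: Dict[str, Callable[[str], str]] = {"openai": call_openai, "claude": call_claude}

def gateway(prompt: str, provider: str = "openai") -> str:
    """Redact the prompt, route it to the selected provider, and return the completion."""
    return PROVIDERS[provider](redact(prompt))

print(gateway("Draft a memo to jane.doe@example.com about batch 12345678", provider="claude"))
```

The design point is that employees only ever see the gateway, so IT can add or swap providers, log usage, and enforce data policy in one place.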
With GPTeal in place, Merck’s workforce has enthusiastically begun using generative AI for day-to-day productivity. Kim noted that employees regularly rely on it to draft emails, memos, and documentation ([21]). One impactful use case is in regulatory affairs and medical writing: Merck is using GPT assistance to generate first drafts of documents that must be submitted to health authorities (which are then reviewed and edited by humans) ([21]). By doing so, highly trained scientists and physicians at Merck can avoid spending time on rote copyediting tasks. “We felt like some of our scientists were taking time being copy editors,” Kim said – work that generative AI can handle, freeing researchers to focus on science ([52]). Importantly, Merck’s approach still enforces the rule that any AI-written content (like a draft clinical study report) is checked by Merck experts before use.
Merck's public statements in March 2025 confirm the success of this initiative ([43]). By then, GPTeal was accessible to over 50,000 Merck employees globally (roughly two-thirds of Merck's ~75,000 total staff) and had become a cornerstone of the company's digital workflow ([53]). The company supported upskilling through a mix of self-serve digital training courses, monthly webcasts focused on generative AI, and boot camps for software developers that could last anywhere from half a day to 10 days.
In June 2025, Merck announced a significant breakthrough with a new generative-AI-powered platform for clinical study reports (CSRs), developed in collaboration with McKinsey's QuantumBlack. This system reduced the time needed to produce first drafts of CSRs from two to three weeks to just three to four days ([54]). Early pilots showed the approach reduced CSR drafting hours from 180 to 80 while halving error rates in areas like data accuracy, citations, and terminology. Merck has also partnered with Amazon Web Services (AWS) to tackle the costly problem of false rejects in manufacturing, using AI and machine learning with Generative Adversarial Networks (GANs) to create synthetic images of product defects – successfully reducing Merck's false reject rate by 50% ([55]). This case illustrates how a pharma company can embrace ChatGPT by building a custom wrapper that addresses intellectual property and privacy concerns. For IT professionals, Merck's GPTeal is a model of deploying generative AI at scale: integrate multiple LLM models, control data flow, and then open it up for broad internal use to boost productivity in everything from R&D knowledge searches to preparing slide decks. Merck effectively turned a potential threat (unsanctioned ChatGPT use) into an asset by bringing the technology in-house under IT's oversight.
Eli Lilly: Embracing ChatGPT to Transform Workflows
Indianapolis-based Eli Lilly took one of the most open stances on ChatGPT among big pharma. While many peers banned or restricted generative AI, Lilly’s leadership “went in the exact opposite direction” after ChatGPT’s debut ([56]). Diogo Rau, Lilly’s Chief Information and Digital Officer, publicly encouraged all employees to experiment with ChatGPT, as long as they did so carefully (no sensitive data input) ([57]). “We told everybody you need to use it, you need to start bringing ChatGPT into your work,” Rau said, while also cautioning, “Don’t put anything in there that you don’t want to get out.” ([58]). This balanced message both promotes innovation and reinforces data security.
In practice, Lilly employees across divisions found creative ways to leverage generative AI. In drug discovery research, teams used AI to support the design of both small-molecule and large-molecule (biologic) drugs ([22]). Although details are scant, this likely means using GPT-based tools to digest scientific literature, propose molecule structures or targets, and hypothesize mechanisms – essentially acting as a brainstorming assistant in early R&D. Lilly also applied AI to automate documentation in clinical trials and regulatory submissions ([22]). For example, generating drafts of study protocols, informed consent forms, or sections of FDA submission packages can be done by ChatGPT in seconds, after which experts at Lilly refine them. This speeds up what are traditionally labor-intensive writing tasks in drug development.
On the internal operations side, Lilly tried some novel approaches to spark AI adoption. In summer 2024, they ran an “AI Games” competition, with challenges like using a chatbot to write a fun poem about the company or create a quiz on Lilly’s history ([59]). This gamified AI use and helped employees get comfortable with the technology. Later in 2024, Lilly even asked all employees and managers to use generative AI when writing their year-end performance reviews ([23]). The idea was that ChatGPT could help draft self-assessments or reviews, which managers could then personalize – turning a normally dreaded chore into a more efficient process. By 2025, Lilly planned to require all senior leaders to obtain an AI certification, ensuring top-down buy-in and knowledge ([23]).
Lilly's experience shows the cultural side of implementing ChatGPT: executives explicitly championed the tool, creating an environment where employees felt empowered to innovate with AI. As a result, usage spread organically in many directions – scientists using it for hypotheses, HR for drafting posts, etc. From an IT perspective, Lilly did still need guardrails (they presumably used a secure instance or at least monitored usage). But their key differentiator was treating ChatGPT as a skill to be learned and embraced company-wide, rather than a danger.
In September 2025, Lilly significantly expanded its AI ambitions by launching TuneLab, an AI/ML platform that provides biotech companies access to drug discovery models trained on years of Lilly's research data ([60]). Lilly estimates this first release of AI models includes proprietary data obtained at a cost of over $1 billion, representing one of the industry's most valuable datasets used to train an AI system available to biotechnology companies. TuneLab is powered by Lilly's full drug disposition, safety, and preclinical datasets representing experimental data from hundreds of thousands of unique molecules, and employs federated learning to enable biotechs to tap into Lilly's AI models without directly exposing their proprietary data. Partners like Circle Pharma and insitro have already joined the program.
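Lilly has not published TuneLab's training protocol, but federated learning in general lets partners improve a shared model by exchanging weight updates rather than raw data. The toy federated-averaging (FedAvg) round below illustrates only that averaging step, using made-up linear models and synthetic data.

```python
# Toy FedAvg rounds: each partner trains locally and only model weights are shared
# and averaged -- raw data never leaves a partner. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

def make_partner_data(n: int = 50):
    """Synthetic private dataset for one partner (never pooled centrally)."""
    X = rng.normal(size=(n, 3))
    return X, X @ true_w + rng.normal(scale=0.1, size=n)

def local_update(w: np.ndarray, X: np.ndarray, y: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One local gradient-descent step on a linear model (y ~ X @ w)."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

partners = [make_partner_data(), make_partner_data()]
global_w = np.zeros(3)

for _ in range(20):
    local_weights = [local_update(global_w.copy(), X, y) for X, y in partners]  # local training
    global_w = np.mean(local_weights, axis=0)                                   # coordinator averages weights only

print("learned weights:", np.round(global_w, 2))  # approaches the true [1.0, -2.0, 0.5]
```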
In January 2026, NVIDIA and Eli Lilly announced a first-of-its-kind AI co-innovation lab focused on applying AI to tackle some of the most enduring challenges in the pharmaceutical industry ([61]). The two companies will invest up to $1 billion in talent, infrastructure and compute over five years to support the new lab. The collaboration will initially focus on creating a continuous learning system that tightly connects Lilly's agentic wet labs with computational dry labs, enabling 24/7 AI-assisted experimentation to support biologists and chemists. TuneLab will include NVIDIA Clara open foundation models for life sciences as part of a future workflow offering. Lilly's bold approach suggests that with proper guidance, allowing open use of generative AI can rapidly uncover value across diverse pharmaceutical functions.
Novartis: “Branded ChatGPT” for HR and Communications
Novartis, a Swiss-headquartered pharma with large U.S. operations, publicly shared an interesting niche use of ChatGPT: in its Human Resources and internal communications department. In an interview, Novartis’ Global People & Organization Leader for Canada described how the company deployed its own branded version of ChatGPT to assist with everyday HR writing tasks ([25]). This internal tool (sometimes referred to informally as “NovaGPT”) is used for “the most mundane and basic topics” in HR, allowing the team to offload those and save time ([25]). For example, when HR needed to draft a new policy document or a company-wide announcement, they would start by prompting the internal ChatGPT for a first draft ([24]). Similarly, writing job descriptions for new roles – a repetitive but important task – has been eased by using ChatGPT to generate an initial version that HR personnel can then tweak ([24]).
According to Novartis, this experiment paid immediate dividends in efficiency. Even if the chatbot’s draft wasn’t perfect, it provided a solid starting point. The HR leader noted that sometimes he might only take “two or three sentences” from the ChatGPT output that are particularly well-phrased, but those saved him significant effort and improved the overall quality of the communication ([62]). Over time, as the HR team learned to engineer better prompts, the outputs got more useful. Novartis indicated that the tool “elevates the quality of the message” by providing options and inspiration ([62]).
Importantly, Novartis integrated this ChatGPT capability internally, likely connecting it with their own data and templates. They mentioned it is "branded" for Novartis, which implies a custom interface or at least an approved internal version of the model ([63]). This would alleviate concerns about feeding confidential HR data into a public system.
Beyond HR, Novartis has significantly expanded its AI applications. The company is at the forefront of modernizing clinical trial processes by integrating AI to enhance trial feasibility and site selection ([64]). Novartis has developed AI algorithms capable of analyzing large datasets in minutes, identifying the most suitable sites for a trial based on a range of criteria – this accelerates the site selection process and enhances the quality of decisions. The company is also committed to deploying AI systems in a transparent and responsible way, as outlined in their responsible AI principles, ensuring that AI use has a clear purpose, is respectful of human rights, and is accurate, truthful, and not misleading. In the broader context, Novartis has been very active in AI for R&D, and this combination of HR and clinical trial use cases shows the breadth of ChatGPT's applicability. Even in a highly regulated industry, departments like HR, communications, or training can safely use generative AI on non-sensitive tasks (policy drafts, newsletters, FAQs, etc.) to improve speed and consistency. For IT teams, Novartis' approach could serve as a template: start with internal-facing functions and a limited, secure ChatGPT deployment to demonstrate value, which can build confidence before expanding AI to core scientific areas.
AstraZeneca: AI Research Assistant “AZ ChatGPT”
AstraZeneca (a UK/Swedish pharma with significant U.S. presence) has showcased its use of generative AI as a research and data analysis assistant. An article from mid-2024 detailed AstraZeneca’s in-house development of “AZ ChatGPT”, described as “an AI-powered research assistant.” ([26]) This tool interfaces with AstraZeneca’s vast internal knowledge repositories – containing decades of proprietary biology and chemistry data – to help scientists answer complex research questions ([26]). In essence, AZ ChatGPT is like an internal chatbot trained on all of AstraZeneca’s experimental results, scientific publications, and databases. A scientist can query it in natural language (for example, “What do we know about Target X in oncology?”), and it will retrieve and summarize relevant information from the company’s troves of data, far faster than a manual search. It can even provide prompts or suggestions for experimental approaches related to discovery and clinical inquiries ([26]).
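AstraZeneca has not detailed AZ ChatGPT's internals, but the behavior described – answering questions from proprietary repositories rather than general web knowledge – matches the standard retrieval-augmented generation (RAG) pattern: embed the internal documents, retrieve the passages closest to a question, and instruct the model to answer only from them. The sketch below assumes the OpenAI Python SDK; the documents, model names, and prompt are illustrative.

```python
# Minimal RAG sketch over a tiny, fictional internal corpus.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY; models and data are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

DOCUMENTS = [
    "Internal assay 2021-17: Target X inhibition reduced tumor growth in model A.",
    "Program review 2023: Target X program paused pending toxicology follow-up.",
    "Chemistry note: series 4 compounds show improved selectivity for Target X.",
]

def embed(texts):
    """Return an array of embedding vectors for the given texts."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vectors = embed(DOCUMENTS)

def answer(question: str, k: int = 2) -> str:
    """Retrieve the k most similar passages and answer strictly from that context."""
    q = embed([question])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(DOCUMENTS[i] for i in np.argsort(sims)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[
            {"role": "system", "content": "Answer only from the provided context. If it is insufficient, say so."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("What do we know about Target X?"))
```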
One AstraZeneca director noted they are evaluating how such LLM capabilities can improve insight generation in executive reports for decision-makers ([65]). For instance, summarizing portfolio progress or competitive intelligence updates for senior management is being tested with AZ ChatGPT ([27]). This suggests the tool isn’t just for bench scientists but also for strategy and commercial teams who need distilled insights (e.g. a summary of competitor trial results or a high-level report on a therapeutic area). Additionally, AstraZeneca built a system called the Biological Insight Knowledge Graph (BIKG), which works in tandem with their AI efforts ([66]). BIKG uses machine learning to link biological data and help recommend research directions – for example, highlighting a promising drug-target relationship. When combined with AZ ChatGPT, these tools enable a powerful discovery engine: the knowledge graph finds patterns, and the chatbot interface allows scientists to query those patterns in plain language and get explanations.
Notably, AstraZeneca has been using Microsoft's Azure OpenAI Service to test the latest GPT-4 models within a secure environment. They report using a mix of in-house models and external ones, showing a robust, hybrid AI infrastructure. By 2025, AstraZeneca's leadership was confident enough in generative AI that they integrated it across the drug development pipeline – from target identification to clinical trial design and even in commercial analytics. The company's bold ambition is to become a fully "data-led enterprise," and generative AI like AZ ChatGPT is a cornerstone in that strategy.
In March 2025, AstraZeneca announced it was scaling up use of generative AI to help reach its 2030 ambitions of becoming an $80 billion company, delivering 20 new medicines, and being carbon negative ([67]). By April 2025, approximately 12,000 employees (from R&D scientists to marketing staff) had completed various genAI certifications. Learning modules and accreditation are offered at gold, silver, and bronze levels to encourage employees to become more aware of generative AI models like ChatGPT and how to use them ethically. Internal surveys found that 85% of stakeholders expect generative AI to increase their productivity at work, 93% say it is having a positive impact on their work, and 86% believe AI tools will help them achieve their goals.
AstraZeneca has also developed a "Development Assistant" – an interactive multi-agent AI system (using Amazon Bedrock) that enables researchers to query clinical trial data using natural language ([68]). The project went from concept to a production MVP in approximately six months and is now being expanded to additional domains. In one reported example, an AI radiomics platform reduced the manual image-processing time required for CT scans, and early clinician feedback on the protocol-writing assistant showed that 4 of 5 medical writers rated the AI tool helpful for drafting the summary section of a protocol. For pharma IT professionals, AstraZeneca's case underlines the value of connecting LLMs to internal proprietary data. The real power emerges when ChatGPT is not just drawing from public knowledge but from the hidden insights in a company's own research – making it a cutting-edge digital assistant for scientists and executives alike.
Sanofi: Partnering with OpenAI for Drug Development
Sanofi, a French pharma active in the U.S. market, made headlines in 2024 by partnering directly with OpenAI to embed generative AI into its drug development processes. CEO Paul Hudson has been an outspoken champion of this approach, stating that “Large-language models give us this insane opportunity to suppress, summarize, and create” in ways that could radically improve R&D productivity ([30]). In May 2024, Sanofi announced a first-of-its-kind collaboration with OpenAI and a biotech startup (Formation Bio) aimed at building AI tools to accelerate clinical trials and drug discovery ([29]) ([32]). One product of this collaboration is an AI software called “Muse,” which focuses on speeding up patient recruitment for clinical trials – a traditionally slow part of drug development ([69]). By analyzing protocol criteria and real-world data, Muse (powered by GPT models) can help identify suitable patients much faster and even suggest ways to broaden eligibility, potentially reducing trial delays.
On November 12, 2024, Formation Bio, together with OpenAI and Sanofi, formally introduced Muse – described as an advanced AI-powered tool developed to accelerate and improve drug development by optimizing patient recruitment for clinical trials ([70]). Muse is designed to accelerate clinical trial recruitment by analyzing disease, patient demographics, and the competitive landscape, then identifying optimal patient profiles and recruitment strategies. It generates recruitment materials and pre-screening questionnaires tailored to specific patient subgroups and adaptable for various channels, languages, and styles – cutting the time for recruitment strategy and content creation to just minutes.
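The partners have not published Muse's implementation; one small slice of the workflow described above – turning protocol inclusion/exclusion criteria into a plain-language pre-screening questionnaire – can be sketched with a single prompted call. The criteria, prompt, and model below are hypothetical.

```python
# Hypothetical sketch: draft a plain-language pre-screening questionnaire from
# trial eligibility criteria. Assumes the OpenAI Python SDK; all content is illustrative.
from openai import OpenAI

client = OpenAI()

criteria = """Inclusion: adults 18-55; relapsing multiple sclerosis diagnosis; EDSS score <= 5.5.
Exclusion: prior treatment with drug class Y; pregnancy; active infection."""

prompt = (
    "Rewrite these clinical trial eligibility criteria as a short pre-screening "
    "questionnaire of yes/no questions at an 8th-grade reading level. Do not give "
    "medical advice, and note that final eligibility is determined by the study team.\n\n"
    + criteria
)

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```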
Sanofi is leading the initial deployment of Muse in Phase 3 studies for multiple sclerosis (MS), leveraging its deep experience in delivering innovative treatments ([71]). When combined with human experts in the loop, Muse reduces the risk of regulatory setbacks, promoting smoother, more efficient trial progression. AI safety and data privacy are embedded throughout Muse's development and deployment – it relies on research publications and other public and proprietary data sources, none of which contain personally identifiable information (PII).
Sanofi is also leveraging OpenAI's technology to sift through its massive datasets for discovery insights. The company granted OpenAI secure access to some of its proprietary research data in order to train specialized models for Sanofi's needs. This is a bold move that many pharma companies have been hesitant to take. Hudson believes this deep cooperation will help "design a drug candidate's molecular structure" and find the right patients for it more efficiently. For example, generative AI might propose novel molecule designs or predict which patient subgroups will respond best to a drug, tasks that traditionally involve a lot of trial and error. Sanofi's decision in late 2023 to forgo its 2025 profit margin target in order to double down on AI-powered R&D was a high-stakes strategic bet, signaling immense confidence from leadership that the long-term value created by AI initiatives will far outweigh the short-term impact on profitability ([72]). Automating the writing of regulatory documentation (such as preclinical summaries or clinical study reports) remains a priority that can save scientists countless hours.
By integrating OpenAI’s models so deeply, Sanofi aims to shave years off the drug development timeline. As Hudson highlighted, the cost of developing a drug (>$2–4 billion) with a high failure rate is something AI could help mitigate by predicting failures earlier ([73]). Sanofi’s initiative is essentially treating ChatGPT-like AI as an R&D accelerator – one that can learn from past pipelines and guide future ones. This approach, however, comes with heavy responsibility. Sanofi has had to implement an AI risk assessment framework and ensure compliance (they mention “responsible AI” and data privacy commitments in their communications ([74])). For the IT and data teams, a lot of work goes into securely connecting internal data with external AI. The payoff, Sanofi hopes, will be a step-change in how quickly they can bring new therapies to market. This case exemplifies a top-down, strategic investment in ChatGPT technology, treating it not just as a tool for productivity, but as a core component of future pharmaceutical innovation.
Common Use Cases for ChatGPT in Life Sciences
The company cases above reveal a wide spectrum of use cases for ChatGPT in the life sciences. Below is a summary of the key application areas and how different organizations are tackling them:
- Drug Research & Discovery: Perhaps the most impactful area is using ChatGPT/LLMs to assist scientists in R&D. Companies like Pfizer and AstraZeneca employ generative AI as a research assistant – e.g. scanning vast libraries of publications and data to identify new drug targets or molecular designs ([75]) ([26]). In surveys, drug discovery was the #1 cited application of AI in pharma ([10]). Even smaller biotechs (e.g. Recursion Pharmaceuticals) integrate GPT-3/4 models into their discovery platforms ([76]). The benefit is faster hypothesis generation and knowledge synthesis; a chatbot can summarize the state of research on a protein in minutes, guiding scientists to promising avenues. Generative models are also beginning to propose chemical structures for drug candidates (part of AI-driven medicinal chemistry), essentially suggesting novel compounds to test (Merck, “AI at Merck: a 360-degree perspective” [PDF]). This use case remains in early stages, but public comments by companies like Sanofi indicate strong optimism that LLMs can design or select better drug candidates ([30]).
- Clinical Trial Optimization: Life sciences companies are leveraging ChatGPT to make clinical development more efficient. Patient recruitment is a prime example – Sanofi’s partnership developed an AI tool to find trial patients faster using generative AI ([29]). Trial design and protocol writing is another: AstraZeneca noted AI helps design smarter trials and inclusion criteria ([77]), and Moderna’s DoseID GPT analyzes trial dosing decisions ([13]). In a Nature report, scientists highlighted using AI to write first drafts of trial protocols and analyze data in real-time ([78]). By automating such tasks, companies can cut down the time it takes to start and run studies. The ZoomRx data also listed trial optimization among top use cases already in play ([10]). Additionally, generative AI can create “digital twins” or simulated patient data to model trial outcomes, which may reduce the number of real patients needed in control groups ([79]). Overall, ChatGPT is becoming a valuable co-pilot for clinical operations teams, helping with everything from drafting investigator brochures to interpreting complex results – always with human experts in the loop for final decisions.
- Regulatory Affairs & Documentation: Pharma and biotech companies face heavy documentation workloads – regulatory submissions, compliance reports, literature reviews, etc. Generative AI is proving extremely useful here. Eli Lilly and Merck have used ChatGPT to generate drafts of clinical trial reports and sections of regulatory submissions ([22]) ([21]), which are then polished by experts. Sanofi expects AI to write first drafts of FDA documents for upcoming filings ([31]). This cuts down the tedious writing effort and ensures consistency. Companies are also using ChatGPT for pharmacovigilance writing – for example, Indegene (a pharma services firm) noted that ChatGPT can intake adverse event narratives and draft pharmacovigilance case summaries more efficiently ([80]). Internally, any report that needs to be written in a structured format (annual summaries, manufacturing deviation reports, safety assessments) can be accelerated with LLMs. Of course, all output is reviewed by regulatory professionals, but it can significantly accelerate compliance workflows. Given the strict standards of regulatory docs, firms typically use enterprise-secure instances of ChatGPT (as in Merck’s GPTeal or Moderna’s mChat) to ensure confidentiality while benefiting from AI-generated content. A minimal sketch of this draft-then-review gate appears after this list.
- Knowledge Management & Internal Support: Large life sciences organizations generate and consume enormous amounts of information. ChatGPT is increasingly used to organize, search, and summarize internal knowledge. For example, AstraZeneca’s AZ ChatGPT indexes internal R&D data so scientists can query it in plain language ([26]). Novartis and J&J employ ChatGPT for internal communications and knowledge sharing – drafting policy updates, summarizing training materials, or answering common employee queries ([24]) ([18]). Some companies have built chatbot assistants that act as an internal helpdesk for employees, leveraging company manuals and wikis. This spans everything from IT helpdesks (answering how-to questions) to HR portals (answering policy questions). One report mentioned Johnson & Johnson’s vision of a “bilingual” employee fluent in both domain skills and AI tools ([46]) – in effect, encouraging staff to use ChatGPT as an on-demand mentor or tutor in their work. In Medical Affairs departments, generative AI can summarize medical literature or prepare educational slides for medical science liaisons, vastly speeding up preparation time. The key impact in this category is turning a company’s siloed data into a conversational knowledge base, increasing the agility of learning and decision-making.
- Marketing, Sales & Customer Engagement: Pharma marketing and sales teams are using ChatGPT to create and manage content more efficiently, as seen with Pfizer’s Charlie platform for content creation and review ([81]) ([15]). Generative AI can produce drafts of marketing copy, social media posts, product FAQs, and even medical conference booth scripts. These drafts save time for marketing writers and ensure messaging consistency. Copywriting was identified as a common early use of AI in pharma ([10]). Moreover, ChatGPT can personalize communications – for instance, sales reps could use it to tailor follow-up emails to doctors based on conversation notes (while staying within compliance-approved language). Some life science companies also explore using chatbots for customer service or patient support. Although we have not seen a major pharma publicly deploy a ChatGPT-based patient chatbot yet (likely due to regulatory caution), the idea is on the table. Generative AI could answer patient questions about a medication or assist with reimbursement and access queries, acting as first-line support (with proper disclaimers). Doximity, a physicians’ network, even integrated a ChatGPT assistant to help doctors draft patient correspondence ([82]) – hinting at future use in communications between pharma and healthcare providers. In summary, generative AI is streamlining how life sciences companies produce content and engage with stakeholders, augmenting human creativity and ensuring faster turnaround. It holds promise to maintain high-quality, compliant interactions at scale, whether those interactions are promotional, educational, or service-oriented.
- Personalized Medicine & Data Analysis: Another emerging use case is in analyzing complex datasets for personalized medicine. For example, AI can help parse genetic information or electronic health record data to identify patients who might benefit from a therapy (this ties into the trial recruitment use case). The ZoomRx survey found personalized medicine to be a popular AI application area after drug discovery ([10]). Generative AI could assist in writing genomic reports or explaining to clinicians the rationale for a targeted therapy. Companies are also interested in using ChatGPT’s data analysis modes: ChatGPT Enterprise offers advanced data analysis (formerly Code Interpreter), which Moderna used in its Dose ID tool ([13]). This allows for making sense of large clinical or real-world datasets, identifying patterns and visualizing results, which can support both R&D and commercial analytics. In essence, wherever there is data and a need to derive narrative or insight from it, ChatGPT can help translate numbers into natural language. This supports teams in epidemiology, health economics and outcomes research (HEOR), and other data-heavy disciplines in pharma.
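As flagged in the regulatory affairs item above, the common thread across these documentation use cases is a hard human-review gate: AI produces a first draft, and nothing is released until a qualified reviewer signs off. The small workflow sketch below illustrates that pattern only; the statuses, fields, and document types are invented, not any company's actual system.

```python
# Illustrative draft-then-review gate for AI-generated regulated documents.
# Statuses and fields are hypothetical; no draft is usable until a human approves it.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DraftDocument:
    doc_type: str                              # e.g. "CSR safety section", "PV case narrative"
    ai_draft: str
    status: str = "ai_draft"                   # ai_draft -> under_review -> approved / rejected
    reviewer_notes: List[str] = field(default_factory=list)

    def submit_for_review(self) -> None:
        self.status = "under_review"

    def review(self, approved: bool, reviewer: str, note: str) -> None:
        self.reviewer_notes.append(f"{reviewer}: {note}")
        self.status = "approved" if approved else "rejected"

doc = DraftDocument(doc_type="CSR safety section", ai_draft="<LLM-generated first draft>")
doc.submit_for_review()
doc.review(approved=False, reviewer="medical_writer_01", note="Re-verify adverse event table citations.")
print(doc.status, doc.reviewer_notes)
```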
Each of these use cases is accompanied by specific challenges – ensuring accuracy (no AI hallucinations in critical info), maintaining compliance (e.g. promotional content must stick to approved claims), protecting privacy (especially with patient data), and user trust and adoption. The early results, however, are promising. Many companies report substantial time savings and quality improvements. For instance, Merck saw significant reduction in time spent on drafting documents ([21]), and Pfizer noted faster content cycles and better focus of human reviewers on high-risk items ([40]). As generative AI tools mature (with domain-specific fine-tuning, audit logs, etc.), we can expect even wider uptake in life sciences.
Regulatory Developments: FDA and EMA AI Guidance (2025-2026)
As life sciences companies accelerate AI adoption, regulators have responded with new frameworks to ensure safe and effective use. On January 6, 2025, the U.S. FDA published its first comprehensive draft guidance titled "Considerations for the Use of Artificial Intelligence to Support Regulatory Decision Making for Drug and Biological Products" ([83]). This guidance provides a risk-based credibility assessment framework for evaluating AI models used in nonclinical, clinical, postmarketing, and manufacturing phases.
The FDA's seven-step framework emphasizes contextual risk evaluation, beginning with defining the fundamental question an AI model aims to address and establishing its specific context of use – whether predicting drug efficacy, optimizing manufacturing processes, assessing pharmacokinetic profiles, or supporting regulatory decision-making on drug quality and safety. Notably, the guidance does not cover AI use in drug discovery or for operational efficiencies that don't impact patient safety or drug quality.
In January 2026, the FDA and European Medicines Agency (EMA) jointly released "Guiding Principles of Good AI Practice in Drug Development," recommending ten key principles that lay the foundation for developing good practice when using AI ([84]). This transatlantic collaboration signals a harmonized approach to AI regulation in pharmaceuticals, which will help global companies implement consistent standards across markets.
The regulatory landscape is expected to continue evolving, with the EU's AI Act (classifying many healthcare AI tools as "high risk" requiring transparency, robustness, and oversight) expected to come into full force around 2026. For pharma companies, staying ahead of these regulations while maintaining innovation velocity will be critical.
Conclusion
Generative AI tools like ChatGPT are rapidly transforming the operational landscape for pharmaceutical, biotech, and diagnostics companies. In the U.S. and globally, life sciences organizations are moving past the hype and into practical deployments that improve how new drugs are discovered, developed, and delivered. We have identified several leading companies – from Moderna's enterprise-wide 750+ GPTs to Pfizer's marketing platform Charlie, Merck's GPTeal (now serving 50,000+ employees), Lilly's TuneLab and NVIDIA partnership (with $1 billion investment), and Sanofi's Muse tool for clinical trial recruitment – that openly attest to the value of ChatGPT in their workflows. These pioneers demonstrate that, when implemented thoughtfully, ChatGPT can boost productivity, enhance decision-making, and reduce cycle times across diverse functions like research, clinical trials, regulatory compliance, and customer engagement. Johnson & Johnson alone has generated nearly $500 million in measurable business value from AI implementation across 900 generative AI projects.
That said, the life sciences industry is approaching ChatGPT with justified caution. Common themes for success include establishing proper data safeguards (as seen in internal platforms like GPTeal and AZ ChatGPT), investing in employee training and AI literacy (J&J's 47,000 trained employees and AstraZeneca's 12,000 genAI certifications), and starting with "safe" applications (non-public or non-critical tasks) to build confidence. Regulatory compliance and patient safety remain the north stars – any AI-generated content is carefully reviewed by experts, and 80% of pharma companies now have dedicated AI governance structures in place. The survey data and early case studies indicate that concerns about data privacy have led many firms to initially restrict ChatGPT, but this is rapidly easing as enterprise-grade solutions and best practices emerge.
For IT professionals in pharma, the examples in this report offer valuable lessons. Key takeaways are: integrate ChatGPT into secure internal systems or sandboxes before scaling up; focus on high-impact use cases like research knowledge management or document drafting where AI can save significant time (Merck's CSR platform reduced drafting time from 2-3 weeks to 3-4 days); involve cross-functional teams (IT, legal, compliance, business units) to set guidelines; and upskill your workforce to confidently use these new tools. It's also important to measure outcomes – J&J's discovery that 10-15% of use cases drive 80% of value highlights the importance of prioritization.
Looking to 2026 and Beyond: Industry leaders like Mirit Eldor of Elsevier have declared 2026 the true "year of the agent" – while agentic AI was discussed heavily in 2025, it is only now making a measurable difference in R&D processes ([6]). The market for AI in pharmaceuticals is projected to grow from $1.94B in 2025 to approximately $16.49B by 2034. AI has been used to develop or repurpose more than 3,000 drugs as of late 2024, with 200+ AI-enabled drugs in development and 15-20 expected to enter pivotal trials in 2026 ([85]).
In an industry defined by innovation, generative AI has become a competitive differentiator. Companies that harness ChatGPT and similar models effectively are achieving faster R&D pipelines, better engagement with healthcare providers, and more agile operations, ultimately bringing medicines to patients sooner and at lower cost. The U.S. life sciences market, with its large scale and investment in digital, is at the forefront of this generative AI wave. The transformation is well underway – and those who responsibly leverage ChatGPT's capabilities are leading in the new era of AI-powered life sciences.
References: The information in this report is drawn from public sources including company press releases, interviews, and reputable media coverage. Key sources have been cited throughout, including: official statements from Moderna ([34]), Pfizer's marketing AI report ([86]), insights on J&J, Merck, Lilly ([43]), Sanofi and Muse ([70]), Eli Lilly TuneLab ([60]), the NVIDIA-Lilly partnership ([61]), Merck's CSR platform ([54]), and FDA regulatory guidance ([83]). These citations provide direct evidence of each company's use of ChatGPT or generative AI as discussed. The reader is encouraged to explore those sources for further details on specific implementations.
DISCLAIMER
The information contained in this document is provided for educational and informational purposes only. We make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability, or availability of the information contained herein. Any reliance you place on such information is strictly at your own risk. In no event will IntuitionLabs.ai or its representatives be liable for any loss or damage including without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from the use of information presented in this document. This document may contain content generated with the assistance of artificial intelligence technologies. AI-generated content may contain errors, omissions, or inaccuracies. Readers are advised to independently verify any critical information before acting upon it. All product names, logos, brands, trademarks, and registered trademarks mentioned in this document are the property of their respective owners. All company, product, and service names used in this document are for identification purposes only. Use of these names, logos, trademarks, and brands does not imply endorsement by the respective trademark holders. IntuitionLabs.ai is an AI software development company specializing in helping life-science companies implement and leverage artificial intelligence solutions. Founded in 2023 by Adrien Laurent and based in San Jose, California. This document does not constitute professional or legal advice. For specific guidance related to your business needs, please consult with appropriate qualified professionals.