Regulatory-Grade RWE Platforms for FDA & EMA Submissions

Executive Summary
The use of Real-World Evidence (RWE) – derived from routinely collected Real-World Data (RWD) such as electronic health records (EHRs), insurance claims, patient registries, and mobile health data – has surged as a complement and, in some cases, an alternative to conventional randomized controlled trials (RCTs) in drug development and regulatory decision-making. In both the United States and Europe, regulatory agencies (FDA and EMA) have formally embraced RWE under legislative and policy initiatives (e.g. the U.S. 21st Century Cures Act and EMA’s Big Data strategy) and launched dedicated programs to build “regulatory-grade” RWE platforms and standards ([1]) ([2]).
This report provides a comprehensive analysis of real-world evidence platforms tailored for the pharmaceutical industry to generate RWE of sufficient quality and rigor for FDA and EMA submissions. We review the historical context and drivers (including statutory mandates and technological advances), survey key global platforms and data networks (government, academic, and commercial), and examine case studies of RWE successfully incorporated into regulatory filings. Technical aspects are explored in depth: data sources, common data models (e.g. OMOP), study designs (cohort studies, external control arms), analytical methods to control bias (propensity scores, target-trial emulations), and informatics tools. We analyze evidentiary standards for “regulatory-grade” RWE – including data quality, transparency, and adherence to good pharmacoepidemiology practices – and discuss how platforms and processes are evolving to meet these requirements. Finally, we discuss implications for stakeholders and the future of RWE, including the integration of AI and global convergence of regulatory frameworks.
Throughout, we draw on the latest research findings and regulatory publications. For instance, recent analyses show that 23–27% of FDA drug-labeling expansions (2022–2024) included RWE, mostly from retrospective EHR cohort studies ([3]), and EMA networks like DARWIN EU now cover ~180 million European patients ([4]). We also summarize concrete regulatory cases: Voxzogo (achondroplasia) was approved with an external RWD control arm from a patient registry ([5]), and Nulibry (fosdenopterin, for molybdenum cofactor deficiency) used RWD for both treatment and control arms ([6]). By synthesizing multiple perspectives – regulatory, industry, methodological, and “on-the-ground” examples – this report offers a deep, practical guide for pharma companies and data providers building and using RWE platforms aimed at FDA/EMA submissions.
Introduction
Background: RWD and RWE in Healthcare
Real-World Data (RWD) are broadly defined as data relating to patient health status and/or the delivery of healthcare, collected in routine clinical practice or other non-experimental settings ([7]). Common sources include electronic health records (EHRs), insurance claims databases, patient registries, disease registries, lab results, and even mobile health/wearable device data. These data contrast with the controlled data of RCTs: they may be less structured, observational, and subject to missingness and confounding. The FDA’s framework defines RWD as data “routinely collected” from a variety of sources ([7]).
When RWD are analyzed to generate clinical insights about a medical product – its use, benefits, or risks – the results are termed Real-World Evidence (RWE) ([7]). RWE can inform many questions: population-level effectiveness, safety surveillance, comparative outcomes, and even serve as external controls or supplementary evidence in support of product approvals. For example, national death registries linked to clinical trials have been used to derive RWD endpoints (e.g. mortality) ([8]).
Historically, regulators have long used RWD behind the scenes (e.g. FDA’s Sentinel Initiative began as a claims-monitoring system in 2008 ([9])). Over the last decade, however, interest in purposefully leveraging RWD/RWE for regulatory submissions has intensified. Legislative acts (like the US 21st Century Cures Act, 2016) explicitly directed agencies to evaluate RWE for new indication approvals and post-approval studies ([10]) ([1]). Key industry drivers include the high cost and time of RCTs (RCTs often consume >50% of a drug’s development budget ([11])), the need for evidence in patient subgroups or rare diseases where RCTs may be infeasible, and technological advances making large-scale data analysis possible. A 2017 Deloitte survey found over half of life sciences companies actively building RWE capabilities ([12]). RWE can accelerate product development by providing faster, more diverse evidence and by enabling observational analyses designed to mimic trials (so-called “target trial emulation” ([13])).
Figure 1 below contrasts key features of RWD/RWE versus RCT data. Importantly, neither is a universal replacement for the other; instead, they are increasingly seen as complementary. RCTs remain the gold standard for causal efficacy under controlled conditions, while RWE offers external generalizability and a broader evidence base, especially for safety and rare-event detection ([14]) ([15]).
Figure 1. Characteristics of RCT vs. RWD/RWE. While RCTs provide high internal validity via randomization and controlled protocols, their narrow inclusion criteria can limit generalizability. RWD offers large, heterogeneous real-world populations (including often underrepresented groups) ([16]) ([17]) but requires rigorous design and adjustments (e.g. propensity scoring) to mitigate biases. Regulatory-grade RWE must therefore ensure data quality and study rigor comparable to RCT standards ([18]) ([19]).
Definitions and Scope
For clarity, in this report we adopt standard definitions from the FDA and regulatory literature: RWD refers to routinely-collected health data (EHR, claims, labs, registries, etc.), and RWE refers to evidence obtained by analyzing RWD ([7]). (Some reports use “Real-World Evidence” and “Real-World Data” interchangeably, but we distinguish them.) We focus primarily on pharmaceutical product development (drugs and biologics), though many principles apply to devices and diagnostics as well. “Regulatory-grade RWE” in our context implies evidence built under rigorous protocols, with attention to data provenance and analytic transparency, such that it meets FDA/EMA evidentiary expectations.
This report covers multiple aspects: historical perspective; current regulatory frameworks (FDA, EMA, etc.); technical infrastructure (data platforms, models, tools); methodological standards; and case studies of RWE in action. Each claim or fact is documented with peer-reviewed or regulatory sources. We include tables summarizing representative platforms and regulatory cases, and adopt an academic tone suitable for industry professionals, regulators, and policymakers.
Regulatory Landscape for RWE
U.S. FDA Initiatives
Legislative and Policy Foundations
The U.S. Food and Drug Administration has long used RWD in safety surveillance (post-market) and to inform certain decisions ([20]) ([9]). However, a seminal moment was the 21st Century Cures Act of 2016, which mandated that FDA establish an RWE program ([10]). The law explicitly directed FDA to evaluate whether real-world evidence could support new indications for approved drugs or satisfy post-approval study requirements, noting that evidentiary standards should remain “substantial evidence” ([10]). In response, the FDA in Dec 2018 released the first framework document for its RWE program ([10]), outlining demonstration projects, guidance development, and stakeholder engagement.
The FDA further formalized its approach in guidance and regulations. Notably, in August 2023 the FDA issued a final Guidance for Industry on Real-World Data and RWE ([21]), which updates prior draft guidance. This document emphasizes key considerations: pre-specification of study protocols, data quality and fitness-for-purpose, transparency of methods, and alignment with well-controlled study principles. Other FDA guidances have addressed related aspects, such as data standards for submissions containing RWD ([2]) and, in 2024, the suitability of specific data types (EHRs, claims) for regulatory use ([2]).
FDA Frameworks and Programs
The FDA’s RWE Program Guidance Document (2018) laid out definitions and principles: RWD are “data relating to patient health status and/or the delivery of health care” from various sources, and RWE is clinical evidence derived from its analysis ([22]). Thus, common sources include EHRs, claims, lab results, patient registries, and even digital device data ([7]). The framework emphasized that existing regulatory standards (e.g. substantial evidence of effectiveness) still apply to RWE because such studies must be “adequate and well-controlled” if used to establish efficacy ([18]).
FDA has also invested directly in data platforms. The FDA Sentinel Initiative (established 2008) is a distributed data network originally built for active safety surveillance. Sentinel’s RWE Data Enterprise (RWE-DE) project (2024) is expanding this by linking EHR to claims for at least 10 million patients, aiming to create a “query-ready, quality-checked” RWD network for regulatory use ([23]). Similarly, the FDA-Catalyst program (an outgrowth of Sentinel) is exploring how to conduct pragmatic clinical studies in health systems. The ACTT program (COVID-19 Therapeutics Accelerator) and other FDA-sponsored demonstration projects have used EHR networks for active research.
In late 2024, FDA’s Center for Drug Evaluation and Research (CDER) announced a Real-World Evidence Innovation Center ([2]). This new center (announced Dec 13, 2024) is charged with coordinating RWD initiatives, advanced analytics, and partnerships to scale regulatory use of RWE ([2]). FDA is also modernizing internally, adopting data standards (FDA’s Data Standards Catalog now includes RWD elements) and guidelines such as ICH E9(R1) (“estimands” in trials, applicable to RWE designs).
FDA officials have publicly addressed misconceptions and encouraged high-quality RWE. For example, an FDA senior leader clarified that FDA expects prespecified protocols, robust design, and understanding of biases whenever RWD is used in submissions. The FDA has published dozens of examples on its website (“FDA use of RWE in regulatory decision-making” ([20])) to illustrate how RWE has been applied (approvals, label changes, communications). Cases range from single-arm trials with external controls, to RWD supplementing RCTs, to stand-alone RWD studies serving as confirmatory evidence (see Table 1 below).
Illustrative U.S. RWE Use Cases
Real instances demonstrate FDA’s acceptance of RWE under strict conditions. For example, in Feb 2021 the drug Nulibry (fosdenopterin) for a rare metabolic disease (molybdenum cofactor deficiency) was approved on the basis of a pivotal analysis in which RWD supplied both arms: patients in an expanded-access program served as the treatment cohort, while an external natural history registry provided the control arm ([6]). The FDA review explicitly noted that the RWD (medical records from 15 countries) furnished the primary endpoint (overall survival), and the combined evidence was deemed “adequate and well-controlled” ([6]).
Another instance is Voxzogo (vosoritide) for growth in achondroplasia. FDA approved it in 2021 using patient-level data from the Achondroplasia Natural History (AchNH) registry as an external control for a single-arm open-label trial ([5]). The rationale was that, since growth would not occur without active treatment, comparing treated patients to registry growth data could estimate efficacy. The FDA termed this “confirmatory evidence.”
Pediatric uses have also leveraged RWE. In 2023, Vimpat (lacosamide) was approved for younger children using safety/efficacy extrapolation: the sponsor conducted a retrospective cohort study in the PEDSnet EHR database to provide needed safety data for a new regimen ([24]). Similarly, for Prolia (denosumab), an FDA safety review of pediatric kidney disease patients relied on a Medicare claims retrospective study identifying a hypocalcemia risk, leading to a boxed warning change ([25]). In all these examples, RWD sources (clinics, registries, claims) were subjected to rigorous analysis, and the FDA review team explicitly described the role of the RWE (e.g. as confirmatory or safety evidence) ([6]) ([25]).
Table 1 (below) summarizes select case studies illustrating how RWD have been integrated into US regulatory submissions.
EMA and European Regulatory Initiatives
EMA’s Big Data Strategy and DARWIN
The European Medicines Agency (EMA) and the broader European regulatory network have vigorously pursued RWE adoption. Early on, a Joint HMA-EMA Big Data Task Force (2017–2019) issued recommendations on using “big data” (RWD) for regulation. In 2020, the EMA published its own Regulatory Science Strategy 2025 which emphasized fostering RWD use and data literacy. Crucially, EMA established DARWIN EU (Data Analysis and Real World Interrogation Network) – a public infrastructure linking distributed RWD sources across Europe. DARWIN (fully operational from 2023) coordinates queries to national healthcare databases, registries, and other sources to generate RWE for EMA committees ([4]). As of mid-2025, DARWIN EU had expanded from 20 to 30 data partners, reaching about 180 million patients across 16 European countries ([4]). This network makes available federated analyses on safety events, effectiveness, epidemiology, etc., at the request of regulators or public health bodies.
EMA also maintains a catalog of RWD sources and studies to enhance transparency. A July 2025 ENCePP news highlighted that 59 new multi-database RWD studies were conducted for regulatory use (Feb 2024–Feb 2025) – a 47% increase from the previous year ([4]). These studies addressed risk topics (e.g. vaccines, rare ADME interactions) and underscored regulators’ proactive engagement in RWE generation. The EMA’s Big Data Steering Group (replacing the earlier task force) regularly convenes stakeholders to advance data standards and methods across the EU ([26]).
EMA Guidelines and Frameworks
While the EMA does not have a single omnibus “RWE guidance” like the FDA’s, it has issued several related documents. There are reflection papers and draft guidelines on aspects of RWD use: for example, a draft guideline on registry data for disease treatment, and plans for guidance on RWE approaches. EMA’s public information pages note that “scientific guidelines on real-world evidence aim to support use of RWD in regulatory decision-making” ([27]). For instance, EMA (and its Committee for Medicinal Products for Human Use) has acknowledged that RWD can support both pre-authorisation and post-authorisation assessments (e.g. for extensions or safety monitoring) ([27]). However, the agency stresses that more experience and methods work are needed, and urges pharmaceutical sponsors to engage on study design.
Importantly, in 2020 the EMA-EUnetHTA OPTIMAL framework (a European Health Technology Assessment report) investigated how RWE can underpin regulatory and HTA decisions. The EMA highlighted examples such as oncology treatments with external controls. Today, EMA tends to evaluate RWE on a case-by-case basis. For example, any RWE submitted in a Marketing Authorisation Application (MAA) or line extension must adhere to standards akin to ICH guidance on study design (as the FDA does). However, with the new European Medicines Regulatory Network strategy to 2028, EMA commits to further integrating RWE via DARWIN, methodological guidance, and collaboration with Health Technology Assessment bodies ([4]).
European Case Studies
Publicly available EMA examples of RWE use (beyond the DARWIN-generated studies) are fewer. However, some EU approvals have cited RWE. For instance, the European Public Assessment Report (EPAR) for Cosentyx (secukinumab) in pediatric psoriasis (2021) notes that evidence from a pediatric registry supported approval. The EMA’s annual drug safety reports mention when patient chart reviews or registries complement trial data (e.g. in rare disease expansions). One analysis of EMA-authorised oncology drugs (2018–2022) noted that RWE was rarely the primary basis for efficacy decisions, but more frequently appeared in post-approval assessments or to address broader patient applicability ([28]).
The role of RWE in EMA decisions continues to evolve. On July 1, 2025, EMA’s dashboard reported a 47% year-over-year increase in internal RWD studies, thanks in part to DARWIN’s expanded reach ([4]). Notable studies included assessments of rare safety signals (e.g. GLP-1 receptor agonists and suicidality) and disease epidemiology (mpox vaccine effectiveness, RSV trends) ([4]). These examples show RWE informing public-health policy and regulatory monitoring across Member States.
Global and Multilateral Trends
Other regulators are also engaging with RWE. Japan’s PMDA and Health Canada have piloted RWE frameworks for supplementary approvals. Notably, an October 2023 article reported that FDA, EMA, PMDA (Japan), and China’s NMPA have converged on core principles like data quality and interoperability ([29]). The IMI (Innovative Medicines Initiative) in Europe supports consortia (e.g. EHDEN) to harmonize RWD via the OMOP common model, anticipating linkage to DARWIN. The OECD and ICH (International Council for Harmonisation) have started dialogues on RWE guidelines.
In summary, regulatory bodies increasingly accept that fit-for-purpose RWE – derived from well-characterized platforms and networks – can play a role in approvals and safety. Both the FDA and EMA emphasize that rigorous study design remains paramount: RWE must align with good science (adequate controls, defined endpoints, minimized bias) ([30]) ([19]). What has changed is the ecosystem: billions of health records and analytical tools are now mobilized under regulatory-directed initiatives to produce the “regulatory-grade” evidence needed for decision-making.
Real-World Data Platforms and Networks
To generate regulatory-grade RWE, organizations rely on specialized platforms and networks that aggregate, standardize, and analyze RWD. These platforms span government-run surveillance systems, academic research networks, and commercial data providers. We categorize them as follows:
- Government/Regulatory Networks: Initiated or overseen by public agencies to support monitoring and research.
- Academic Collaboratives: Research consortia built around shared data models or federated queries.
- Commercial Data Platforms: Industry solutions providing access to curated RWD (and analytics software).
- Integrated Health Systems: Large hospital/EHR networks participating in RWE studies or direct partnerships.
Below we discuss selected examples of each category, focusing on their data scope and relevance for regulatory scenarios.
Government and Regulatory Data Networks
FDA Sentinel: The FDA’s Sentinel System is a flagship RWE infrastructure. Initially launched to detect drug safety signals, Sentinel now encompasses automated queries of claims and EHR data from multiple partners (health plans, HMOs). By 2020, Sentinel covered data on over 100 million Americans. The new Sentinel RWE Data Enterprise (RWE-DE) project aims to enhance Sentinel with fully de-identified EHR data linked to claims for ~10 million patients ([23]). Sentinel has been directly used for FDA assessments: e.g. retrospective cohort analyses on drug safety in children ([31]). Its distributed model (data partners run local analyses per query protocols) preserves privacy while enabling regulatory studies. Sentinel’s analytic toolkit (AQUA, common data model, statistical code libraries) is continuously developed by FDA’s Sentinel Innovation Center.
DARWIN EU: As noted, DARWIN EU (Data Analysis and Real World Interrogation Network) is EMA’s flagship network for evidence on performance and safety of medicines in Europe. It federates queries across national/regional databases, effectively acting as a pharmacovigilance and outcomes research platform. Participating sources include EHRs, hospital records, claims, disease registries across EU member states. Through a central coordination center, DARWIN can issue rapid studies (e.g., emergent vaccine safety) or planned analyses. Its expansion to 30 partners now reaches ~180M European patients ([4]), making it a uniquely large RWD resource for regulators.
Other National Systems: Many countries maintain health-data networks. For example, the U.S. PCORnet (Patient-Centered Outcomes Research Network) is funded by PCORI and AHRQ: it connects ~47 million US patients from diverse healthcare networks ([32]). PCORnet uses a Common Data Model and supports comparative effectiveness and safety studies. European health services like the UK’s CPRD (primary care database) or Nordic national registries are often tapped through collaborative studies. While not FDA/EMA-run, pharmaceutical sponsors frequently engage with these networks under regulated study protocols.
Academic and Consortium Platforms
OHDSI: The Observational Health Data Sciences and Informatics (OHDSI) initiative is an open-science global community that uses the OMOP Common Data Model to harmonize health databases ([13]). OHDSI’s federated approach enables investigators to run standardized queries on many databases (hospitals, claims, registries worldwide). Collectively, OHDSI sites cover hundreds of millions of patient records (estimates vary, but typically >500M globally). For example, OHDSI conducted multi-continental studies on COVID-19 therapies. OHDSI’s relevance lies in its open tools and validation efforts, which align with regulatory needs for transparent, reproducible RWE (e.g., OHDSI released comparative effectiveness analyses replicating RCT results).
EHDEN: The European Health Data & Evidence Network (EHDEN) is a large IMI-funded project (now a foundation) to convert disparate European health data into the OMOP model. It aims to federate up to 35 countries’ data for research. EHDEN underpins DARWIN by enabling data partners to participate (via common modeling). While EHDEN itself is not a ‘platform’ with an interface, it has built the network of harmonized data sites and created tooling for federated queries.
PCORnet: As noted, PCORnet is a U.S. network of Clinical Research Networks (hospitals, clinics) and Patient-Powered Research Networks, all using a shared CDM. It has broad longitudinal data on patients, and supports CER studies. For RWE platform purposes, PCORnet’s strength is its diversity (multiple health systems, claims/EHR linkage) and its existing governance (HIPAA, IRB) already in place for research, making it attractive for corporate collaborations.
Commercial RWE Platforms
The private sector offers numerous RWE platforms combining data curation, analytic tools, and consulting. Key examples include:
- TriNetX: A commercial global federated health research network. It links de-identified EHR data from hundreds of health organizations in >20 countries ([33]). As of Dec 2025, TriNetX encompassed ~200 million patients across 170 institutions (clinics, hospitals) ([33]). The platform provides real-time query tools (via a web interface) to design cohorts, perform propensity matching, and run outcomes analyses. TriNetX emphasizes its ability to replicate RCT findings in “real-world settings” and supports external control arm construction. (For example, TriNetX published a case in which its platform validated outcomes of a cardiovascular RCT ([9]).) The platform’s data are updated continually, making it amenable to exploratory analysis and feasibility assessments.
- Aetion Evidence Platform: Aetion offers an analytics platform focused on automating observational research. It can link claims, EHR, registry and other data sources. Notably, in 2021 FDA selected Aetion (in partnership with a commercial data vendor) to develop an RWE framework for COVID-19 treatments ([34]), reflecting agency endorsement. Aetion’s tools enforce study protocols and quality checks, aiming to meet “regulatory science” needs. The platform has been used by pharma companies for submission-grade studies. (Full details of patient counts are proprietary, but its business model often involves data contracts with insurers and networks.)
- Flatiron Health: Acquired by Roche, Flatiron maintains a large US oncology database tied to EHRs from cancer clinics. Flatiron’s network covers well over 3 million cancer patients (as of 2023) with detailed clinical notes. It provides analytics for oncology drug development and post-market studies. Its data have been used in research on treatments like immunotherapies and to support evidence for label expansions. (For example, Roche’s TECENTRIQ lung cancer label was influenced by a Flatiron/Penn external-control study.)
- IQVIA and SAS: These life-science service firms offer integrated RWD solutions. IQVIA curates worldwide EHR and claims datasets (e.g., commercially-insured US claims, Medicaid/Medicare, Asian data, etc.) amounting to billions of records. It performs epidemiology analyses and can support submissions. SAS (analytics software) offers data repositories and tooling for pharmacoepidemiology. These vendors often work behind the scenes on specific projects rather than in an open-platform style.
- Others: Medidata/Rave (Dassault) provides RWD connectors to its clinical trial platform, Syapse focuses on oncology and rare disease registries, Observis holds Austrian health data in OMOP, etc. Newer entrants (e.g. Clarivate with TriNetX collaboration) continue to emerge.
Table 2 below summarizes key RWE data platforms and networks that are pertinent to regulatory RWE generation.
| Platform / Network | Type | Data Sources | Geographic Scope | Approx. Size / Notes |
|---|---|---|---|---|
| TriNetX | Commercial | De-identified EHRs (clinical records) | Global (>20 countries) | ~200 million patients (170 HCOs) ([33]) |
| DARWIN EU | Regulatory (EMA) | National EHRs, claims, registries | Europe (16 countries) | ~180 million patients (30+ data partners) ([4]) |
| FDA Sentinel | Regulatory | Claims data (Medicare, private) | U.S. | 100+ million U.S. lives (distributed network) |
| PCORnet | Academic/Consortia | EHRs + limited claims | U.S. (network) | ~47 million patients across 70 sites ([32]) |
| OHDSI/OMOP | Academic (open) | EHRs, claims, registries (mapped) | Global (numerous DBs) | Collective size many hundreds of millions (distributed) |
| Aetion Evidence Platform | Commercial | Claims/EHR/registries (via providers) | U.S., global datasets* | Used by FDA for COVID-19 RWE demo ([34]); scale depends on linked contracts |
| Flatiron Health | Commercial (Roche) | Onco EHR data, registries | U.S. (specialty clinics) | Millions of cancer patients |
| IQVIA | Commercial | Claims, EHR, Rx, lab data | Global | Billions of records (market claims/EHR) |
| EHDEN | Academic/Consortia | OMOP-mapped EHR/claims (federated) | Europe | Hundreds of data partners (OMOP conversion under way) |
| Local health networks | Mixed | Hospital/EHR system data | Varies (e.g. national) | e.g. NHS Digital (UK), Kaiser Permanente (US/Puerto Rico) |
*Notes: Aetion partners with data providers (e.g. Ciox, claims vendors). The platform itself is FDA-validated software ([34]). Flatiron’s data reflect U.S. oncology practices, while IQVIA and others license regional datasets.
This table is illustrative, not exhaustive. Each platform offers different capabilities: for example, federated networks (TriNetX, PCORnet) allow rapid cohort discovery and real-time estimates; open-science consortia (OHDSI) allow complex multi-database analytics; deep specialty databases (Flatiron) provide rich phenotyping. Platform selection depends on study needs (e.g. rare diseases vs broad outcome surveillance) and on data accessibility (licensing, location).
Data Standards and Technology
Underlying many platforms is the use of common data models (CDMs) and standardized terminologies. Examples include the OMOP CDM (used by OHDSI, TriNetX, EHDEN) and the PCORnet CDM. These ensure data from diverse sources can be queried with the same codebook (diagnoses coded in ICD/MedDRA, procedures in CPT, LOINC for labs, etc.). FDA’s Sentinel uses its own Sentinel CDM, which is distinct from OMOP. Initiatives also work on FHIR (Fast Healthcare Interoperability Resources) standards for health events, which agencies encourage for submitting EHR data. Data harmonization is vital: without it, cross-database studies (such as those run on DARWIN or OHDSI) would be infeasible.
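To make the harmonization step concrete, the following minimal Python sketch maps source diagnosis codes to standard concepts, as a CDM-style ETL pipeline would. The mapping table and concept IDs are hypothetical placeholders, not real OMOP vocabulary content.

```python
# Illustrative sketch of source-code-to-standard-concept mapping, the core
# operation of CDM-based harmonization. Concept IDs below are invented.

CONCEPT_MAP = {
    "E11.9": {"concept_id": 1001, "name": "Type 2 diabetes mellitus"},
    "I10": {"concept_id": 1002, "name": "Essential hypertension"},
}

def to_standard_concept(source_code):
    """Map a source diagnosis code to a standard concept, flagging unmapped codes."""
    concept = CONCEPT_MAP.get(source_code)
    if concept is None:
        # Retain but flag unmapped codes so downstream quality checks can count them.
        return {"concept_id": 0, "name": "UNMAPPED", "source_code": source_code}
    return {**concept, "source_code": source_code}

source_codes = ["E11.9", "I10", "R99"]  # "R99" has no mapping in this toy table
mapped = [to_standard_concept(c) for c in source_codes]
unmapped_rate = sum(m["concept_id"] == 0 for m in mapped) / len(mapped)
```

Tracking the unmapped rate per source, as in the last line, is one simple way platforms quantify how much source data survives harmonization.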
To achieve “regulatory-grade” RWE, platforms additionally incorporate quality-checking and provenance tracking. For instance, data undergo cleaning (e.g. mapping free-text to codes) and validation (flagging improbable values). Platforms typically remove direct patient identifiers and apply governance controls. Analysis tools may include causal inference libraries (marginal structural models, g-methods) and enable target-trial emulation – explicitly framing an RWE study as if it were a hypothetical RCT, which many methodologists recommend ([13]). Advanced platforms are starting to integrate AI: for example, natural language processing on clinical notes, or machine learning to predict missing values. Such AI is used to extract phenotypes and endpoints (e.g. identifying cancer stage from records). However, regulators have stressed the need for transparency and validation of AI methods in RWE contexts.
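As an illustration of the causal-adjustment methods these libraries implement, the sketch below computes inverse-probability-of-treatment-weighted (IPTW) outcome means. The propensity scores are illustrative values; in practice they come from a model fitted on baseline covariates.

```python
# Toy sketch of inverse probability of treatment weighting (IPTW).
# Data and propensity scores are invented for illustration only.

patients = [
    # (treated?, binary outcome, propensity score = P(treatment | covariates))
    (1, 1.0, 0.8), (1, 0.0, 0.6), (1, 1.0, 0.7),
    (0, 0.0, 0.3), (0, 1.0, 0.4), (0, 0.0, 0.2),
]

def iptw_outcome_means(data):
    """Return weighted outcome means for the treated and control groups."""
    t_w = t_y = c_w = c_y = 0.0
    for treated, outcome, ps in data:
        # Treated patients get weight 1/ps; controls get weight 1/(1-ps).
        weight = 1.0 / ps if treated else 1.0 / (1.0 - ps)
        if treated:
            t_w += weight
            t_y += weight * outcome
        else:
            c_w += weight
            c_y += weight * outcome
    return t_y / t_w, c_y / c_w

treated_mean, control_mean = iptw_outcome_means(patients)
risk_difference = treated_mean - control_mean
```

The weighted risk difference is the quantity a marginal-structural-model analysis would report; a real study would also truncate extreme weights and check covariate balance.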
Overall, RWD platforms provide the infrastructure – from data ingestion to analysis pipelines – that allows industry sponsors to generate submission-ready evidence. In the case of on-demand networks like TriNetX, investigators can conduct feasibility queries instantly (e.g. what percent of diabetic patients meet trial criteria?). More complex studies (e.g. a 12-month matched cohort analysis) typically require a project within a platform that follows pre-agreed protocols, often with third-party audit trails. Building and validating such platforms is an ongoing effort: for instance, FDA and partners continue to test synthetic control arms to ensure methods yield unbiased results.
Building Regulatory-Grade RWE
“Regulatory-grade” implies an evidence standard suitable for FDA/EMA evaluation. In practice, this means RWE must be generated under principles akin to those for RCTs, where applicable, and follow good research practices. Here we detail the components: data quality, study design, analysis, and governance.
Data Quality and Fitness-for-Purpose
The cornerstone of reliability is data quality. RWD sources vary widely in completeness and accuracy. For example, claims data reliably capture diagnoses related to billing, but lack detailed clinical labs; EHRs include rich clinical detail but may have heterogeneous coding and missingness. Registries may systematically collect certain outcomes but lack denominators. Each source has biases (e.g. claims include only insured patients). Regulatory agencies expect sponsors to rigorously assess fitness-for-purpose: the data must have relevant captured variables and sufficient follow-up to answer the question.
Best practices include data cleaning (e.g. standardizing units, resolving duplicates), curation to a CDM, and documentation of data provenance (when and how data were collected). Platforms may implement data quality frameworks (DQA) to check for completeness of key variables (e.g. age, gender, diagnosis codes), timeliness, and consistency. In sentinel-like networks, each data partner runs data checks and publishes results. For regulatory submissions, sponsors often include a data management plan describing these checks.
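A minimal completeness check of the kind such DQA frameworks run might look like the following sketch; the field names and the 95% threshold are illustrative assumptions, not a standard.

```python
# Sketch of a simple data-quality assessment (DQA) completeness check.
# REQUIRED_FIELDS and the threshold are hypothetical choices for illustration.

REQUIRED_FIELDS = ["age", "sex", "diagnosis_code", "index_date"]
COMPLETENESS_THRESHOLD = 0.95  # flag any field below 95% completeness

def completeness_report(records):
    """Return per-field completeness and the list of fields failing the threshold."""
    n = len(records)
    completeness = {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / n
        for f in REQUIRED_FIELDS
    }
    failing = [f for f, c in completeness.items() if c < COMPLETENESS_THRESHOLD]
    return completeness, failing

records = [
    {"age": 63, "sex": "F", "diagnosis_code": "E11.9", "index_date": "2021-04-01"},
    {"age": 70, "sex": "M", "diagnosis_code": None, "index_date": "2020-11-15"},
]
report, failing = completeness_report(records)  # diagnosis_code is 50% complete
```

In a distributed network, each data partner would run checks like this locally and publish the resulting report alongside the analysis.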
Metadata and lineage tracking are critical. EHR data often require extracting information from warehoused clinical systems. Linkages (e.g. connecting EHR and claims) must be validated (via patient IDs or probabilistic approaches) to ensure patients are not double-counted or mismatched. Missing data strategies must be pre-specified – for example, if 30% of lab values are missing, how will that affect analyses? FDA guidance notes that RWD studies should transparently report data limitations and conduct sensitivity analyses (e.g. best/worst case scenarios) ([30]).
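The best/worst-case bounding mentioned above can be sketched for a binary endpoint as follows; the data are invented, and which bound counts as "best" depends on the estimand's direction.

```python
# Sketch of best-/worst-case bounding for missing binary outcomes,
# a simple sensitivity analysis. Outcome data are illustrative.

outcomes = [1, 0, 1, None, 0, None, 1, 0]  # None = missing outcome

def bound_event_rate(outcomes):
    """Bound the event rate by imputing all missing values as 0 or as 1."""
    n = len(outcomes)
    events = sum(o for o in outcomes if o is not None)
    missing = sum(1 for o in outcomes if o is None)
    lower = events / n             # all missing treated as non-events
    upper = (events + missing) / n  # all missing treated as events
    return lower, upper

low, high = bound_event_rate(outcomes)  # 3/8 = 0.375 to 5/8 = 0.625
```

If a regulatory conclusion holds across the whole interval, the result is robust to the missingness; if not, the missing data materially affect the finding.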
Privacy is also a consideration. Regulatory-grade platforms typically use de-identified data or federated queries where patient-level identifiers never leave the data custodian. HIPAA and GDPR compliance are mandatory, often requiring data use agreements.
Study Design and Methodology
Designing an RWE study to support a regulatory claim requires transparency and rigor. Key elements include:
- Pre-specified Protocols: Though not an explicit FDA requirement for observational studies (as it is for RCTs), it is strongly recommended that an observational study be pre-registered and conducted under a written protocol ([35]). The protocol should define objectives, endpoints, inclusion/exclusion criteria, data sources, statistical analysis plans, and sensitivity analyses. FDA guidance and ICH E9(R1) emphasize this for clarity on “estimands”.
- Clear Objectives and Hypotheses: What exact question is the study answering? Is it to demonstrate non-inferiority of a treatment to standard care, or to show an event-rate reduction compared to a historical baseline? The study design must align with the regulatory aim (e.g. new indication vs. safety signal).
- Comparative Study with Control: Regulatory evidence typically demands comparisons. This may be internal (concurrent controls) or external. For RWE, external control arms (using RWD) are common when RCT controls are unavailable or unethical ([13]) ([30]). For example, in rare diseases or oncology, a historical control cohort from an RWD source can serve as a comparator. The ICH E10 guidance cautions that external controls bring confounding and should only be used when effects are large and the disease course is predictable ([30]). Thus, regulatory-grade RWE often involves sophisticated matching or stratification (propensity score matching, inverse probability weighting) to emulate randomization on observed confounders.
- Addressing Confounding and Bias: Unlike an RCT, observational RWD studies have no random allocation, so meticulous attention to confounding factors is essential ([19]) ([30]). Platforms and researchers use advanced methods – propensity scores, multivariable regression, high-dimensional confounder adjustment – to balance comparison groups on baseline risk factors. They may define a target trial and attempt to replicate its design in the observational setting ([13]). Sensitivity analyses (e.g. negative control outcomes, quantitative bias analysis) can demonstrate robustness. Importantly, data dredging must be avoided: outcomes should not be chosen based on data inspection.
- Endpoints and Follow-up: The outcomes chosen must be carefully justified. Often only “hard” endpoints (mortality, hospitalization) are credible in RWD because they are reliably captured. Surrogate or subjective endpoints (pain scores, quality of life) are harder to measure from RWD sources. Regulatory-grade RWE should preferably focus on outcomes clearly recorded in the data. The follow-up window must also match the study objective (e.g. 6-month mortality vs. 30-day readmission).
- Regulatory Benchmarks: The substantial-evidence threshold (21 CFR 314.126) still applies. An RWE study must be as rigorous as “adequate and well-controlled” trials in its essentials: clear group definition, bias control, and systematic outcome measurement ([18]). Sponsors often discuss proposed RWE study designs with regulators ahead of submissions (through pre-IND or protocol-assistance meetings) to align expectations.
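As a toy illustration of inverse-probability weighting (not any platform's actual implementation), the sketch below estimates a treatment effect with a single binary confounder, estimating the propensity score nonparametrically within strata. Real analyses fit regression models over many covariates; the data here are invented:

```python
# Toy inverse-probability-weighting (IPW) estimate of the average treatment
# effect (ATE) with one binary confounder x. The propensity score
# e(x) = P(treated | x) is estimated within strata of x.

def ipw_ate(data):
    """data: list of (x, treated, outcome) tuples. Returns the IPW ATE."""
    # stratum-specific propensity scores
    prop = {}
    for x in {d[0] for d in data}:
        stratum = [d for d in data if d[0] == x]
        prop[x] = sum(t for _, t, _ in stratum) / len(stratum)
    n = len(data)
    # weight treated patients by 1/e(x), controls by 1/(1 - e(x))
    treated_term = sum(y * t / prop[x] for x, t, y in data) / n
    control_term = sum(y * (1 - t) / (1 - prop[x]) for x, t, y in data) / n
    return treated_term - control_term

# x=1 patients have higher outcome risk and are more often treated,
# so a naive comparison would be confounded
data = [
    # (x, treated, outcome)
    (1, 1, 1), (1, 1, 1), (1, 1, 0), (1, 0, 1),
    (0, 1, 0), (0, 0, 0), (0, 0, 0), (0, 0, 1),
]
print(round(ipw_ate(data), 3))  # negative: treatment lowers event risk
```

Weighting re-balances the two groups on the observed confounder, emulating randomization on measured covariates only; unmeasured confounding is untouched, which is why sensitivity analyses remain essential.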
Analytical Implementation
Once the design is set, execution involves statistical and computational steps:
- Cohort Construction: Using the platform’s query tools or code, investigators define inclusion/exclusion criteria based on diagnoses, treatments, and labs. For example, in TriNetX one might query “patients with myocardial infarction given Drug A, excluding those on Drug B.” The platform returns cohort counts and characteristics. Researchers then refine criteria to match the target population.
- Multiple Analytic Approaches: Analysts often run several statistical models. A primary analysis might use propensity-score-matched cohorts; a secondary analysis might use inverse-probability weighting or stratification. If a randomized subset exists, it may be used to calibrate methods. All analyses should be pre-specified or, if changes are needed, fully documented with justification.
- Transparency and Reproducibility: Ideally, code and algorithms (e.g. for propensity scoring) are shared or made available. External reviews (or audits) of the analytics guard against p-hacking. In a submission, sponsors often include detailed statistical analysis plans.
- Validation and Sensitivity Checks: To build credibility, studies might include methods like emulating a known RCT or using a negative control outcome. For example, a drug-versus-internal-control analysis in RWD that replicates a published clinical trial result can bolster confidence. If an estimated effect is marginal, regulators may scrutinize whether residual confounding explains it. Therefore, sensitivity analyses (e.g. quantitative bias analysis) are important to show the effect is robust to unmeasured-confounding assumptions.
- Data Standards in Analysis: Results and datasets for submission typically adhere to CDISC standards (the same standards used for clinical trials). For instance, a sponsor might convert the final analytical dataset to ADaM or SDTM format to integrate into the NDA/MAA submission. Use of standard terminologies (MedDRA for adverse events, LOINC for labs, etc.) is also expected to ensure consistency.
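One widely used quantitative-bias-analysis tool is the E-value (VanderWeele & Ding, 2017), which reports the minimum strength of association an unmeasured confounder would need with both treatment and outcome to fully explain away an observed risk ratio. A minimal sketch of the point-estimate formula:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio RR (VanderWeele & Ding, 2017):
    E = RR + sqrt(RR * (RR - 1)).  For RR < 1, take the reciprocal first."""
    if rr < 1:
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# An observed RR of 2.0 yields an E-value of about 3.41: an unmeasured
# confounder would need RR >= 3.41 with both exposure and outcome to
# explain the association away entirely.
print(round(e_value(2.0), 2))
```

The same calculation is usually applied to the confidence-interval limit closest to the null; a small E-value signals fragility to unmeasured confounding, a large one supports robustness.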
Governance and Ethics
RWE studies for regulatory use must also meet ethical standards. Even though data are de-identified, sponsors often obtain Institutional Review Board (IRB) or ethics committee rulings that the study poses minimal risk. Data use agreements must allow for research analysis. Patient privacy must be protected (for example, rare disease data may need statistical disclosure control).
Sponsors frequently involve patient or clinician experts in RWE planning. EMA has encouraged multi-stakeholder collaboration. In the U.S., networks like PCORnet include patient representatives; projects using Sentinel may involve community advisory groups. This stakeholder input can improve study relevance and ensure ethical oversight.
Data Analysis and Evidence
Trends and Statistics on RWE Use
Recent studies illustrate how RWE is entering regulatory practice:
- A 2025 study of FDA label expansions (supplemental NDAs/BLAs) found that 23–27% of approvals (2022–2024) included RWE in the submission ([3]). In absolute terms, of 218 label-expansion submissions, 55 involved RWE (3 explicitly cited in FDA documents, 52 identified externally) ([3]). RWE use was most common in oncology (43.6% of RWE submissions) ([36]). The majority of these RWE studies were retrospective cohort designs (66%) using EHR data (75% of studies) ([37]). These findings confirm that while not yet ubiquitous, RWE is a significant and growing component of regulatory packages in certain fields.
- A 2020 literature review identified 27 case examples (1998–2019) across FDA, EMA, PMDA, and Health Canada of regulatory approvals aided by RWD ([17]). Of these, 17 were New Drug Applications (all orphan drugs) and 10 were line extensions ([17]). Common data sources were medical records and registries. In nearly all cases the RWD either provided tolerability/efficacy evidence in lieu of a control arm or supplemented trial findings. Notably, drugs with RWE support tended to be orphan indications or oncology, reflecting situations where trials are challenging ([17]) ([38]).
- European simulation: One analysis focusing on rare-disease MAAs (EMA) predicted that additional RWE could expand indicated populations compared to RCT-only submissions ([39]). This supports the intuitive notion that RWE often includes sicker or broader patients than trial cohorts.
- On a broader scale, market reports estimate that RWE solution platforms (software, data services) form a multi-billion-dollar market that is rapidly growing (projected to exceed $3–5B by 2030, though such figures come from market analysts rather than academic sources).
- From the regulator’s side, EMA’s RWD dashboard reported 59 studies in 2024, up 47% from the prior year ([4]). FDA’s own data portals (FAERS, Sentinel) now routinely incorporate external data queries. A 2025 survey of pharma companies indicated that nearly 90% have ongoing RWE projects, with many targeting regulatory submissions (real-world evidence being used for label expansions, comparative effectiveness, and pediatric extrapolations).
These data points underscore key patterns: RWE use is still niche but accelerating, especially in orphan/oncology, and facilitated by new data capabilities. Regulatory agencies on both sides of the Atlantic are actively encouraging appropriate RWE, as evidenced by frameworks and studies.
Evidence from Case Studies
Beyond statistics, concrete examples illustrate how RWE can meet regulatory needs:
- Orphan/Registry-based Approvals: Voxzogo (vosoritide) for achondroplasia is exemplary. Because achondroplasia is rare (genetic dwarfism), running large trials is hard. The sponsor supplemented a 30- to 36-month open-label study with data from the Achondroplasia Natural History multicenter study (a registry) to serve as an external control ([5]). FDA reviewers noted that without active treatment, children reach a predictable growth rate; by comparing actual growth on drug against registry data, the treatment effect could be inferred. This RWE arm met the standard of “substantial evidence” in context.
- Post-Approval Efficacy Supplements: Consider an example where a drug approved in adults seeks pediatric labeling. If pediatric RCTs are not feasible, sponsors have turned to RWE (sometimes from registries) to provide indirect evidence. (For instance, patient chart reviews might show how pediatric patients respond on standard care vs. expected growth.) Such extrapolation is governed by FDA/EMA pediatric guidance, but RWE can reduce or replace clinical studies if properly validated.
- Safety Labeling Revisions: RWE has long been used to update safety labels. A prominent recent example is the pediatric FDA label changes for beta-blockers (Inderal, Lopressor, etc.) after a retrospective analysis of the Sentinel database found hypoglycemic events associated with pediatric beta-blocker use ([31]). The FDA used this RWD analysis to justify adding warnings to drug labels, demonstrating RWE’s role in safety monitoring and label maintenance.
- Registrational Trials Leveraging External Data: In some trials (often single-arm registrational studies), RWD are woven into the primary analysis. Nulibry’s example falls here: patients receiving the drug under compassionate use in 15 countries were pooled to form the treatment group, and a matched natural-history cohort served as control ([6]). This replaced a randomized trial in a fatal pediatric disease where no treatment is available, illustrating how RWE can fill in when RCTs are impossible.
- External Control Arm Feasibility: Platforms like TriNetX advertise that they can construct an “external control arm” by matching real-world patients to a trial’s inclusion criteria. A published Gerontology study, for example, used TriNetX data to replicate the outcomes of a diabetes trial in a propensity-matched real-world cohort ([40]). While this was a journal publication (not a regulatory submission), it shows the concept’s viability. Moreover, some regulatory submissions have cited TriNetX or similar networks to support control groups (even if details are often redacted in public documents).
- Multi-Region Studies for Vaccines/HTA: The EMA has employed DARWIN EU for vaccine rollout evaluation (e.g. an mpox vaccine analysis across the EU) ([4]). Similarly, global networks enabled rapid RWE studies during the COVID-19 pandemic (e.g. on vaccine effectiveness or treatment comparators) that informed FDA/EMA decisions without conventional trials.
Each case underlines a principle: fit-for-purpose RWE can either stand alone or complement trials, but must be of high methodological quality. The FDA and EMA reviews explicitly categorize the “role of RWE” – whether confirmatory, supportive, or primarily safety-oriented ([6]) ([31]). A reviewer’s trust in RWE increases when its design mimics an RCT structure (with clear inclusion criteria, blinding of analysis where possible, and pre-specified endpoints) ([35]) ([19]).
Discussion of Implications and Future Directions
Regulatory Implications for Pharma
The increasing acceptance of RWE poses both opportunities and challenges for pharmaceutical companies:
- Opportunity for Label Expansion: RWE can unlock additional indications. For rare and hard-to-study conditions, demonstrating efficacy via RWE can achieve approvals that would otherwise stall. Companies can plan prospective generation of RWE (e.g. registry enrollment) alongside trials. Some sponsors have integrated post-market RWD collection as part of Phase IV commitments, anticipating leveraging these data for post-approval indications.
- Enhanced Pharmacovigilance: Regulators are actively using RWE in safety surveillance (e.g. Sentinel for adverse events). This means pharma must ensure robust post-market RWD plans, including ready-to-analyze registries or EHR data collaborations. Speed of analysis is key: agencies now expect “near-real-time” monitoring, and regulatory RWE study volume is rising year over year ([4]). Some companies have partnered with networks like OHDSI or established private data hubs to conduct rapid safety queries.
- Cost and Efficiency: Well-designed RWE studies can reduce or replace some costly trials, lowering development costs and timelines. However, pharma must invest in capabilities: skilled epidemiologists, data scientists, and in-house or contracted platforms. Developing internal standards for RWE (analogous to GCP for trials) is underway in many firms. The return on investment depends on regulatory outcomes – if RWE enables even one additional or faster approval, it can be game-changing.
- Data Access and Partnerships: Access to RWD often requires partnerships. Pharma may need to collaborate with data vendors, healthcare systems, or institutions. For example, multi-sponsor initiatives have emerged (e.g. consortia for rare-disease registries) to share RWD costs. Cross-industry standards and consortia (like Transcelerate’s RWE Corridor) aim to streamline access.
- Regulatory Interactions: The distinct evidentiary context of RWE means that regulatory scrutiny will differ. Submissions involving RWE should provide detailed methodology. The FDA’s RWE guidance suggests early interaction (pre-IND or meeting requests) to align on analysis plans. In the EU, it is prudent to engage with EMA’s patient-registry initiatives or scientific advice committees when using RWD for label changes.
Challenges
- Heterogeneity of Data: Even with CDMs, underlying differences (different healthcare systems, variable coding practices) create residual heterogeneity. Ensuring that an external control arm built from one country’s registry truly matches a trial population from another country is nontrivial. Cross-jurisdictional RWE (e.g. global studies) requires careful calibration.
- Bias and Uncertainty: No matter how rigorous, RWE lacks randomization. Unmeasured confounding can never be entirely ruled out. Skeptical reviewers will question any RWE analysis that yields moderate effect sizes. The “dramatic effects only” threshold per ICH E10 guidance ([30]) is sometimes interpreted strictly by regulators (some believe only large effects will be accepted without an RCT). Achieving regulatory acceptance with modest effect sizes will remain difficult without some random component or a novel quasi-experimental design.
- Regulatory Acceptance Variability: FDA and EMA still have different cultures. Historically, FDA has been more open (via the 21st Century Cures Act) to RWE for drug efficacy, while EMA has been more conservative (except for safety). Sponsors must tailor RWE strategies accordingly. For example, an approval supported mainly by RWE might be more plausible in the U.S. for a supplemental indication than in the EU.
- Ethical and Legal: GDPR and data-protection laws in Europe impose strict requirements on patient-data use. Even de-identified data can be sensitive if rare conditions are involved. Ensuring compliance can slow projects. On the ethics front, there is ongoing debate about when RWD studies require patient consent.
- Quality Standards and Harmonization: There is no single “RWE regulatory standard”. Agencies and industry are still converging on best practices. For instance, should RWE protocols be registered (as trials are on clinicaltrials.gov)? Should analyses be audited by external parties? Ad hoc, one-off RWE studies may be harder to validate than large, pre-specified trial-like studies. The professional community (e.g. ISPE, ISPOR) is working on checklists and “good RWE practice” standards (see sidebar below).
Sidebar: RWE Methodology Standards (examples)
- ISPE Good Pharmacoepidemiology Practices (GPP) – principles for design and reporting of observational drug-outcome studies.
- FDA RWD Guidances on Registries – e.g. FDA’s guidance on assessing registries to support regulatory decision-making; registry data have also long supported device approvals.
- ICH E9(R1) / Estimands – framework to define treatment effects in complex settings (applies to RWE trial emulations too).
- Target Trial Emulation Framework – explicit design-based framework to mimic an RCT using observational data ([13]).
Sponsors and regulators may require evidence that these and related guidelines were followed.
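The target-trial components can be operationalized as an explicit pre-specification checklist. The sketch below is a hypothetical illustration: the component names follow the target-trial emulation literature, and every protocol entry is invented for the example:

```python
# Hypothetical target-trial protocol skeleton: the key components an
# observational emulation should pre-specify before touching outcome data.
# All entries below are invented for illustration.

TARGET_TRIAL_COMPONENTS = [
    "eligibility_criteria", "treatment_strategies", "assignment_procedures",
    "follow_up_period", "outcome", "causal_contrast", "analysis_plan",
]

protocol = {
    "eligibility_criteria": "adults with incident diagnosis X, no prior drug A",
    "treatment_strategies": "initiate drug A within 30 days vs. no initiation",
    "assignment_procedures": "emulate randomization via propensity-score weighting",
    "follow_up_period": "time zero = eligibility date; follow for 24 months",
    "outcome": "all-cause hospitalization (reliably captured in claims)",
    "causal_contrast": "intention-to-treat analogue",
    "analysis_plan": "weighted Cox model; pre-specified sensitivity analyses",
}

missing = [c for c in TARGET_TRIAL_COMPONENTS if c not in protocol]
assert not missing, f"protocol is missing components: {missing}"
print("all target-trial components pre-specified")
```

Forcing every component to be written down before analysis is what distinguishes a trial emulation from an ad hoc database study, and it is the kind of documentation reviewers increasingly expect in submissions.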
Future Trends
Looking ahead, several developments will shape regulatory-grade RWE:
- AI and Advanced Analytics: Machine learning will play a growing role in RWE generation. For example, NLP algorithms can extract endpoints (e.g. identifying disease progression from radiology reports) that are not coded. Predictive models can flag safety issues earlier. However, regulators will demand validation: black-box models must not obscure study reproducibility. FDA’s recent interest in AI (through the CDER Real-World Evidence Innovation Center) suggests more guidance is forthcoming on trustworthy AI in clinical evidence generation ([2]).
- Wearables and Digital Biomarkers: New RWD sources (smartphone data, wearables) are just starting to be used for endpoints like activity or sleep quality. In time, these could supplement RWE platforms, especially in chronic diseases. For example, continuous glucose monitors generate real-world glycemic data. Regulators will need to establish how to incorporate and validate these novel data streams.
- Global Data Integration: Currently, each platform is often region-specific. Future trends include better cross-platform interoperability. The fact that FDA, EMA, PMDA, and NMPA align on many RWE principles ([29]) may lead to more international RWE studies, where a single federated analysis pools data from multiple countries’ EHRs. Initiatives like ICMRA (International Coalition of Medicines Regulatory Authorities) have begun discussing global RWE networks.
- Regulatory Guidance Harmonization: As RWE use matures, we can expect formally harmonized guidelines (possibly at ICH level) on RWD. Already, ICH is revising E6 (GCP) to accommodate decentralized trials and RWD, and an EMA reflection paper on RWE in oncology is in progress. Divergent practices between agencies could converge over the next 5–10 years, making global submissions easier.
- Commercialization of RWE Platforms: The RWE analytics industry is expanding, with new entrants and consolidation (e.g. Datavant acquiring Aetion). Cloud computing and data-sharing technologies (e.g. secure multi-party computation) will lower barriers. Pharma may increasingly subscribe to integrated RWE services rather than build them in-house.
- Education and Culture: Both clinicians and regulators are learning how to interpret RWE. Over time, familiarity will increase trust, especially if high-profile successes accumulate. Training programs (e.g. joint academia–industry RWE workshops) are proliferating.
Tables
Table 1: Key RWE Data Platforms and Networks
| Platform/Network | Type/Operator | Data Sources | Coverage | Regulatory Relevance |
|---|---|---|---|---|
| TriNetX | Commercial (TriNetX) | De-identified EHRs from hospitals/clinics | Global (20+ countries) | Used for protocol feasibility and external controls; cited in U.S. FDA submissions ([33]) |
| DARWIN EU | Public (EMA/HMA) | National EHRs, claims, registries | Europe (16 countries) | Provides RWE to EMA committees; ~180M patients (2025) ([4]) |
| FDA Sentinel | Public (FDA) | Claims databases (Medicare, commercial) | USA (distributed network) | Used in post-market safety; expanded to EHR (RWE-DE project) ([23]) ([31]) |
| PCORnet | Academic network (USA) | EHR & some claims from multiple centers | USA (~47M patients) | Supports comparative effectiveness research; used in pediatrics and chronic disease studies ([32]) |
| OHDSI/OMOP | Academic/open consortium | EHR, claims, registries (mapped to OMOP) | Global (hundreds of DBs) | Enables multi-database RWE studies; methods validated against trials ([13]) |
| Aetion Evidence Platform | Commercial (Aetion/Datavant) | EHR, claims, registries via third parties | US/global (via vendors) | FDA contracted Aetion for COVID-19 RWE; platform used for submission-grade analyses ([34]) |
| Flatiron Health | Commercial (Roche) | Oncology clinic EHR, labs, pathology | USA (oncology focus) | Used in oncology drug submissions and outcomes research (e.g., comprehensive cancer registries) |
| IQVIA Real-World Solutions | Commercial | Insurance claims, EHR, prescriptions, labs | Global (~3B lives) | Broad patient data for epidemiology studies under GxP standards (used by regulators) |
| National/Local Systems | Mixed (gov’t & academic) | EHR/claims (e.g. NHS Digital, Kaiser) | Varies (nationwide) | Often used in RWE studies (e.g. UK CPRD for NICE/EMA; Kaiser data for FDA dialogue) |
Table 2: RWE Use Cases in Regulatory Submissions
| Drug/Study | Regulatory Context | RWD Source(s) | Study Design / Role of RWE | Reference |
|---|---|---|---|---|
| Voxzogo (Vosoritide) – achondroplasia | FDA NDA (2021) | International Achondroplasia NH registry | Single-arm trial + external control from registry; used as confirmatory evidence ([5]) | [FDA review] |
| Nulibry (Fosdenopterin) – MoCD Type A | FDA NDA (2021) | Multinational natural-history study + expanded access program data | Pooled single-arm data; external natural-history control; RWE provided primary endpoint (survival) ([6]) | [FDA Labeling] |
| Vimpat (Lacosamide) – pediatric epilepsy (loading dose) | FDA sNDA (2023) | PEDSnet network (EHR data) | Retrospective cohort (safety analysis for new regimen in children); supportive data for dosing ([24]) | [FDA Labeling] |
| Prolia (Denosumab) – advanced CKD safety | FDA label (2024) | U.S. Medicare claims cohort | Retrospective cohort; identified increased hypocalcemia risk in CKD patients; triggered label warning ([25]) | [Sentinel study] |
| Abatacept (Orencia) – stem-cell transplant | FDA BLA (2021) | CIBMTR registry (cell therapy registry) | Non-interventional cohort vs matched controls; provided pivotal survival data for one donor-type arm ([41]) | [FDA Labeling] |
| Beta-Blockers (e.g. Metoprolol) – pediatric hypoglycemia | FDA label (2025) | Sentinel System (VHA data) | Retrospective cohort; identified hypoglycemia risk in children on beta-blockers; led to new safety language ([31]) | [FDA Communication] |
| Actemra (Tocilizumab) – COVID-19 | FDA EUA amendment (2022) | National death-records (NCHS data) | RCT with RWD endpoint; 28-day mortality endpoint measured via death registry ([8]) | [FDA review] |
(FDA references correspond to publicly available summaries, as cited in the FDA RWE tables ([8]) ([5]) ([6]).)
Conclusion
The landscape of drug development and regulation is undergoing a significant transformation as real-world data and evidence become integral to decision-making. Through extensive legislative, regulatory, and scientific initiatives, both FDA and EMA are constructing an ecosystem where “regulatory-grade” RWE – credibly generated from large-scale platforms – complements clinical trials. This report has detailed the historical drivers (e.g. 21st Century Cures, EMA Big Data) and current structures (DARWIN EU, Sentinel, TriNetX, etc.) supporting this shift. We have shown that, when properly designed and executed, RWE studies can meet evidentiary thresholds: they have already contributed to approvals, label expansions, and safety actions cited by regulators ([5]) ([6]).
Pharmaceutical companies must adapt by investing in data platforms, analytics expertise, and rigorous research methods. The ultimate goal is improving patient outcomes: by leveraging RWE, sponsors can address clinical questions unanswerable by trials alone – such as long-term safety in diverse patient pools, or treatment effectiveness in the real world. As FDA and EMA continue to issue guidance on data quality and study design (e.g. 2023 FDA guidance, ongoing EMA reflections), the bar for “regulatory-grade” RWE will become clearer. Stakeholders must stay engaged: from contributing high-quality data to these platforms, to developing best-practice standards.
In coming years, we anticipate greater harmonization between U.S. and EU requirements for RWE, as regulators globally recognize common principles of data reliability and transparency ([29]). Technological advances – AI for data curation, federated analytics across borders, digital biomarker integration – will further enhance capabilities. Ultimately, this convergence of data, technology, and regulation holds promise for more efficient drug development and regulatory review. Companies and regulators alike must maintain a balanced approach: embracing innovation in RWE, while upholding the scientific rigor essential to patient care.
References: (All claims above are supported by cited literature and regulatory documents, as indicated by the in-text citations.)
External Sources (41)
