Vertex Takes a Risk-based Approach to UAT
Veeva Systems Inc
Published: May 16, 2019
Insights
This video provides an in-depth exploration of how Vertex Pharmaceuticals successfully implemented a risk-based approach to User Acceptance Testing (UAT) for Electronic Data Capture (EDC) study builds, significantly reducing operational timelines and improving efficiency. The core premise challenges the traditional industry standard of performing 100% UAT on every study build, arguing that this practice often duplicates validation efforts already completed by developers and QA teams. The speakers advocate for a strategic shift where testing resources are concentrated only on high-risk, non-standard elements of a study.
The methodology centers on standardization and leveraging vendor capabilities. Vertex focuses on building standard configuration settings and pages, testing them rigorously once, and then placing them into a reusable library. For subsequent studies that utilize these pre-validated, standardized components, re-testing is deemed unnecessary. Instead, UAT efforts are hyper-focused on the study’s primary and secondary endpoints, which carry the highest risk to data integrity and regulatory submission. A critical enabler of this approach is a "diff report" provided by Veeva, which generates a simple report detailing exactly what configuration changes have occurred between a current study and a previous, fully validated study. If the report indicates no changes, the team can justify avoiding 100% UAT.
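The diff-report idea can be sketched as a simple comparison of two exported study configurations. This is an illustrative sketch only, assuming each build's settings can be flattened into a dict of setting name to value; the function names and field names below are hypothetical, not Veeva's actual export format or API.

```python
# Hypothetical diff report between a previously validated study build and
# the current build. Assumes configurations export as flat dicts; the
# setting names below are illustrative only.

def diff_report(validated_config: dict, current_config: dict) -> dict:
    """List settings added, removed, or changed relative to the validated build."""
    added = {k: current_config[k]
             for k in current_config.keys() - validated_config.keys()}
    removed = {k: validated_config[k]
               for k in validated_config.keys() - current_config.keys()}
    changed = {k: (validated_config[k], current_config[k])
               for k in validated_config.keys() & current_config.keys()
               if validated_config[k] != current_config[k]}
    return {"added": added, "removed": removed, "changed": changed}

def requires_full_uat(report: dict) -> bool:
    # No deltas from the validated build -> the team can justify
    # skipping 100% UAT and testing only what changed.
    return any(report.values())

validated = {"visit_window_days": 7, "query_auto_close": True, "lab_units": "SI"}
current   = {"visit_window_days": 7, "query_auto_close": True, "lab_units": "SI"}
print(requires_full_uat(diff_report(validated, current)))  # False: no changes
```

The point of the sketch is the decision rule, not the comparison itself: an empty report is documented justification for bypassing full UAT, while a non-empty report scopes testing to exactly the listed deltas.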
Beyond configuration testing, the discussion highlights major process improvements in how UAT is executed. The speakers critique the common "ping-pong" UAT approach—a sequential cycle where the vendor sends the database, the sponsor tests and provides comments, the vendor updates, and the process repeats—which often consumes four to five weeks. Vertex moved toward a "live UAT" roundtable model. In this accelerated approach, the vendor team (Veeva) sits in the same session as the sponsor's UAT team. As the sponsor reviews the build and suggests edits, the software is updated and retested in real time. This game-changing method allowed Vertex to compress what was typically three rounds of UAT into a timeframe as short as two days, drastically shortening the overall study build timeline by three to four weeks.
The analysis also extends to optimizing the programming of error checks within the EDC system. While many clinical teams program 300 to 500 error checks per study, the speakers suggest that this volume often leads to excessive definition, programming, and UAT time, particularly for checks that rarely "fire." The strategic recommendation is to focus upfront on the 90% of checks that are most critical for data quality and regulatory adherence. The remaining 10%, especially those that are complex or low-firing, can be handled later by running data listings or using other business intelligence tools for data review, saving the cross-functional team significant time on initial definition and UAT. Finally, the speakers propose reframing change orders not as failures to be avoided, but as strategic opportunities. By holding off on implementing complicated checks until the first inevitable change order, teams can accelerate the initial go-live and strategically update the system as the study progresses.
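The triage described above can be sketched as tagging each check critical or deferrable: critical checks are programmed (and UAT'd) in the EDC before go-live, while deferrable checks run later as a data listing over captured records. This is a minimal sketch under those assumptions; the check names, thresholds, and record fields are invented for illustration.

```python
# Risk-tiered edit checks: critical checks go into the EDC upfront;
# complex or low-firing checks are deferred to a post-capture listing.
# All names, thresholds, and data below are hypothetical.

def sbp_in_range(rec):
    return 60 <= rec["sbp"] <= 250

def weight_in_range(rec):
    return 30 <= rec["weight_kg"] <= 250

def rare_cross_visit_rule(rec):
    # Complex, rarely-firing logic: cheaper to review via a listing
    # than to program and UAT in the EDC before go-live.
    return rec["visit"] != 1 or rec["baseline_flag"] == "Y"

# (name, rule, critical?)
CHECKS = [
    ("sbp_in_range", sbp_in_range, True),
    ("weight_in_range", weight_in_range, True),
    ("rare_cross_visit_rule", rare_cross_visit_rule, False),
]

def program_in_edc(checks):
    """Only critical checks are defined, programmed, and UAT'd upfront."""
    return [name for name, _, critical in checks if critical]

def run_listing(records, checks):
    """Deferred checks run later as a data listing over captured records."""
    deferred = [(name, rule) for name, rule, critical in checks if not critical]
    return [(rec["subject"], name)
            for rec in records
            for name, rule in deferred
            if not rule(rec)]

records = [
    {"subject": "001", "sbp": 120, "weight_kg": 70, "visit": 1, "baseline_flag": "Y"},
    {"subject": "002", "sbp": 118, "weight_kg": 82, "visit": 1, "baseline_flag": "N"},
]
print(program_in_edc(CHECKS))        # ['sbp_in_range', 'weight_in_range']
print(run_listing(records, CHECKS))  # [('002', 'rare_cross_visit_rule')]
```

The design choice mirrors the speakers' point: only the first list consumes definition, programming, and UAT effort before go-live; the deferred rule still gets reviewed, just through a listing after data capture.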
Key Takeaways:
- Adopt Risk-Based UAT: Blindly performing 100% UAT on every study build is inefficient and redundant, especially when configuration settings have already been tested and validated in previous studies.
- Standardize and Reuse Configurations: Build a library of standard configuration settings, pages, and components. Once these are tested and validated, they should be copied into subsequent studies without requiring re-testing, focusing UAT efforts only on novel or high-risk elements.
- Focus UAT on Endpoints: Concentrate User Acceptance Testing on primary and secondary endpoints, as these represent the highest risk areas for data integrity and regulatory submission.
- Leverage Vendor Diff Reporting: Utilize tools like Veeva’s diff report, which automatically identifies configuration changes between studies. If the report shows no change, it provides the justification needed to bypass full UAT.
- Eliminate "Ping-Pong" UAT: Move away from the sequential, time-consuming UAT process (which can take 4–5 weeks) where feedback is exchanged asynchronously between sponsor and vendor.
- Implement "Live UAT" Sessions: Conduct UAT as a real-time roundtable with both the sponsor's team and the vendor's development team present. This allows for immediate feedback incorporation and real-time testing, drastically shortening the UAT cycle (e.g., three rounds compressed into two days).
- Optimize Error Check Volume: Clinical teams should critically review the necessity of having 300–500 error checks per study. Focus on programming the 90% of checks that are most essential for quality data capture.
- Shift Low-Risk Checks Post-EDC: Complex or low-firing error checks (the remaining 10%) should be managed by running data listings or utilizing external data review and Business Intelligence tools post-data capture, rather than spending extensive upfront time programming and UAT-ing them within the EDC.
- Strategic Use of Change Orders: View change orders as an opportunity rather than a hurdle. Strategically hold off on implementing complicated, non-critical checks until the first expected change order to accelerate the initial go-live timeline.
- Prioritize Quality over Quantity of Checks: The goal is building a quality database, which can often be achieved with a focused set of critical checks, rather than maximizing the total number of programmed checks, which increases complexity and validation burden.
Tools/Resources Mentioned:
- Veeva Systems: The underlying platform vendor for the EDC system discussed.
- Veeva Diff Report: A specific reporting tool that shows configuration changes between study builds, enabling risk-based testing decisions.
- EDC (Electronic Data Capture): The system used for clinical trial data collection.
Key Concepts:
- UAT (User Acceptance Testing): The final stage of testing where end-users verify that the system meets business requirements before deployment.
- Risk-Based Approach: A testing methodology that prioritizes validation efforts based on the potential impact or risk associated with specific system components or configurations.
- Ping-Pong UAT: The traditional, sequential, back-and-forth process of UAT between a vendor and a sponsor, characterized by long cycle times.
- Live UAT: A concurrent testing methodology where vendor and sponsor teams collaborate in real-time to review, update, and test the system immediately.
- Error Checks: Automated rules programmed into the EDC system to ensure data quality and consistency during data entry.