IntuitionLabs · 4/18/2025 · 25 min read
Computer Vision in Pharmaceutical Quality Control: Enhancing Drug Manufacturing

Computer Vision in Pharmaceutical Quality Control: Applications, Techniques, and Case Studies

Introduction

Quality control (QC) in pharmaceutical manufacturing is critical for ensuring patient safety and regulatory compliance. Traditionally, QC relied on manual inspection and rule-based machine vision, which have limitations in consistency and flexibility (Industry Insights: Quality Inspection with AI Vision: Utilizing Synthetic Data for Testing and Training). Human inspectors can be inconsistent due to fatigue and subjectivity, and classical rule-based vision systems require predefined criteria for every defect and are easily disrupted by environmental changes (Industry Insights: Quality Inspection with AI Vision: Utilizing Synthetic Data for Testing and Training). With the rise of Pharma 4.0 – the industry's shift toward digitization and automation – advanced computer vision (CV) techniques powered by artificial intelligence (AI) (see our overview of Data Science in Life Sciences) are transforming pharmaceutical QC (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). Modern CV systems leverage deep learning, enabling real-time analysis of products on high-speed production lines and detection of subtle defects beyond the capability of older systems (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports) (Amgen's Deep Learning Approach To Vial Inspection). This article provides a comprehensive overview of how computer vision is applied across the stages of pharmaceutical manufacturing, the techniques employed (e.g., convolutional neural networks, object detection, anomaly detection, segmentation, OCR), recent case studies (post-2020), and the unique challenges in this domain (imbalanced data, transparent materials, real-time constraints, regulatory compliance).
The discussion is geared toward computer vision researchers interested in industrial applications and highlights both broad concepts and technical details.

Applications of Computer Vision in Pharmaceutical Manufacturing

Computer vision techniques now permeate almost every stage of pharmaceutical production. Key application areas include raw material and incoming goods verification, inspection of tablets and capsules, packaging inspection (blister packs, bottles), labeling and print verification, and contamination detection in injectables. Below, we explore each in detail, with examples of the vision tasks involved and solutions reported in recent practice or literature.

Raw Material and Incoming Material Verification

Before production begins, manufacturers must verify that incoming raw materials and packaging components are correct and meet specifications. Machine vision plays a role in automating these incoming inspections. A common use is label and code verification on raw material containers (barrels, bags, etc.) and packaging materials. Vision systems can read barcodes and data matrix codes to confirm material identities and track them through the supply chain (Categories of vision systems and the applications they perform - Cognex). For example, 1D/2D code readers are used to scan batch labels on raw chemical containers, ensuring the right ingredient is used and enabling full traceability from raw material to finished product (Categories of vision systems and the applications they perform - Cognex). In addition to codes, optical character recognition (OCR) can be applied to read printed text on labels or certificates of analysis. This helps verify that material names, lot numbers, and expiry dates on received goods match the master records for the order (Vision system validates medical containers - Vision Systems Design). Such automated checks replace labor-intensive manual cross-checking and significantly reduce errors.
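The cross-check itself is straightforward once the label fields have been read by OCR or decoded from a barcode; a minimal sketch of the comparison step (field names and values here are hypothetical, not from the cited system):

```python
# Master record for a purchase order (hypothetical material and identifiers).
master = {
    "material": "Microcrystalline Cellulose",
    "lot": "MC-2025-0142",
    "expiry": "2027-03",
}

def verify_incoming(scanned: dict, master: dict) -> list:
    """Return the list of label fields where the scanned values disagree
    with the master record; an empty list means the container passes."""
    return [k for k, v in master.items() if scanned.get(k) != v]
```

In a real deployment the `scanned` dictionary would be populated by the OCR/barcode reader, and any non-empty result would quarantine the container for manual review.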

Another aspect is packaging components inspection. Pharmaceutical packaging components (labels, cartons, inserts) arriving from suppliers must exactly match the approved artwork and content. Vision systems can perform high-resolution comparisons of incoming samples against a "golden template" (the approved master label) to catch printing errors or mix-ups (Vision system validates medical containers - Vision Systems Design). For instance, an inspection system might compare an incoming label's text and layout to the master copy pixel-by-pixel, flagging even minor discrepancies in dosage instructions or misprints that human eyes might miss (Vision system validates medical containers - Vision Systems Design). Given the thousands of stock-keeping units (SKUs) and multilingual variants in pharma, automating this proofreading yields major efficiency gains. In one reported case, implementing an automated vision proofreader for incoming labels and inserts improved accuracy and cut inspection time by ~70% (Vision system validates medical containers - Vision Systems Design). Ensuring that all raw materials and components are correct at the outset prevents costly downstream errors.
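The core of the golden-template idea can be sketched as a pixel-difference count. This is an illustrative simplification (it assumes the sample image has already been registered and scaled to the master artwork; the tolerance values are arbitrary, not from the cited system):

```python
import numpy as np

def deviates_from_template(sample: np.ndarray, golden: np.ndarray,
                           pixel_tol: int = 30,
                           max_bad_fraction: float = 0.001) -> bool:
    """Flag a label whose grayscale image deviates from the approved master.

    pixel_tol: per-pixel intensity difference treated as printing noise.
    max_bad_fraction: fraction of deviating pixels tolerated before rejecting.
    """
    assert sample.shape == golden.shape, "images must be pre-registered"
    # signed arithmetic avoids uint8 wrap-around on subtraction
    diff = np.abs(sample.astype(np.int16) - golden.astype(np.int16))
    bad = np.count_nonzero(diff > pixel_tol)
    return bad / diff.size > max_bad_fraction
```

Production systems add registration, illumination normalization, and region-specific tolerances (text regions are checked much more strictly than blank areas), but the reject decision reduces to a comparison like this.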

Tablet and Capsule Inspection

Solid dosage forms like tablets and capsules are produced in massive quantities (often hundreds of thousands or millions per day), and each unit must be inspected for defects and consistency. Computer vision-based inspection at this stage focuses on detecting physical defects, dimensional accuracy, and appearance of each pill or capsule. Common defects include broken or chipped tablets, coating defects (like discoloration or spots on film-coated tablets), incorrect size or shape, embossed text errors, and capsule fill anomalies. These defects can be subtle and highly variable (Tablet Inspection Using AI - SOLOMON 3D). Automated Optical Inspection (AOI) machines historically struggled to reliably detect all such defects due to hard-coded algorithms and variations in color or lighting (Tablet Inspection Using AI - SOLOMON 3D). Modern AI-based vision systems address this by learning the range of normal appearance and flagging anything outside that range.

Deep learning techniques, particularly convolutional neural networks (CNNs), have been deployed for tablet surface inspection. For example, researchers have trained CNN models to identify cosmetic defects on film-coated tablets (such as coating bubbles, scratches, or logo defects) with high accuracy (Deep learning-based defect detection in film-coated tablets using a convolutional neural network). In one recent industry implementation, an AI vision system was used to inspect printed characters on tablets (some tablets have an identifying imprint or inked code). Traditional systems often missed smudged or partially printed codes at high line speeds, or they might reject entire batches if a few tablets had printing defects (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex). By using deep learning, the system could individually inspect hundreds of thousands of pills for legibility and defects, reducing false rejects and improving yield (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex). Another case study from 2025 by Merck researchers applied a deep CNN to detect defects in film-coated tablets, achieving reliable detection without the need for precisely fixturing each tablet in a tray (earlier methods required tablets to be placed in fixed positions for inspection) (Deep learning-based defect detection in film-coated tablets using a convolutional neural network). This demonstrates more robust, flexible inspection even when tablets are randomly oriented on a conveyor.
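To build intuition for why CNN filters catch such surface defects, here is a toy convolution: a hand-written Sobel-style edge kernel (trained networks learn whole banks of similar filters) responds only where the tablet's intensity changes abruptly, as at a scratch or chip boundary. This is a didactic sketch with synthetic data, not the cited Merck system:

```python
import numpy as np

def conv2d_valid(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2-D correlation - the core operation a CNN layer learns."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

# Sobel-style vertical-edge kernel (hand-written here; CNNs learn these).
edge = np.array([[-1., 0., 1.],
                 [-2., 0., 2.],
                 [-1., 0., 1.]])

tablet = np.ones((8, 8))   # uniform coating intensity
tablet[:, 4:] = 0.2        # abrupt drop, e.g. the edge of a scratch
response = conv2d_valid(tablet, edge)
# Response is near zero on the uniform coating and large at the defect edge.
```

A trained network stacks many such learned filters with nonlinearities, which is what lets it separate a genuine coating defect from benign variation in color or lighting.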

For capsules (gelatin capsules), vision inspection checks for correct shape (no deformed capsules), integrity (no cracks or splits), and proper sealing. Since capsules can be transparent or glossy, advanced imaging and lighting are used to reduce glare. If capsules are transparent, background differentiation becomes challenging; techniques like semantic segmentation can help separate the capsule outline from the background to inspect its fill level or detect bubbles inside. One notable approach for capsules and other products with limited defect samples is anomaly detection. Instead of training on every possible defect (which is impractical due to class imbalance), models are trained on many images of good capsules and learn a representation of "normal." Any capsule that deviates significantly (e.g., different texture or shape) is flagged as anomalous. Open datasets like MVTec Anomaly Detection (AD) include a "capsule" category, providing defect-free capsule images for training and various defective capsules (e.g., cracks, contamination) for testing (MVTec Anomaly Detection Dataset: MVTec Software). Such datasets encourage development of unsupervised anomaly detection methods specialized for pharma components. By applying these methods, an inspection system can detect novel defects on capsules without having explicitly seen them during training – a crucial capability given the rarity of some defects (and hence the difficulty of obtaining training images). Manufacturers have also started using continuous learning systems that can incorporate newly identified defect samples into the model over time (Tablet Inspection Using AI - SOLOMON 3D). This means if a new type of tablet defect appears, the system can learn to detect it in the future, improving continuously.
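The one-class idea described above can be sketched with simple statistics standing in for a learned embedding: fit the distribution of good samples only, then flag anything whose distance from "normal" exceeds a percentile threshold. All data below is synthetic, and the mean/variance score is a stand-in for autoencoder reconstruction error or a one-class SVM:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Train" on feature vectors of good capsules only (synthetic stand-ins
# for image embeddings or handcrafted shape/texture statistics).
good = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
mu, sigma = good.mean(axis=0), good.std(axis=0)

def anomaly_score(x: np.ndarray) -> float:
    """Mean squared z-score: distance from the learned 'normal' model."""
    return float(np.mean(((x - mu) / sigma) ** 2))

# Threshold chosen so ~0.5% of known-good items would be re-inspected.
threshold = np.percentile([anomaly_score(g) for g in good], 99.5)

normal_sample = rng.normal(0.0, 1.0, size=8)
defective_sample = rng.normal(4.0, 1.0, size=8)  # shifted: an unseen defect
```

The key property matches the text: the defective sample was never shown to the model, yet it scores far outside the learned normal range, so novel defect types are caught without defect training data.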

Blister Pack Inspection

Blister packs are a common packaging format for tablets and capsules, where individual doses are sealed in pockets of a plastic tray with a foil backing. Blister pack inspection is a classic application of machine vision in pharma packaging lines. The goal is to ensure each pocket has the correct pill, and no pockets are empty, broken, or contain fragments/foreign objects, before the foil is sealed. It also verifies that the tablets in each pack are of the right type (correct shape/color) and properly aligned.

Historically, blister inspection was partly manual or used simple sensors (like checking for presence/absence via light). Now, high-speed vision cameras capture images of each tray, and deep learning object detection algorithms identify and assess each pocket. State-of-the-art object detection models like YOLO (You Only Look Once) and region-based CNNs are used to locate pills in the blister and classify any defects (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). A recent study (Scientific Reports, 2024) developed a YOLOv8-based model for real-time blister inspection, targeting defects such as broken or half tablets, missing tablets, and misaligned tablets in the pack. The model, called CBS-YOLOv8, achieved an impressive 97.4% mAP (mean average precision) on a custom blister defect dataset while running at ~79 frames per second, outperforming earlier approaches (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). In comparisons, the single-stage detector (YOLO family) far exceeded the accuracy of a two-stage detector like Faster R-CNN (which reached ~89% mAP) on this task (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). This underscores the effectiveness of modern object detectors for high-variation, high-speed inspection tasks. The YOLOv8 system was deployed on a conveyor handling blister packs at 12 Hz, using a high-resolution industrial camera, and was able to detect five types of tablet defects in real-time (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports).
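Whatever the detector, mapping its output boxes back to blister pockets is a simple geometric step: any pocket cell with no overlapping pill detection is flagged as empty. A minimal sketch (box coordinates and the IoU threshold are illustrative, not from the cited study):

```python
def iou(a, b) -> float:
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def missing_pockets(pocket_boxes, detections, min_iou=0.3) -> list:
    """Indices of pocket cells with no pill detection overlapping them."""
    return [i for i, p in enumerate(pocket_boxes)
            if all(iou(p, d) < min_iou for d in detections)]
```

Because the pocket grid is fixed by the blister tooling, this check also localizes the fault (which cell is empty), which is what lets the line reject a specific card rather than a whole batch.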

Besides presence detection, vision systems inspect the seal integrity of blister packs. After the foil lidding is applied, systems use specialized lighting (e.g., angled or infrared) to find any leaks or improper seals – for instance, looking for dark voids or patterns in the aluminum foil that indicate a missing tablet or a puncture. Some advanced systems even do 3D inspection of blister surfaces (using stereo cameras or laser profilometers) to ensure each pocket is properly filled to the correct height, which helps catch broken tablets or half-filled capsules. Ensuring every blister cell is correctly filled and sealed is crucial: a single empty pocket is a critical defect that would trigger batch rejection. By using robust CV checks at this stage, manufacturers can automatically reject faulty blister cards early, preventing them from reaching patients.

Bottle and Vial Inspection

Many medications (such as oral solids and capsules) are packaged in bulk in bottles, while liquid pharmaceuticals (injectables, vaccines) are filled into vials or ampoules. Bottle inspection in the context of oral solid dosage typically involves verifying the count and integrity of pills in each bottle and ensuring correct packaging. Vision systems are sometimes used to count pills as they enter a bottle – for example, a camera can look down through a transparent bottle to count tablets, or image the tablets on a slat counter just before they drop in. However, counting is often done by other means (counters or weight); a more common vision task for bottles is to check for missing or broken tablets before capping. If the bottle is transparent, a vision sensor can detect if a tablet is broken (unusual shapes among the pills) or if foreign tablets of a wrong color are present (indicating a mix-up). Additionally, after capping, vision verifies the presence and tightness of caps, the presence of induction seals, and the correctness of labeling (which overlaps with the labeling inspection below).

For liquid vials and ampoules, automated visual inspection is mandated for every unit (100% inspection) to detect particulate contamination, cosmetic defects, fill level, and container integrity. Traditional Automatic Visual Inspection (AVI) machines use rule-based algorithms to detect particles in the liquid by analyzing images or short videos of vials under rotation (to swirl any particles). However, these systems can produce a high rate of false rejections – for instance, misinterpreting innocuous bubbles or glass reflections as contaminants (Amgen's Deep Learning Approach To Vial Inspection). Jorge Delgado of Amgen noted that conventional AVI might falsely reject up to 20% of good vials because of such confusion (glare vs. crack, bubble vs. particle) (Amgen's Deep Learning Approach To Vial Inspection). Computer vision with deep learning is making inroads here by improving discrimination. By training CNNs on large datasets of vial images, including examples of true defects and false triggers, the models learn to distinguish real foreign particles from air bubbles or glass fibers with much higher accuracy (Amgen's Deep Learning Approach To Vial Inspection). In Amgen's implementation, a deep learning-enhanced inspection system was added alongside the rule-based algorithms, which reduced false positives while maintaining (or improving) true defect detection rates (Amgen's Deep Learning Approach To Vial Inspection). This hybrid approach (AI + traditional) leverages the reliability of classical checks for simple defects (e.g., color changes or large flaws) and the intelligence of DL for complex judgments (Amgen's Deep Learning Approach To Vial Inspection). It enabled higher detection sensitivity and fewer good vials being wrongly rejected – a significant improvement in efficiency and cost.

Besides particle detection, vision systems check fill levels in vials. Low fills or high fills outside tolerance are quality issues. Vision can do this by either analyzing the meniscus line in transparent vials or using 3D sensors. Deep learning models have also been applied to this problem; for example, the 2024 study mentioned earlier used their model on a public dataset of saline bottle fill levels, achieving >99% detection accuracy for fill level anomalies (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). Another critical aspect is container integrity: vision systems inspect for cracks in glass vials, scratches or defects in the stopper or crimp, and any leakage. With high-resolution cameras, even tiny cracks can be detected (sometimes using dark-field lighting to make cracks glow). Any container with a crack or seal defect is automatically rejected to prevent sterility breaches.
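The meniscus-based variant of the fill check needs no deep learning at all; a sketch, assuming a backlit grayscale image in which liquid reads darker than the headspace (geometry and threshold are illustrative):

```python
import numpy as np

def fill_level(gray_vial: np.ndarray, dark_thresh: float = 0.5) -> float:
    """Estimate fill fraction of a backlit vial from its grayscale image.

    The first image row whose mean intensity drops below `dark_thresh`
    approximates the meniscus line; fill fraction is measured from there
    to the bottom of the frame (assumed to be the vial bottom).
    """
    row_means = gray_vial.mean(axis=1)
    dark_rows = np.flatnonzero(row_means < dark_thresh)
    if dark_rows.size == 0:
        return 0.0  # no liquid found: empty vial
    return 1.0 - dark_rows[0] / gray_vial.shape[0]

vial = np.ones((100, 20))  # bright headspace
vial[40:, :] = 0.1         # liquid from row 40 down -> 60% full
```

An in-line system would then compare the estimate to the fill tolerance band for the product and reject out-of-range units; deep learning becomes attractive when foam, condensation, or tinted glass defeats this kind of simple thresholding.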

Labeling and Packaging Verification

Before pharmaceutical products leave the manufacturing facility, every package must be correctly labeled and traceable. Vision systems are indispensable in labeling and packaging verification, where they perform final QC on the product appearance and identification marks. Key tasks include: verifying printed text (OCR/OCV), checking 1D/2D barcodes, ensuring correct label placement, and confirming that all required components (like patient inserts) are present in the package.

OCR (Optical Character Recognition) and OCV (Optical Character Verification) are heavily used to read pharmaceutical labels and packaging print. Regulations require labels to contain specific information – product name, strength, lot number, expiration date, etc. – and any error can lead to a recall. AI-driven vision systems scan each label or carton in real-time to read these fields and verify them against the database. The small font sizes and varying print quality can be challenging for humans, especially at high throughput, but OCR algorithms (now often CNN-based for text detection and recognition) handle this at production speeds (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex). For instance, a vision system on a packaging line might capture an image of every bottle's label and decode the lot code and expiry date with OCR. The results are then compared to the expected values for that batch (OCV). If a mismatch is found (say the expiry date printed is incorrect), the system triggers a rejection of that item (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex). By doing so 100% in-line, it eliminates the risk of a mis-labeled product reaching the market. Modern deep learning text recognizers (such as LSTM- or transformer-based OCR models) are robust to printing variations and can even read codes on curved surfaces like bottles with high accuracy.
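The OCV decision itself can be sketched as a string comparison against the batch record. The edit-distance triage below (re-image once on a single-character discrepancy, since that often indicates an OCR misread rather than a misprint; reject otherwise) is a hypothetical policy for illustration, not a described product feature:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def verify_field(recognized: str, expected: str) -> str:
    """OCV decision: pass, re-image on a likely OCR misread, or reject."""
    d = levenshtein(recognized, expected)
    if d == 0:
        return "pass"
    return "re-image" if d == 1 else "reject"
```

Classic misreads like "O" vs "0" land in the re-image branch, while a genuinely wrong lot or expiry string (many characters off) is rejected outright.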

Barcode reading for serialization is another vital vision application. As of 2023, regulations like the U.S. Drug Supply Chain Security Act (DSCSA) require every saleable pharmaceutical unit to carry a unique identifier (usually a GS1 DataMatrix 2D barcode) for track-and-trace (Using Machine Vision to Meet New Pharma Compliance Rules). Machine vision systems equipped with specialized code readers scan these barcodes at high speed on packaging lines to record each code and verify its quality. Unlike conventional laser scanners, image-based readers can also check print quality and detect if a code is damaged or partially missing. This ensures that each package's code is readable down the supply chain. Additionally, vision systems handle aggregation: associating which individual packages went into a larger carton (and reading the carton's code), and which cartons went onto a pallet, creating a parent-child traceability hierarchy. High-resolution cameras capture pallet loads to read all visible codes in one shot for verification. The traceability data from these vision checks (all codes and timestamps) is fed into manufacturing IT systems to create an electronic pedigree for each drug (Using Machine Vision to Meet New Pharma Compliance Rules). Any unreadable or duplicate code triggers an alarm to ensure compliance. Thanks to machine vision, companies can reliably meet serialization requirements – scanning billions of products annually – and avoid the heavy fines for non-compliance (Using Machine Vision to Meet New Pharma Compliance Rules).
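After the camera decodes the DataMatrix, the payload is a GS1 element string: a sequence of Application Identifiers (AIs), where the DSCSA-relevant fields are GTIN (AI 01, fixed 14 digits), expiry (AI 17, fixed YYMMDD), lot (AI 10), and serial (AI 21), the last two variable-length and terminated by a Group Separator. A simplified parser handling only these four AIs (real GS1 payloads carry many more, with their own length rules):

```python
GS = "\x1d"  # ASCII Group Separator terminating variable-length GS1 fields

FIXED = {"01": 14, "17": 6}   # GTIN-14; expiry date as YYMMDD
VARIABLE = {"10", "21"}       # lot and serial number

def parse_gs1(payload: str) -> dict:
    """Split a decoded GS1 element string into {AI: value} fields."""
    fields, i = {}, 0
    while i < len(payload):
        ai = payload[i:i + 2]
        i += 2
        if ai in FIXED:
            fields[ai] = payload[i:i + FIXED[ai]]
            i += FIXED[ai]
        elif ai in VARIABLE:
            end = payload.find(GS, i)
            end = len(payload) if end == -1 else end
            fields[ai] = payload[i:end]
            i = end + 1  # skip the separator
        else:
            raise ValueError(f"unsupported AI {ai!r} in this sketch")
    return fields
```

The serialization system then records the four fields against the batch and checks the serial for uniqueness, which is the basis of the parent-child aggregation hierarchy described above.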

Beyond codes, vision is used to inspect label presence and placement. A camera can verify that each bottle actually has a label affixed, that it's not skewed or bubbling, and that the correct label (for the correct product) is on the bottle. This often involves color pattern matching or template matching (to ensure the artwork is correct). If the production line produces multiple product types, vision systems can do an automated line clearance check: confirming that, after a product changeover, no old labels or packaging materials remain (avoiding mix-ups).

Finally, packaging verification extends to contents check: ensuring each carton has the right number of blister packs or the patient information leaflet inserted. Some systems use vision sensors to look inside open cartons to detect the presence of the paper leaflet by its shape or markings. Others might use weight sensors for this, but vision can add an extra layer, especially if multiple inserts need to be present. All these checks collectively guarantee that the finished pharmaceutical product is correctly packaged, labeled, and traceable when it leaves the factory (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex) (Using Machine Vision to Meet New Pharma Compliance Rules).

Computer Vision Techniques and Methods in Pharma QC

A variety of computer vision techniques are employed to accomplish the above applications. These include both classical image processing and modern AI-based methods. Below we outline the key techniques, with an emphasis on those using deep learning, and how they are applied in pharmaceutical QC:

  • Convolutional Neural Networks (CNNs): CNNs are the backbone of most deep learning vision systems in pharma. They are used in classification tasks (e.g., determining if a pill is defective or not) and as feature extractors in more complex models. For instance, a CNN classifier might be trained to distinguish images of good vs. defective tablets. Pathak et al. (2025) used a CNN to detect defects on film-coated tablets, achieving high accuracy without manual feature engineering (Deep learning-based defect detection in film-coated tablets using a convolutional neural network). CNNs can also learn to classify the type of defect (scratch, chip, discoloration) from images, which can help pinpoint process issues causing those defects (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). Transfer learning is common – models like ResNet or DenseNet pretrained on large datasets can be fine-tuned on pharmaceutical images to recognize specific pill characteristics (Deep Learning and Computer Vision Techniques for Enhanced ...). The depth and filters of CNNs allow them to pick up subtle texture differences (e.g., a slight blister in a tablet's coating) that rule-based vision might miss.

  • Object Detection Algorithms: These algorithms identify and locate multiple objects (or regions of interest) in an image. In pharma QC, the "objects" might be tablets, defects, or packaging components. Single-stage detectors such as YOLOv5/v7/v8 and SSD (Single Shot Detector) are popular due to their speed – crucial for real-time inspection (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). They can process images in a few milliseconds, allowing deployment on fast-moving lines. Two-stage detectors like Faster R-CNN offer high accuracy but are slower; nonetheless, they have been used in cases where speed is less critical or where detecting very fine defects might benefit from the two-stage approach. A comparative study on blister pack inspection showed YOLO models reaching ~96–97% accuracy, outperforming Faster R-CNN's ~89% on the same data (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). Thus, recent implementations gravitate towards optimized YOLO models for tasks like detecting missing pills or identifying foreign particles on tablets in real-time. Customizations of detectors are also seen – e.g., the CBS-YOLOv8 model introduced coordination attention and bi-directional feature pyramids to YOLOv8's architecture specifically to better detect tiny tablet defects without slowing down inference (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). Object detection is also used for multi-class problems such as simultaneously checking for multiple defect types: the model can draw bounding boxes around a tablet and label it as cracked, or around a blister cell and label it missing content, in one forward pass.

  • Semantic Segmentation: Segmentation provides pixel-level classification, which is useful for precise defect localization. In pharma, segmentation might be used to outline the exact region of a defect on a pill (e.g., a chip on the tablet edge) or to separate a pill from its background. For example, to measure the area of a discoloration spot on a tablet, a segmentation model (like U-Net, or Mask R-CNN for instance segmentation) can be employed. Segmentation is also valuable in cases like detecting contaminants in liquid – by segmenting moving dark specks against a transparent background. While less commonly reported than detection/classification in pharma, segmentation has been explored in research contexts. Quan et al. (2022) proposed an end-to-end framework combining ResNet and DenseNet to segment tablet images and identify defects (Deep Learning and Computer Vision Techniques for Enhanced ...). In practice, segmentation may be integrated for tasks such as verifying if a capsule is fully filled: the model segments the image into capsule, fill, and background regions and checks the extent of the fill region. Given that segmentation models are computationally heavier, there is a trend to use detection or classification for most tasks unless pixel precision is needed.

  • Anomaly Detection and One-Class Classification: As mentioned, one major challenge in pharmaceutical defect inspection is the extreme imbalance – defects are very rare (e.g., 1 in 1000 or less), making it hard to train multi-class classifiers on all defect types. Anomaly detection methods address this by learning from only normal (non-defective) data. Techniques include autoencoders, variational autoencoders (VAEs), one-class SVMs on feature embeddings, or newer approaches like normalizing flows and contrastive learning. The system is trained to reconstruct or represent normal product images; during inspection, any image that deviates beyond a threshold is classified as defective. This approach has been successfully used for detecting anomalies on capsules and pills when defect examples are scarce (Industry Insights: Quality Inspection with AI Vision: Utilizing Synthetic Data for Testing and Training). Moreover, synthetic data generation (see below) is often combined with one-class methods to simulate plausible defects. Industrial vision software (e.g., Cognex VisionPro or Keyence IV) now includes unsupervised anomaly detection tools precisely for these use cases – allowing a vision system to "learn" what a good product looks like and then catch the unknown unknowns. A notable public resource, the MVTec AD dataset, includes pharmaceutical items (capsules) for benchmarking such algorithms (MVTec Anomaly Detection Dataset: MVTec Software). Researchers testing anomaly segmentation on MVTec's capsule images have achieved high detection rates for defects like cap deformation and contamination without any defective samples in training (Normal and abnormal images for the capsule and carpet included in ...).

  • Optical Character Recognition (OCR) and Verification: OCR in pharma uses both traditional pattern-matching and modern deep learning text recognition. Classic OCR might involve segmenting characters and comparing to a template font. However, newer methods treat text reading as an image-to-sequence problem, often using CNN+LSTM or Transformer-based models to directly output the text. These models are more robust to noise, perspective, and varied fonts. In packaging lines, OCR is deployed to read alphanumeric batch codes, expiry dates, manufacturing codes, and even 2D human-readable codes. The verification part (OCV) cross-checks the recognized text against expected values and checks print quality (ensuring every character is printed clearly). Vision systems also perform grading of pharmaceutical barcodes (like verifying contrast and format of DataMatrix codes according to ISO standards), which is a specialized form of OCR/OCV. An example from industry is using OCR to read the tiny date codes on blister foil or the lot number on an ampoule's crimp label – tasks well-suited to deep learning vision that can handle low contrast and curved text (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex). These text-reading models are often optimized and deployed on edge devices to meet the line speed; for instance, reading a code in under 50 ms as the product passes by.

  • Traditional Image Processing Techniques: Not to be overlooked, many systems still incorporate classical vision algorithms for simpler subtasks or as fail-safes. These include image filtering, edge detection, color thresholding, and template matching. For example, checking if a pill is present in a blister can be as simple as detecting a color blob in the pocket area – a task achievable with thresholding if the color difference is stark. Many commercial systems combine these: they might use deep learning to handle complex judgments (like whether a defect is a crack or just a stain), but use simple algorithms for straightforward measurements (like whether the label is centered within X mm). Traditional methods are fast and transparent, which helps in validation. They are also useful in pre-processing for AI: e.g., using brightness normalization or glare reduction filters on images before feeding to a CNN, or using background subtraction to isolate moving particles in a vial video prior to classification. Thus, a hybrid approach is common: classical vision ensures certain measurable criteria, while AI provides the cognitive leap where needed (Amgen's Deep Learning Approach To Vial Inspection).

  • Specialized Imaging: Although standard RGB cameras are the workhorse in vision systems, some applications require specialized imaging modalities. Infrared or UV cameras can reveal features not seen in visible light (for example, inspecting security ink or watermarks on packaging, or seeing through packaging layers). Hyperspectral imaging is an emerging tool in pharma QC to combine chemical identification with visual inspection (A Review of Pharmaceutical Robot based on Hyperspectral Technology - PMC). A hyperspectral camera can detect counterfeit drugs by their spectral signature or check active ingredient distribution in a pill by imaging beyond RGB (A Review of Pharmaceutical Robot based on Hyperspectral Technology - PMC). This bridges into analytical QC, but when integrated with robotized vision systems, it becomes a powerful method (e.g., a "pharmaceutical vision robot" that scans pills for both visual defects and chemical composition anomalies). X-ray imaging is used for inside-the-package inspection (for instance, verifying that an auto-injector pen has a needle or checking for glass shards inside a sealed vial). While not "computer vision" in the classic sense, the same image analysis techniques apply. These advanced imaging techniques often produce large amounts of data, and deep learning models are being developed to process hyperspectral or X-ray images for QC in real-time, though those are at earlier stages of adoption due to cost and complexity (A Review of Pharmaceutical Robot based on Hyperspectral Technology - PMC).
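One concrete detail behind the object detectors listed above: single-stage models emit many overlapping candidate boxes per object, and a greedy non-maximum suppression (NMS) pass keeps only the highest-scoring box for each. A minimal sketch (box coordinates and threshold are illustrative):

```python
import numpy as np

def box_iou(a, b) -> float:
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh: float = 0.5) -> list:
    """Greedy NMS: keep the best-scoring box, drop its heavy overlaps, repeat."""
    order = np.argsort(scores)[::-1]  # indices, highest score first
    keep = []
    while order.size:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        ious = np.array([box_iou(boxes[best], boxes[i]) for i in rest])
        order = rest[ious < iou_thresh]
    return keep
```

The `iou_thresh` value trades duplicate suppression against keeping genuinely adjacent objects (e.g., neighboring tablets in a blister), and in-line systems tune it per product format.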

In practice, deploying these techniques in pharma manufacturing requires meeting strict validation and reliability criteria. Many vision solutions providers (Cognex, Keyence, Siemens, Zebra, etc.) offer packaged hardware/software that implement the above algorithms with user-friendly interfaces, so that engineers can configure inspections without coding from scratch. For example, Cognex In-Sight vision systems now include deep learning tools (previously Cognex ViDi) that can perform anomaly detection or image classification on-device, and Siemens SIMATIC MV smart cameras provide an easy integration of vision into PLC-controlled lines (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex) (Siemens – Machine vision – Provendor Oy). These platforms often allow a mix of rule-based and AI tools, reflecting the best practices in pharmaceutical vision: use AI where it genuinely adds value, and keep conventional checks for well-defined features.

Below is a summary table of key vision tasks and their typical uses in pharmaceutical quality control, with example techniques:

| Vision Task | Description & Common Techniques | Example Application in Pharma QC (with source) |
| --- | --- | --- |
| Object Detection | Locate and identify objects or regions (using models like YOLO, Faster R-CNN). Suitable for finding discrete items or defects in an image. | Detecting missing or broken pills within blister pack cells (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports); locating particles or defects on a tablet surface during high-speed sorting. |
| Semantic Segmentation | Pixel-level classification of images (e.g., using U-Net, Mask R-CNN) to delineate regions of interest such as defects vs. background. | Segmenting a tablet's surface to highlight a crack or chip for size measurement (Deep learning-based defect detection in film-coated tablets using a convolutional neural network); checking capsule fill by segmenting the capsule interior. |
| Anomaly Detection | Unsupervised or one-class detection of deviations from "normal" appearance (using autoencoders, one-class CNNs, etc.). Addresses scenarios with imbalanced data (few defect samples). | Identifying unknown or rare defects on capsules when only normal examples were used in training (Industry Insights: Quality Inspection with AI Vision: Utilizing Synthetic Data for Testing and Training); detecting anomalous particulates in liquid vials that don't match known artifact patterns. |
| OCR/OCV (Text Reading) | Optical Character Recognition to read text, and Optical Character Verification to check it against expected values. Modern systems use deep learning-based text recognition. | Verifying lot numbers and expiration dates on labels and cartons at high speed (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex); reading engraved or inkjet-printed codes on tablets for identification. |
| 1D/2D Code Reading | Decoding barcodes (linear or DataMatrix/QR) and evaluating their print quality. Typically uses dedicated imaging algorithms optimized for codes. | Tracking and logging serialized product codes on packaging lines for DSCSA compliance (Using Machine Vision to Meet New Pharma Compliance Rules); scanning barcodes on raw material containers to ensure correct ingredients (Categories of vision systems and the applications they perform - Cognex). |
| Classic Vision Algorithms | Rule-based image processing (edge detection, template matching, color thresholding, etc.) for simple, high-speed inspections or preprocessing. | Measuring the length/width of capsules to ensure they meet specifications (using edge detection) (Siemens – Machine vision – Provendor Oy); detecting presence of a label by checking for a color region where the label should be. |
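
As a concrete illustration of the code-reading and verification tasks in the table: serialized pharmaceutical DataMatrix codes typically carry GS1 Application Identifiers, including (01) GTIN, (17) expiry date (YYMMDD), (10) lot number, and (21) serial number. A minimal parser and OCV-style check for a decoded code might look like the following. The sample code value is made up, and the sketch handles only the human-readable parenthesized form for a fixed subset of AIs; real GS1 parsing must also handle raw symbol data with FNC1 group separators and many more AIs.

```python
import re

# GS1 Application Identifiers commonly found on serialized pharma packs.
AI_NAMES = {"01": "gtin", "17": "expiry_yymmdd", "10": "lot", "21": "serial"}

def parse_gs1(element_string):
    """Parse a human-readable GS1 element string like '(01)...(17)...(10)...(21)...'."""
    fields = {}
    for ai, value in re.findall(r"\((\d{2})\)([^(]+)", element_string):
        if ai in AI_NAMES:
            fields[AI_NAMES[ai]] = value
    return fields

def verify_label(decoded, expected_lot, expected_expiry):
    """OCV-style check: compare decoded code content against batch-record values."""
    fields = parse_gs1(decoded)
    return fields.get("lot") == expected_lot and fields.get("expiry_yymmdd") == expected_expiry

# Hypothetical decoded DataMatrix content for one carton.
code = "(01)00312345678906(17)260131(10)A1B2(21)1234567890"
print(parse_gs1(code))
print(verify_label(code, expected_lot="A1B2", expected_expiry="260131"))  # True
```

In a deployed system the decoded string would come from the camera's symbology decoder, and the verification result would be logged with the item's serial number for the audit trail.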

Case Studies and Industrial Implementations (2020–2025)

The past few years have seen numerous real-world implementations of computer vision in pharma, as well as academic studies validating these approaches. Below we highlight several notable case studies and systems, illustrating the diversity of solutions:

  • AI-Powered Tablet Inspection at Scale: A pharmaceutical manufacturer deployed an AI vision system on their tablet production line to inspect printed logos on tablets. According to a Cognex report, the deep learning system could inspect hundreds of thousands of pills individually per day, identifying printing defects that older systems missed (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex). The result was a reduction in batch waste – previously, false alarms led to whole batches being scrapped; the AI system's precision meant only truly defective tablets were rejected, cutting unnecessary waste (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex). This deployment used Cognex's In-Sight D900 smart camera with an embedded deep learning engine, demonstrating how edge AI can meet pharma throughput requirements (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex).

  • YOLOv8 for Blister Defect Detection (2024): Researchers in a 2024 Scientific Reports article developed a blister inspection model (CBS-YOLOv8) as mentioned earlier. This case study is notable for its technical depth: they augmented the YOLOv8 network with coordinate attention and other modules to better detect small defects. The model was trained on a custom dataset of blister pack images with various tablet defects and tested both on static images and live video (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). It achieved a high detection rate (mAP ~97%) and real-time performance (~79 FPS) (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). In practice, this means the system can be mounted over a production line conveyor and inspect every pack without slowing production. The authors compared their model against other object detectors on the same data (including Faster R-CNN, SSD, and earlier YOLO versions) and showed a clear improvement in accuracy and/or speed (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). This kind of academic-industrial collaboration hints at next-generation inspection machines that incorporate bespoke deep learning models for specific products.

  • Deep Learning in Visual Inspection of Injectables (Amgen, 2025): Amgen, a major biopharmaceutical company, integrated deep learning into their automatic visual inspection for vials. Jorge Delgado (Amgen) reported that adding a DL model to their inspection line significantly reduced the false rejection rate caused by reflections and bubbles (Amgens Deep Learning Approach To Vial Inspection). The deep learning model was trained to differentiate true particulate contaminants from look-alikes (like air bubbles, dust, scratches), which is traditionally difficult. This project was shared at the 2025 ISPE Aseptic Conference, indicating industry-wide interest in such solutions (Amgens Deep Learning Approach To Vial Inspection). Importantly, Amgen worked closely with regulators; Delgado noted that while regulatory uncertainty was initially a barrier, regulators have been supportive of AI when existing validation frameworks are followed (Amgens Deep Learning Approach To Vial Inspection). The FDA even released draft guidance in 2023/2024 on using AI in drug manufacturing, including visual inspection, to clarify expectations (Amgens Deep Learning Approach To Vial Inspection). Amgen's case demonstrates that AI vision can be adopted in a GMP environment, provided thorough testing, validation, and documentation are done. It also shows a trend: AI is used not necessarily to replace the entire traditional system, but to augment it, yielding a hybrid system that is more reliable than either alone (Amgens Deep Learning Approach To Vial Inspection).

  • Merck's Deep Learning for Tablet Defects (2025): In an internal initiative at Merck & Co., researchers applied deep learning to inspect film-coated tablets (published in Int. J. Pharm., 2025). They addressed problems in detecting coating defects that were hard to capture with standard vision. Notably, a previous method achieved 99.7% accuracy but required tablets to be manually placed in a 3D-printed tray for imaging (Deep learning-based defect detection in film-coated tablets using a convolutional neural network) – a setup not feasible in real production due to alignment issues. The new approach using CNNs could work on tablets directly on the production line without such constraints, simplifying integration into manufacturing. While the exact performance numbers are proprietary, the study demonstrates industry interest in moving from "lab setups" to in-line solutions using deep learning. It underscores that deep learning can maintain high accuracy and reduce custom hardware or setup needs.

  • Continuous Learning Vision System (Solomon Solution, 2023): Solomon, a vision solution provider, implemented a system called SolVision AI for a tablet manufacturer (Tablet Inspection Using AI - SOLOMON 3D). The interesting aspect was the system's ability to perform continuous learning. As new defect types were observed (which might happen as production scales or raw materials change), the system could be incrementally trained to include those defects in its repertoire (Tablet Inspection Using AI - SOLOMON 3D). In practice, this was done by periodically feeding the model new images labeled by QC experts. Over time, the defect detection capability broadened, and the need for manual intervention dropped. The outcome was reported as increased yield and reduced operator fatigue (Tablet Inspection Using AI - SOLOMON 3D). This case hints at the future of adaptive manufacturing – vision systems that keep improving as they see more products.

  • Track-and-Trace Automation (2023): With serialization deadlines (e.g., DSCSA in the US by November 2023), many pharma companies installed or upgraded machine vision for packaging lines. A case in point: Zebra Technologies provided fixed industrial scanners and smart cameras to pharma packaging sites to handle the serialization requirements (Using Machine Vision to Meet New Pharma Compliance Rules). These systems not only read barcodes on each item but also verify the correctness of printed data, and integrate with databases to report the aggregation hierarchy. One implementation at a major pharmaceutical distributor involved using Zebra's cameras on conveyor lines to automatically scan and reconcile thousands of packages per hour, replacing a manual scan-and-verify process (Using Machine Vision to Meet New Pharma Compliance Rules). This ensured that every product leaving the warehouse had a verified digital record. The success of such implementations (compliance achieved without slowing operations) demonstrates that computer vision has become an enabling technology for regulatory compliance tasks in addition to quality defect detection.

  • Vision Systems by Industry Leaders: Prominent automation companies have developed specialized vision systems for pharma. Cognex offers the In-Sight and VisionPro Deep Learning suite, which has been used for tasks from tablet inspection to OCR on packaging (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex). Keyence provides vision systems like the CV-X series with built-in AI tools, tailored for pharmaceutical packaging lines (with features like 21 CFR Part 11 compliance for audit trails). Siemens has integrated vision solutions (especially after acquiring the startup Inspekto in 2022 (Siemens Acquires Automated Visual Inspection Startup)) that enable out-of-the-box anomaly detection for quality control. These often run on the Siemens Industrial Edge platform, meaning the vision processing can be done on local edge devices with connectivity to PLCs and MES (Manufacturing Execution Systems) (Inspekto visual quality inspection - Siemens Global). For example, Siemens' SIMATIC MV smart cameras can be set up to inspect packaging codes and artwork in pharma, interfacing directly with their automation controllers (Siemens – Machine vision – Provendor Oy). Another example is Syntegon (formerly Bosch Packaging), which makes high-speed inspection machines: their AIM series for vials leverages AI to inspect up to 400 vials/minute for particulates and cosmetic defects (Automated Visual Inspection - High Quality & Safety - Syntegon), showcasing a purpose-built industrial solution. These industry implementations prove that computer vision is not just a lab experiment but a mature technology being rolled out in GMP production environments.

  • Open Datasets and Benchmarks: While many pharma vision applications use proprietary data, there are open datasets spurring research. We mentioned MVTec AD's capsule dataset for anomaly detection. There is also the Medical Pills Detection Dataset by Ultralytics (a small proof-of-concept dataset) which, although limited, illustrates use cases like pill counting and identification (Medical Pills Dataset - Ultralytics YOLO Docs). Academic challenges such as the Deep Pharma challenge have encouraged algorithms for pill recognition (e.g., classifying pill type by imprint and color, which ties into verifying correct product packaging). The existence of these datasets and challenges helps attract computer vision researchers to pharma problems, yielding new ideas like specialized network architectures for tiny defect detection, or domain adaptation techniques to deal with varying lighting in production. Over time, we can expect more datasets to emerge (perhaps from consortia or regulatory sandboxes), particularly for critical tasks like injectable inspection, where robust algorithm benchmarking is needed.

Challenges and Considerations in Pharma Vision Systems

Implementing computer vision in pharmaceutical quality control comes with a set of unique challenges. These challenges stem both from the nature of pharmaceutical products and from the stringent regulatory environment. We discuss some of the major challenges and the strategies or considerations to address them:

  • Class Imbalance and Data Scarcity: Pharmaceutical production aims for extremely low defect rates (Six Sigma quality or better), meaning obtaining images of defects can be difficult. This leads to highly imbalanced datasets (thousands of good samples for every defective sample) for training supervised models. A model might become biased towards always predicting "good" since that's the majority of data. To mitigate this, techniques like oversampling of defect images, data augmentation (creating variations of the few defect images), or algorithmic solutions like anomaly detection (one-class learning) are employed. Another effective approach is using synthetic data generation. Companies have started generating photo-realistic images of products with simulated defects to enrich training datasets (Industry Insights: Quality Inspection with AI Vision: Utilizing Synthetic Data for Testing and Training). For example, CAD models of tablets can be rendered with various defect textures (chips, cracks) to produce synthetic training images (Industry Insights: Quality Inspection with AI Vision: Utilizing Synthetic Data for Testing and Training). This provides a more comprehensive set of examples without needing to physically produce defective pills. However, synthetic data must be used carefully – it should closely mimic real defects, and often a mix of real and synthetic data yields the best results (Industry Insights: Quality Inspection with AI Vision: Utilizing Synthetic Data for Testing and Training). Active learning is another strategy: the system is deployed with an initial model, then any uncertain or flagged images (especially false positives/negatives) are collected, and an expert labels them to continually improve the model. Overall, handling data imbalance requires a combination of technical and procedural solutions, ensuring that the AI doesn't become "blind" to rare but critical defects.

  • Variability in Appearance: Even good pharmaceutical products can have natural variability – tablets might vary slightly in color due to raw material lots, capsules might have minor printing offsets, lighting glare might change appearance across an image. A robust vision system must tolerate acceptable variability while still catching true defects. Achieving this requires careful training data curation (including samples from different batches, lighting conditions, machine settings) so the model learns the difference between benign variation and real anomalies. It also often requires extensive image preprocessing: controlling lighting in the vision station (using diffuse illumination, coaxial lights for reflective surfaces, etc.), using polarization filters to cut glare, and applying normalization techniques. For instance, transparent blister packs and shiny foil can cause reflections that confuse algorithms. One solution is to use polarized lighting and cameras to reduce reflection artifacts, combined with software that can still detect issues under these conditions. Environmental robustness is a big focus – as noted by one vision engineer, rule-based algorithms were "easily perturbed by changes in lighting or grease" (Industry Insights: Quality Inspection with AI Vision: Utilizing Synthetic Data for Testing and Training). Deep learning models are somewhat more tolerant if trained on diverse data, but they too can fail if a camera goes slightly out of focus or a light intensity drifts. Thus, maintaining consistent imaging conditions on the production line (with regular calibration) is an important practice. Furthermore, augmentation of training images (random brightness, contrast, small rotations, etc.) helps models generalize to slight variations encountered in practice.

  • Real-Time Performance Constraints: Pharmaceutical lines can be extremely fast. Blister packaging machines, for example, might produce hundreds of blister cards per minute, and each card might have 10–20 individual tablets to inspect. This translates to analyzing thousands of individual items per minute. Vision systems must keep up with this throughput. Latency is critical – if the inspection takes too long, it could force the line to slow down or cause products to miss rejection triggers. Therefore, performance optimization is a key consideration. Techniques used include deploying models on powerful edge computing devices (GPUs or FPGAs located on the line) (related: Cloud vs On-Premises IT in Pharma), model compression (using smaller networks or pruning and quantizing models), and parallelization (using multiple cameras/computers each inspecting a subset of items). The use of single-stage detectors like YOLO is partly motivated by speed; as noted, YOLO can run at dozens of frames per second with high accuracy (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports). In some cases, if ultra-fast inference is needed, engineers resort to custom FPGA implementations or ASICs that can process images in microseconds, albeit with simpler algorithms. Another tactic is pipeline parallelism: as one product's image is being processed, the next product is being imaged, and a buffer ensures decisions still correspond to the correct physical item for rejection. Designing the software and hardware architecture to meet real-time demands often requires collaboration between vision experts and controls engineers. Additionally, fail-safes must be in place — e.g., if the vision system falls behind or encounters an error, the line might trigger a stop, since letting products pass unchecked is not an option in pharmaceutical manufacturing. 
Real-time performance is not just about raw speed, but consistent, guaranteed processing times (deterministic behavior), which is why edge deployment is favored over cloud processing in this context.

  • Regulatory Compliance and Validation: The pharmaceutical industry is highly regulated (by FDA, EMA, etc.), and any system used in manufacturing, especially one that can affect product quality decisions, must be validated and compliant with regulations. Guidelines like GMP (Good Manufacturing Practice) and standards like 21 CFR Part 11 (which deals with electronic records and signatures) influence how vision systems are implemented. For instance, Part 11 requires that any electronic records (including inspection results) are stored securely with audit trails. Vision systems must therefore log results in a compliant manner, with time stamps, batch IDs, and audit trails for any manual overrides or re-inspections (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex). Moreover, the algorithms themselves need validation. In a GMP validation, the system must be challenged with known good and bad samples to ensure it reliably identifies them, and these tests are documented. The concept of challenge sets is often used: a set of test images representing various defect scenarios is created, and the vision system is tested to verify it catches all above a certain threshold (e.g., all tablets with chips larger than X mm must be detected). Only if it passes is it allowed to operate on production. Regulatory auditors may review these validation documents to confirm that the AI system does what it's supposed to do. There is also a focus on change control: if the model is retrained or the inspection parameters are altered, it typically triggers a re-validation, similar to any process change. This can slow down iteration, so manufacturers must plan carefully when deploying learning systems. The industry is working on guidelines for validating AI – for example, the FDA's draft guidance on AI in manufacturing (2023) provides considerations specifically for AI-driven visual inspection systems (Amgens Deep Learning Approach To Vial Inspection). 
It emphasizes understanding the model's intended use, its risk (e.g., missing a critical defect), and ensuring continuous monitoring. Explainability is another consideration: while regulators do not strictly require AI decisions to be explained, having some ability to justify why products were rejected (especially if a dispute arises) is valuable. Techniques like saliency maps or example-based explanations can be employed to interpret the model's focus (for instance, highlighting the image region that led to a rejection). In practice, many companies adopt a conservative approach: use AI to assist or flag, but have ultimate decisions be rule-based or human-reviewed, at least until confidence in the AI is extremely high. This layered approach can satisfy regulators that no uncontrolled "black box" is solely responsible for quality decisions (see also CAPA Dashboards for tracking actions) (Amgens Deep Learning Approach To Vial Inspection).

  • Integration and Maintenance: A practical challenge is integrating vision systems into existing production lines and IT infrastructure. Vision equipment must often fit into tight physical spaces on legacy machinery. Retrofitting can require custom mounts, enclosures (to keep cameras sterile in cleanrooms or to protect from dust in facilities), and careful alignment with production timing. On the software side, the vision system needs to communicate with PLCs (Programmable Logic Controllers) for reject actuators or divert gates. Standards like OPC UA or proprietary protocols are used for this machine-to-machine communication. Ensuring low-latency, reliable comms is as important as the vision algorithm itself. Additionally, integration with Manufacturing Execution Systems (MES) or databases is needed to log results per batch or item. From a maintenance perspective, vision systems require upkeep: lenses need cleaning, lights may need replacement, and cameras re-calibrated. AI models might need re-calibration too if something in the process changes (e.g., a slight tweak in tablet color formulation might require updating the "good" reference distribution the model expects). There's also the human aspect: training operators or technicians to work with the system, interpret its output, and troubleshoot it. For instance, if an increase in rejections is observed, staff should know how to review the images and determine if the AI is correct (indicating a real process issue causing more defects) or if it's a false alarm (perhaps due to a lighting drift). Many companies address this by establishing a cross-functional team including quality engineers, vision specialists, and IT, to continuously monitor and improve the system post-deployment. 
In the pharma context, any anomaly even in the inspection system itself should trigger investigation (as per deviation management under GMP), so a robust maintenance and monitoring plan is crucial to keep the vision system performing reliably.
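
The one-class approach mentioned under class imbalance can be illustrated with a deliberately simple baseline: model "good" images only, and flag anything whose reconstruction error is unusually large. Below, a PCA reconstruction stands in for the autoencoders described in the text; the image size, noise level, number of components, and threshold margin are all synthetic placeholders, not recommended settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "good" tablets: flattened 8x8 grayscale patches around a common template.
template = np.full(64, 0.5)
good = template + 0.02 * rng.standard_normal((200, 64))

# Fit a low-rank "normal appearance" model on good samples only (one-class learning).
mean = good.mean(axis=0)
_, _, vt = np.linalg.svd(good - mean, full_matrices=False)
components = vt[:5]                       # top principal components of normal variation

def recon_error(x):
    """Reconstruction error: large values mean appearance unlike any training sample."""
    centered = x - mean
    recon = centered @ components.T @ components
    return float(np.linalg.norm(centered - recon))

# Threshold set from the training distribution (max error on good data plus a margin).
threshold = max(recon_error(g) for g in good) * 1.1

normal = template + 0.02 * rng.standard_normal(64)
defective = normal.copy()
defective[20:28] = 1.0                    # a bright "chip" the model never saw

print(recon_error(normal) > threshold)     # False: accepted
print(recon_error(defective) > threshold)  # True: flagged as anomalous
```

The key property, as the text notes, is that no defect images are needed for training, so the detector cannot be "blind" to rare defect types it never saw; the trade-off is that the threshold must be validated against a challenge set of known defects.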

Despite these challenges, the trajectory of successes in recent years shows that they can be managed. Synthetic data and anomaly models tackle the data imbalance; faster hardware and optimized models tackle real-time needs; and regulators are increasingly open to AI as long as companies demonstrate control and understanding of their systems. As one industry expert put it, "regulatory agencies have expressed support for AI implementation, recognizing its potential to improve drug quality – provided existing regulations are adhered to" (Amgens Deep Learning Approach To Vial Inspection). This means the onus is on manufacturers to implement vision systems with the same rigor as any pharmaceutical process, which is achievable with current best practices.
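
One of the real-time tactics described above, keeping inspection decisions aligned with the physical item that eventually reaches the reject gate, is essentially a FIFO buffer between the camera station and the actuator. A minimal sketch follows; the class name, item IDs, and the line-stop fail-safe policy are illustrative, and a real system would drive this from PLC triggers rather than direct calls.

```python
from collections import deque

class RejectTracker:
    """FIFO buffer matching inspection results to items arriving at the reject gate.

    Items are imaged at the camera station, then travel a fixed number of conveyor
    positions before reaching the gate, so decisions must be consumed in order.
    """
    def __init__(self):
        self.pending = deque()

    def on_inspection(self, item_id, defective):
        # Called by the vision system as soon as an item is classified.
        self.pending.append((item_id, defective))

    def on_item_at_gate(self):
        # Called once per item reaching the gate (e.g., from a PLC sensor trigger).
        if not self.pending:
            # Fail-safe: an unchecked item must never pass, so stop the line.
            raise RuntimeError("item at gate with no decision: stop the line")
        return self.pending.popleft()  # (item_id, defective); True fires the actuator

tracker = RejectTracker()
tracker.on_inspection("vial-001", False)
tracker.on_inspection("vial-002", True)   # e.g., particulate detected
tracker.on_inspection("vial-003", False)

print(tracker.on_item_at_gate())  # ('vial-001', False)
print(tracker.on_item_at_gate())  # ('vial-002', True) -> reject
```

The fail-safe branch reflects the point made earlier: if the vision system falls behind and an item arrives with no decision queued, the safe behavior is to stop the line rather than let an uninspected product pass.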

Conclusion

Computer vision has emerged as a pivotal technology for enhancing pharmaceutical quality control across the production lifecycle. From verifying incoming materials to inspecting finished packages, CV systems equipped with deep learning can achieve levels of speed, accuracy, and consistency unattainable by manual inspection or older machine vision alone. We have seen that modern techniques like CNN-based defect detection, object detection (e.g., YOLO), and OCR are being successfully applied to ensure each pill, vial, and label meets the industry's high standards. Case studies in the 2020–2025 period demonstrate both feasibility and tangible benefits: higher yield, fewer false rejects, and improved assurance of patient safety.

For computer vision researchers, the pharmaceutical domain poses exciting challenges that push the boundaries of algorithm performance and reliability. Tasks such as detecting microscopic defects in real time, reading tiny characters on curved surfaces, or distinguishing defects from look-alike artifacts require innovative solutions in model design, data augmentation, and system engineering. At the same time, the stringent regulatory requirements demand that these solutions are interpretable, validated, and robust. This drives research into areas like explainable AI for vision and methodologies for validating learning-based systems – developments that will benefit not only pharma but industrial AI applications in general.

Looking ahead, we can expect even greater integration of computer vision in pharma manufacturing. Trends such as Pharma 4.0 envision fully connected "smart factories" where vision systems not only inspect but also feed data into control loops to adjust processes in real-time. For example, if a vision system detects an increasing trend of a certain tablet defect, it could alert the operator or automatically adjust a machine parameter (such as compression force in tablet presses) to correct it. The data collected by vision systems can also enable predictive maintenance (e.g., recognizing when a filling nozzle might be getting clogged by observing the fill level variations it causes). Additionally, advances in deep learning (transformer-based vision models, federated learning for data privacy, etc.) and imaging hardware (higher resolution, hyperspectral cameras) will open new application frontiers like chemical quality inspection and packaging provenance verification.

In conclusion, computer vision is playing an ever-expanding role in pharmaceutical quality control, turning what used to be predominantly manual operations into fast, automated, and intelligent processes. The combination of high-stakes requirements and rapid technological progress makes this field particularly dynamic. By continuing to bridge the gap between cutting-edge vision research and practical manufacturing needs, the industry can achieve the dual goals of better quality assurance and greater efficiency. This ultimately ensures that patients receive medicines of the highest quality while enabling producers to maintain compliance and reduce waste – a win-win outcome driven by the strategic application of computer vision in pharma.

Sources: The information and examples above were drawn from a range of recent publications and industry reports, including pharmaceutical engineering case studies (Amgens Deep Learning Approach To Vial Inspection), vendor application notes (How AI and Machine Vision Improve Pharmaceutical Product Quality and Yield - Cognex), and academic research on deep learning for pharmaceutical inspection (Real-time visual intelligence for defect detection in pharmaceutical packaging - Scientific Reports), as detailed in the inline citations. Each inline citation identifies the source document for verification of the stated facts.

DISCLAIMER

The information contained in this document is provided for educational and informational purposes only. We make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability, or availability of the information contained herein. Any reliance you place on such information is strictly at your own risk. In no event will IntuitionLabs.ai or its representatives be liable for any loss or damage including without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from the use of information presented in this document. This document may contain content generated with the assistance of artificial intelligence technologies. AI-generated content may contain errors, omissions, or inaccuracies. Readers are advised to independently verify any critical information before acting upon it. All product names, logos, brands, trademarks, and registered trademarks mentioned in this document are the property of their respective owners. All company, product, and service names used in this document are for identification purposes only. Use of these names, logos, trademarks, and brands does not imply endorsement by the respective trademark holders. IntuitionLabs.ai is an AI software development company specializing in helping life-science companies implement and leverage artificial intelligence solutions. Founded in 2023 by Adrien Laurent and based in San Jose, California. This document does not constitute professional or legal advice. For specific guidance related to your business needs, please consult with appropriate qualified professionals.