How to spot AI washing: Chris Moore, European President at Veeva Systems
The Tech Leaders Podcast
/@thetechleaderspodcast9836
Published: November 12, 2025
Insights
This segment, featuring Chris Moore, European President at Veeva Systems, provides a critical framework for distinguishing genuine Artificial Intelligence (AI) from superficial marketing claims, often termed "AI washing," particularly within the Software as a Service (SaaS) and life sciences sectors. The primary objective is to define a practical test for evaluating whether a system employs true AI or merely repackaged basic analytics. Moore establishes that the key differentiator is the level of autonomy or insight the solution provides, arguing that simply performing basic calculations or data visualization does not qualify as AI.
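To make Moore's litmus test concrete, here is a minimal Python sketch (illustrative only, not from the episode) contrasting a fixed-rule analytics report, which always returns the same summary for the same inputs, with a toy model whose answer shifts as new data arrives. All names and numbers here are invented for illustration.

```python
# Illustrative sketch: static reporting vs. an answer that adapts with data.
from statistics import mean

def basic_analytics(sales):
    """Static reporting: the same inputs always yield the same summary."""
    return {"total": sum(sales), "average": mean(sales)}

class AdaptiveForecaster:
    """A deliberately simple learner: its forecast moves as observations arrive."""
    def __init__(self, learning_rate=0.2):
        self.estimate = 0.0
        self.learning_rate = learning_rate

    def update(self, observation):
        # Nudge the running estimate toward each new observation.
        self.estimate += self.learning_rate * (observation - self.estimate)

    def forecast(self):
        return self.estimate

sales = [120, 95, 140, 130]
print(basic_analytics(sales))   # same answer every time: reporting, not AI

model = AdaptiveForecaster()
for s in sales:
    model.update(s)             # the answer evolves as data is processed
print(round(model.forecast(), 1))
```

The point of the contrast is Moore's criterion: the first function only crunches numbers, while the second demonstrates (in a trivial way) behaviour that changes with the data it has seen.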
The analysis breaks down AI into two distinct, yet valuable, categories. The first bucket is Machine Learning (ML), which involves taking a vast amount of information, codifying it, and using it to generate a better, evolving answer. This approach, exemplified historically by IBM's Watson, is characterized by its ability to improve over time as it processes more data. Moore highlights the transformative role of ML in complex, data-heavy environments, specifically citing the EU Resist initiative. This program uses ML to assist in the diagnosis and treatment of HIV patients by determining the optimal drug combination for an individual, demonstrating how machines excel at the continuous updating and refinement needed for personalized medicine.
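A hedged sketch of the "machine learning bucket" follows: a classifier that is updated incrementally as more codified records arrive, rather than being rebuilt from scratch. It assumes scikit-learn and NumPy are available; the drug-response framing and all data are synthetic stand-ins and are not the EU Resist methodology.

```python
# Illustrative incremental learning: the model's answer improves with more data.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Toy codified features per patient, and whether a drug combination worked (1) or not (0).
X_initial = np.array([[0.2, 1.0], [0.9, 0.1], [0.4, 0.8], [0.8, 0.3]])
y_initial = np.array([1, 0, 1, 0])

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_initial, y_initial, classes=np.array([0, 1]))

# As new records are codified, the model is updated in place --
# the "continuously improving answer" rather than a one-off calculation.
X_new = np.array([[0.3, 0.9], [0.7, 0.2]])
y_new = np.array([1, 0])
model.partial_fit(X_new, y_new)

print(model.predict_proba(np.array([[0.5, 0.6]])))  # revised probability estimate
```

The design choice worth noting is `partial_fit`: the system keeps refining its answer as data accumulates, which is the behaviour Moore distinguishes from static analytics.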
The second, and currently most rapidly evolving, bucket is Large Language Models (LLMs). These models represent a fundamentally different way of thinking about AI application. Unlike traditional ML, which often requires specific training on a targeted dataset, LLMs have already processed a "whole bolus" or "universe of information." This foundational knowledge allows users to ask direct, complex questions without the need for extensive pre-training specific to that query. While acknowledging the necessary intermediate step of implementing controls and guardrails, Moore posits that both ML (focused on codified data transformation) and LLMs (focused on generalized knowledge and conversational insight) meet the criteria for true AI, moving beyond simple data crunching toward sophisticated, autonomous insight generation.
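The "controls and guardrails" step Moore mentions can be sketched as a thin wrapper around whatever model is in use. In this minimal, hypothetical example, `call_llm` is a placeholder for a real model endpoint, and the allowed-topic list and human-review flag are invented to illustrate the idea, not a description of Veeva's approach.

```python
# Hypothetical guardrail wrapper around an LLM call (all names are placeholders).
ALLOWED_TOPICS = ("clinical trial", "pharmacovigilance", "regulatory submission")

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned answer here."""
    return f"Draft answer to: {prompt}"

def guarded_ask(question: str) -> str:
    # Input guardrail: only pass through questions in approved domains.
    if not any(topic in question.lower() for topic in ALLOWED_TOPICS):
        return "Out of scope: please ask about an approved life-sciences topic."

    answer = call_llm(question)

    # Output guardrail: route the draft to human review instead of asserting it as fact.
    return answer + "\n[Flagged for human review before use in a regulated workflow]"

print(guarded_ask("Summarise the adverse events reported in this clinical trial."))
print(guarded_ask("What stocks should I buy?"))
```

Constraining what goes in and reviewing what comes out is the intermediate step that makes a generalized model usable in a regulated setting like life sciences.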
Key Takeaways:
- The Litmus Test for Real AI: The essential criterion for judging whether a technology is genuine AI is the level of autonomy or insight it provides, distinguishing it from standard computational analysis.
- Avoiding "AI Washing": A common pitfall, particularly in the SaaS world, is marketing basic analytics, such as standard business intelligence or reporting, under the guise of AI, which the speaker identifies as a significant industry "bugbear."
- The Machine Learning (ML) Bucket: ML represents the first major category of true AI, focusing on codifying vast amounts of information to generate continuously improving answers, often requiring specific training on the relevant data set.
- Transformational ML in Life Sciences: The power of ML is demonstrated by initiatives like EU Resist, which aids in HIV patient care by optimizing drug combinations, illustrating how machines can manage and update complex medical data better than human systems.
- The Large Language Model (LLM) Bucket: LLMs constitute the second, newer category of AI, characterized by their ability to leverage a pre-existing "universe of information," allowing them to answer questions directly without requiring specific, targeted pre-training.
- Differentiation from Basic Analytics: If a system merely processes data and presents a static result based on pre-set rules, it is likely not true AI; genuine AI must demonstrate learning, adaptation, or autonomous insight generation.
- The Need for Control in LLMs: While LLMs offer immense flexibility, an intermediate step involving the implementation of controls and constraints is necessary to ensure accuracy, safety, and regulatory compliance, particularly in regulated industries like life sciences.
- Evolution of AI Thinking: The shift from ML (data codification) to LLMs (generalized knowledge) requires a completely different strategic approach to software development and deployment within technology leadership.
Key Concepts:
- AI Washing: The practice of mislabeling basic data analytics or simple computational features as Artificial Intelligence to capitalize on market hype and perceived technological sophistication.
- Level of Autonomy or Insight: The defining metric for true AI; the system must be able to generate novel conclusions, adapt its behavior, or operate without constant human intervention beyond basic data processing.
- Machine Learning (ML): A category of AI focused on training algorithms to learn patterns from data and make predictions or decisions without being explicitly programmed to perform the task.
- Large Language Models (LLMs): A category of generative AI that has been trained on massive text datasets, allowing it to understand, summarize, generate, and answer complex questions conversationally based on its generalized knowledge base.
Examples/Case Studies:
- IBM Watson: Cited as a historical example of the ML approach, focusing on codifying vast amounts of information to provide enhanced answers.
- EU Resist Initiative: A specific, transformative application of ML in the life sciences sector, helping diagnose and treat HIV patients by calculating the optimal drug combinations, demonstrating the machine’s superiority in continuously updating complex medical protocols.