Author: Elizabeth Lavin, Chief Quality Officer, Suvoda
Snapshot:
In January 2026, the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) published ten guiding principles of good AI practice for drug development. This joint effort signals aligned expectations for how artificial intelligence will be evaluated across the drug-development lifecycle.
For clinical trial technology vendors like Suvoda and for sponsors adopting AI-enabled tools, these principles offer helpful clarity. They reinforce that AI is no longer a novelty. It is a scientific tool that must be well governed, transparent, and reliable when used to support decisions that affect patients and regulatory evidence.
The ten principles set expectations for responsible, trustworthy use of AI to generate or analyze evidence across the drug product life cycle. In practical terms, they emphasize governance, transparency, and reliability at every stage.*
*Read the FDA and EMA's exact language for the ten principles here.
These principles do not create new laws. Instead, they clarify how regulators will evaluate AI when it is part of evidence used in regulatory decisions.
Sponsors have been exploring AI in clinical development for years, in areas including study design, data review, risk prediction, and operational planning. Until now, there has been limited regulatory alignment on how that use should be governed.
These principles have practical implications for sponsors and technology partners alike.
For sponsors, they mean defining the AI's purpose clearly, documenting how it works, validating its performance appropriately, and being ready to explain it to regulators. Doing this early can build confidence and reduce friction later in the review process.
For partners like Suvoda, the principles reinforce the need to design AI capabilities with governance and transparency built in from the start. I suggest keeping in mind the need for:
- Traceability and explainability. Sponsors and regulators will expect to understand where data comes from, how a model works, how outputs are generated and verified, and where humans remain involved. Explainability and auditability become core product capabilities.
- Risk-based validation. A risk-based approach means validating AI in the context of its actual use. A tool like Sofia, Suvoda's AI assistant, which supports site user inquiries and access to patient data, needs validation and monitoring that reflect the level of risk associated with those actions.
- Ongoing monitoring. AI systems change over time as data changes, so ongoing monitoring for performance and reliability is essential.
- Cross-functional coordination. Effective AI in clinical trials requires coordination among many functions, including but not limited to data science, clinical operations, quality, and regulatory teams, both within vendors and in partnership with sponsors.
These expectations align closely with how Suvoda approaches AI: as a carefully governed capability, shaped by input from cross-departmental subject matter experts and designed to support better decision making while maintaining transparency and quality.
The FDA and EMA’s guiding principles mark an important step toward clearer regulatory expectations for AI in drug development. They encourage practical, well-documented, and risk-based development and use of AI that fits within existing quality standards and frameworks.
As more detailed guidance emerges, organizations that already align their tools and processes with these principles will be better positioned to use AI confidently in clinical evidence generation.
At Suvoda, our work with AI-assisted capabilities like Sofia reflects this approach. We use AI to improve insight and efficiency in clinical trials while keeping governance, explainability, and regulatory alignment at the forefront.
Elizabeth Lavin
Chief Quality Officer,
Suvoda