Author: Elizabeth Lavin, Chief Quality Officer, Suvoda
Snapshot:
- FDA and EMA align on good AI practice in drug development, setting shared expectations for governance, validation, and transparency.
- Sponsors must define, document, and validate AI use clearly, treating AI outputs as regulated evidence subject to quality standards.
- Clinical trial technology vendors must embed explainability and oversight, building AI tools that support compliant, trustworthy decision-making.
In January 2026, the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) published ten guiding principles of good AI practice for drug development. This joint effort signals aligned expectations for how artificial intelligence should be viewed across the drug-development lifecycle.
For clinical trial technology vendors like Suvoda and for sponsors adopting AI-enabled tools, these principles offer helpful clarity. They reinforce that AI is no longer a novelty. It is a scientific tool that must be well governed, transparent, and reliable when used to support decisions that affect patients and regulatory evidence.
What the guiding principles are—in practical terms
The ten principles set expectations for responsible, trustworthy use of AI to generate or analyze evidence across the drug product life cycle. In practical terms, they emphasize*:
- Human-centric by design
AI should support ethical, human decision-making, not replace it.
- Risk-based oversight
The level of validation and monitoring should match the potential impact of the AI system.
- Alignment with existing standards
AI systems should follow the same legal, ethical, technical, cybersecurity, and GxP standards as other regulated tools.
- Clear context of use
The intended purpose of the AI must be well defined within existing workflows.
- Multidisciplinary expertise
Teams should combine AI, clinical, regulatory, and operational knowledge when using AI in drug development.
- Strong data governance
Data sources, processing steps, and decisions must be documented and traceable.
- Thoughtful model design
Models should be robust, explainable, and built on data that is fit for purpose.
- Performance evaluation
Assessment should consider both technical accuracy and how humans interact with the system.
- Lifecycle management
AI systems require ongoing monitoring to detect issues that may arise over time.
- Clear communication
Outputs, limitations, and risks must be clear and understandable to sponsors, clinicians, and regulators.
*Read the FDA and EMA’s exact language from the 10 principles here.
These principles do not create new laws. Instead, they clarify how regulators will evaluate AI when it is part of evidence used in regulatory decisions.
Why these principles matter
Sponsors have been exploring AI in clinical development for years, including study design, data review, risk prediction, and operational planning. Until now, there has been limited regulatory alignment on how this use should be governed.
These principles are important for many reasons, including:
- Consistency across regions: Shared FDA and EMA expectations reduce uncertainty for global programs.
- Clear regulatory expectations: AI outputs must meet familiar standards for documentation, transparency, and risk management.
- Preparation for future guidance: These principles may shape more detailed guidance in the coming years.
For sponsors, this means defining the AI’s purpose clearly, documenting how it works, validating its performance appropriately, and being ready to explain it to regulators. Doing this early can build confidence and reduce friction later in the review process.
Implications for clinical trial technology vendors
For partners like Suvoda, the principles reinforce the need to design AI capabilities with governance and transparency built in from the start. I suggest keeping in mind the need for:
1. Documented, explainable AI workflows
Sponsors and regulators will expect traceability. They will want to understand where data comes from, how the model works, how outputs are generated and verified, and where humans remain involved. Explainability and auditability become core product capabilities.
2. Built-in governance and risk controls
A risk-based approach means validating AI in the context of its actual use. AI tools like Sofia, Suvoda's AI assistant that supports site-user inquiries and access to patient data, need validation and monitoring that reflect the level of risk associated with those actions.
3. Lifecycle focus over one-time validation
AI systems change over time as data changes. Ongoing monitoring for performance and reliability is essential.
4. Multidisciplinary collaboration
Effective AI in clinical trials requires coordination across many functions, including data science, clinical operations, quality, and regulatory teams, both within vendors and in partnership with sponsors.
These expectations align closely with how Suvoda approaches AI. We treat it as a carefully governed capability, shaped by input from cross-departmental subject matter experts and designed to support better decision-making while maintaining transparency and quality.
Looking ahead
The FDA and EMA’s guiding principles mark an important step toward clearer regulatory expectations for AI in drug development. They encourage practical, well-documented, and risk-based development and use of AI that fits within existing quality standards and frameworks.
As more detailed guidance emerges, organizations that already align their tools and processes with these principles will be better positioned to use AI confidently in clinical evidence generation.
At Suvoda, our work with AI-assisted capabilities like Sofia reflects this approach. We use AI to improve insight and efficiency in clinical trials while keeping governance, explainability, and regulatory alignment at the forefront.