In my previous post, I explored the European Commission guidelines and the EU AI Act, looking at how they influence AI applications and the validation strategies surrounding them. This time, I want to shift focus to the FDA’s draft guidance on AI, released in January 2025. Although still in draft form, the document is an important signal of how the FDA is approaching AI, and it helps us understand which validation methodologies regulators expect and how those expectations align with global regulatory thinking.

The FDA’s paper highlights two main areas: a risk-based credibility assessment framework and the importance of lifecycle maintenance of AI models.
Risk-Based Credibility Assessment Framework
The credibility framework is designed to establish whether an AI model can be trusted within its intended context of use (COU). The FDA outlines a series of steps that organizations should follow:
- Define the questions of interest the AI model is intended to answer.
- Describe the COU in detail, including the role of the model, the type of output it generates, and its boundaries.
- Assess model risk by considering two factors:
  - Model influence – how much the AI contributes to the decision.
  - Decision consequence – the potential impact if the AI fails.
- Develop and execute a credibility plan, documenting how the model was developed, the datasets used, training processes, and performance results. Drop me a message on LinkedIn if you need a detailed credibility plan template.
- Maintain transparency through clear documentation of design choices, assumptions, limitations, and data sources.
- Review outcomes with the FDA to determine whether the model is adequate for its intended purpose.
The FDA places particular emphasis on the credibility plan, treating it as the foundation of validation. This plan provides regulators with a structured view of how the AI was built, tested, and proven reliable for the specific COU.
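The risk assessment step above combines two factors into an overall model risk. As a thought experiment, that logic can be sketched in a few lines of code. Note that the three-level scale and the rule that the higher factor dominates are my own illustrative assumptions, not something the draft guidance prescribes:

```python
# Illustrative sketch of combining the FDA's two risk factors:
# model influence (how much the AI contributes to the decision) and
# decision consequence (the impact if the AI fails).
# The LOW/MEDIUM/HIGH scale and the max() rule are assumptions for
# illustration only.
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def assess_model_risk(model_influence: Level, decision_consequence: Level) -> Level:
    """Combine the two factors; here the higher of the two dominates."""
    return Level(max(model_influence, decision_consequence))

# Example: a model with high influence but medium consequence
risk = assess_model_risk(Level.HIGH, Level.MEDIUM)
print(risk.name)  # HIGH
```

In practice, the mapping from the two factors to an overall risk tier would be agreed with the agency as part of the credibility plan, and need not be a simple maximum.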
Lifecycle Maintenance
Validation, however, is not a one-time exercise. The FDA stresses that AI models need ongoing oversight to remain credible. Sponsors and organizations are expected to:
- Set up governance structures to monitor performance over time.
- Review model metrics regularly to confirm that the system is still fit for its COU.
- Revalidate the model whenever changes are introduced.
In short, the FDA expects AI systems to be treated as living tools that require continuous monitoring, not static products that can be certified once and forgotten.
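The monitoring and revalidation expectations above can be pictured as a simple check against agreed performance thresholds. The sketch below is hypothetical: the metric names and thresholds are placeholders, and a real monitoring setup would be defined in the credibility plan:

```python
# Hypothetical lifecycle-monitoring sketch: compare current metrics
# against thresholds agreed in the credibility plan and flag the model
# for revalidation when performance drifts below them.
# Metric names and threshold values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MetricCheck:
    name: str
    current: float
    threshold: float  # minimum acceptable value

def needs_revalidation(checks: list[MetricCheck]) -> list[str]:
    """Return the names of metrics that fell below their thresholds."""
    return [c.name for c in checks if c.current < c.threshold]

checks = [
    MetricCheck("sensitivity", current=0.91, threshold=0.90),
    MetricCheck("specificity", current=0.84, threshold=0.88),
]
failing = needs_revalidation(checks)
if failing:
    print(f"Revalidation required: {', '.join(failing)}")
```

A governance process would run such checks on a defined schedule and route any failures into a documented revalidation workflow.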
Engagement with the FDA
The draft guidance also makes clear that the FDA encourages early and ongoing engagement with sponsors. By discussing credibility assessment activities upfront, tailored to the model’s risk and COU, organizations can align expectations with the agency, surface potential challenges early, and avoid costly surprises later in the regulatory process.
Alignment with EMA and Global Standards
It is worth noting that the FDA’s approach is not emerging in isolation. The guidance is broadly aligned with the EMA and the EU AI Act. All three frameworks emphasize similar principles:
- Validation should be risk-based.
- Thorough documentation is essential.
- Human oversight must remain central.
- Governance structures should be in place to review models regularly.
This convergence means organizations can design a single validation and governance strategy that is effective across multiple regulatory jurisdictions, rather than building region-specific approaches.
While still in draft form, the FDA guidance offers a clear roadmap for sponsors and developers of AI systems in healthcare and life sciences. Define the context, assess the risks, plan for credibility, maintain strong documentation, and treat validation as an ongoing responsibility. These are not just regulatory checkboxes; they are principles that help ensure AI systems remain trustworthy, safe, and effective throughout their lifecycle.
Note: You can review the full FDA guidance here: Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products.
If you have any comments, feedback, or requests, please feel free to connect with me on LinkedIn. And if you liked this post, don’t forget to share it with your network!