The FDA's 2025 draft guidance on AI-enabled devices buried a major compliance shift inside a document that doesn't even say 'human factors' in the title. Here's what it actually means for your validation strategy.
In January 2025, the FDA released draft guidance on AI-enabled device software functions. The title is bureaucratic and forgettable. The content is not.
Buried inside a lifecycle management document is a set of expectations that will fundamentally change how mid-market medical device manufacturers have to think about AI validation — specifically around usability, cognitive risk, and what the agency is calling Human-AI team performance.
If your regulatory team hasn't flagged this yet, they need to. And if your AI development process still treats human factors as a box to check at the end of a project, you have a real problem coming.
The FDA isn't just asking whether your AI algorithm performs well on a test dataset. They're asking whether clinicians and operators can actually use it correctly under real-world conditions — and whether you can prove it.
The draft guidance introduces explicit expectations around:

- Usability validation under realistic conditions of use, not just algorithm performance on a test dataset
- Cognitive risk, including cognitive load and interface designs that encourage over-reliance on AI recommendations
- Transparency of AI outputs, so users can understand what the system is telling them and how much weight to give it
- Human-AI team performance: how well the clinician and the AI perform together, not just how the model scores in isolation
None of this is entirely new territory for human factors engineering. What's new is the FDA making it explicit, required, and tied to marketing submissions for AI-enabled SaMD.
Large device companies have dedicated human factors teams and years of institutional practice building usability files. Mid-market manufacturers typically don't.
What they usually have is a small regulatory team stretched across multiple submissions, a contract RA consultant who comes in for specific projects, and an engineering or data science team that built the AI workflow without a structured usability process.
That combination worked reasonably well when human factors requirements were predictable and well-understood. It's going to struggle when the FDA starts asking nuanced questions about cognitive load, transparency, and team-level performance validation for AI systems.
The risk isn't just a rejected submission. It's a submission that gets held in review for six months while you try to retrofit a usability engineering file onto a system that was never designed with those requirements in mind. That's expensive, slow, and demoralizing for your team.
Here's the thing most manufacturers get wrong: they treat human factors as a documentation exercise. Write the use specification. Run the summative study. File the report. Done.
For AI-enabled devices, that approach will not hold up. The FDA's new expectations require you to demonstrate that you thought about human-AI interaction throughout the development process — not just at the end when you're trying to close out your design history file.
Cognitive risk doesn't emerge when you write a document about it. It gets baked into the system when your model surfaces uncertain predictions without adequate confidence indicators. It gets baked in when your interface design trains users to accept AI recommendations without meaningful review. By the time you're running a summative usability study, those problems are expensive to fix.
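To make that first failure mode concrete, here is a minimal sketch of what building confidence indicators in at design time can look like. Everything in it is illustrative: the threshold value, the field names, and the review rule are hypothetical, and in practice would come out of your own risk analysis, not the guidance.

```python
from dataclasses import dataclass

# Hypothetical review threshold. In practice this value comes out of your
# risk analysis and usability work, not a guess.
REVIEW_THRESHOLD = 0.85

@dataclass
class Prediction:
    label: str         # what the model concluded
    confidence: float  # the model's own uncertainty estimate, 0.0 to 1.0

@dataclass
class ClinicalDisplay:
    label: str
    confidence: float
    requires_review: bool  # drives a distinct UI state, not a buried flag
    rationale: str         # plain-language text shown to the clinician

def present(prediction: Prediction) -> ClinicalDisplay:
    """Wrap raw model output in a display contract that makes uncertainty
    visible instead of presenting every prediction with equal weight."""
    low_confidence = prediction.confidence < REVIEW_THRESHOLD
    if low_confidence:
        rationale = (f"Model confidence {prediction.confidence:.0%} is below "
                     "the review threshold; independent review required.")
    else:
        rationale = f"Model confidence {prediction.confidence:.0%}."
    return ClinicalDisplay(
        label=prediction.label,
        confidence=prediction.confidence,
        requires_review=low_confidence,
        rationale=rationale,
    )
```

The specific numbers don't matter. What matters is that uncertainty handling becomes an explicit, testable design decision you can point to in a submission, instead of something you try to retrofit during a summative study.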
The manufacturers who will navigate this guidance well are the ones who integrate human factors thinking into their AI development cycle from the start — during model design, during interface development, during internal testing, not just in a pre-submission validation study.
If you're developing or already operating an AI-enabled device, here's a practical framework for the next 90 days:
1. Audit your current usability process against the draft guidance. Line up your existing human factors documentation against the cognitive risk and transparency expectations in the January 2025 draft. Find the gaps before the FDA does.
2. Map your human-AI interaction points explicitly. For every point in your clinical workflow where a human acts on or in response to AI output, document what the AI is communicating, how it's presented, and what failure modes that interaction creates. (One way to structure this inventory is sketched after this list.)
3. Stress-test your transparency claims. If your submission will assert that the AI's outputs are interpretable or explainable, make sure you can actually demonstrate that with end users — not just describe it in a design document.
4. Involve your regulatory team in AI design reviews now. Not when you're ready to file. Now. The questions they'll need to answer in a submission review need to inform decisions that are being made in development.
5. Don't wait for the guidance to finalize. Draft guidance signals direction. The expectation is already shifting. Build your validation strategy around where the agency is going, not around what got the last submission cleared.
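One lightweight way to start on step 2 is to treat the interaction map as structured data rather than free-form prose, so every human-AI touchpoint carries the same required fields and gaps are obvious. The sketch below is a hypothetical structure; the field names and the example entry are illustrative, not drawn from the guidance.

```python
from dataclasses import dataclass, field

@dataclass
class InteractionPoint:
    """One point in the clinical workflow where a human acts on AI output.
    Field names are illustrative; align them with your own usability file."""
    workflow_step: str  # where in the clinical workflow this occurs
    ai_output: str      # what the AI is communicating
    presentation: str   # how it's presented to the user
    user_action: str    # what the human is expected to do with it
    failure_modes: list[str] = field(default_factory=list)  # what can go wrong
    mitigations: list[str] = field(default_factory=list)    # design controls

# Hypothetical example entry
triage_alert = InteractionPoint(
    workflow_step="ED triage review",
    ai_output="Risk score with a binary high/low flag",
    presentation="Color-coded banner in the worklist",
    user_action="Prioritize or deprioritize the case",
    failure_modes=[
        "Automation bias: user accepts the flag without reviewing source data",
        "Alert fatigue: repeated low-value flags get ignored",
    ],
    mitigations=[
        "Require one-click view of contributing factors before sign-off",
        "Track override rates as a human-AI team performance metric",
    ],
)
```

The payoff of structuring it this way is that the interaction inventory becomes something you can review, diff, and trace into your usability file, rather than a narrative that quietly drifts out of date.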
The FDA is not trying to make AI development impossible for medical device manufacturers. They're trying to ensure that AI systems deployed in clinical settings actually work safely when real humans use them under real conditions.
That's a reasonable ask. But meeting it requires a fundamentally different approach to how you build and validate AI workflows — one that most mid-market manufacturers haven't built yet.
The window to get ahead of this is now, before your next submission is in review.
Dealing with a similar challenge?
We work with mid-market companies in regulated industries to build AI workflows that actually hold up.
Let's talk.

Sean Cummings
Founder of Laminar Flow Analytics. Specializes in AI workflow automation for regulated industries — medical device, financial services, and complex logistics operations.