
The FDA is finalizing AI guidance that demands explainability, transparency, and real-world monitoring, and most mid-market device makers aren't remotely ready. Here's what that actually means for your workflows.
Every medical device company with an AI initiative right now is having some version of the same internal conversation: *Do we wait for final guidance, or do we move?*
Wrong question.
The FDA's evolving 2026 framework around AI-enabled devices isn't going to tell you something surprising. Explainable AI. Model transparency. Ongoing real-world performance monitoring. These aren't new concepts — they're table stakes that the agency is now making enforceable. The companies that are struggling aren't struggling because they lack information. They're struggling because their development workflows were never built to support these requirements in the first place.
That's the actual problem. And it's a workflow problem, not a compliance problem.
Let's be specific about what FDA's AI guidance is pointing toward:
Explainability means your model can't be a black box. If a clinician, auditor, or reviewer asks why the model produced a given output, you need a defensible answer. Not a statistical summary — a traceable, documented rationale.
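To make that concrete, here's a minimal sketch of what a per-prediction audit record could look like. The field names and the `audit_record` helper are illustrative, not a standard; the attributions could come from SHAP or any comparable method your team already uses.

```python
import json
import hashlib
from datetime import datetime, timezone

def audit_record(model_version: str, features: dict, prediction, attributions: dict) -> str:
    """Build a per-prediction audit record: the inputs, the output, and the
    per-feature attributions that justify it, hashed for tamper-evidence.
    Illustrative sketch; field names are placeholders."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,
        "output": prediction,
        # Per-feature contribution to this specific output,
        # e.g. from SHAP or a comparable attribution method.
        "attributions": attributions,
    }
    payload = json.dumps(record, sort_keys=True, default=str)
    record["record_hash"] = hashlib.sha256(payload.encode()).hexdigest()
    return json.dumps(record, default=str)
```

The point isn't the exact schema. It's that the rationale for each output exists as a durable artifact an auditor can pull, not a statistic you reconstruct after the question is asked.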
Model transparency means your development process is visible. Training data provenance, version control, performance benchmarks across subpopulations — all of it needs to exist in a form regulators can actually review.
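As one illustration, provenance can live in something as simple as a versioned record per model release. Every field and value below is a placeholder, but the shape is the point: data snapshot, code commit, and subpopulation benchmarks together in one reviewable artifact.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRelease:
    """Illustrative provenance record for one released model version."""
    model_version: str
    training_data_snapshot: str  # dataset version or content hash
    training_code_commit: str    # git SHA that produced this model
    overall_auc: float
    subgroup_auc: dict = field(default_factory=dict)  # benchmarks across subpopulations

# Hypothetical example values, for illustration only.
release = ModelRelease(
    model_version="2.3.0",
    training_data_snapshot="sha256:9f2c...",
    training_code_commit="a1b4e7c",
    overall_auc=0.94,
    subgroup_auc={"female": 0.93, "age_65_plus": 0.90},
)
```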
Real-world performance monitoring means the work doesn't end at clearance. You're expected to track how the model performs once it's in clinical use, detect drift, and have a documented process for responding when performance degrades.
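Drift detection doesn't have to start complicated. A population stability index over your model's output score is a common first check. Here's a minimal sketch, assuming a continuous score; the thresholds in the docstring are the usual rules of thumb, not regulatory values.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """PSI between the validation-time distribution of a score and its
    production distribution. Common rule of thumb: < 0.1 stable,
    0.1-0.25 worth investigating, > 0.25 significant drift."""
    # Bin edges from the validation-time ("expected") distribution.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    # Fold out-of-range production values into the edge bins.
    observed = np.clip(observed, edges[0], edges[-1])
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    o_frac = np.histogram(observed, edges)[0] / len(observed)
    # Guard against empty bins before taking the log.
    e_frac = np.clip(e_frac, 1e-6, None)
    o_frac = np.clip(o_frac, 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))
```

Run it on a rolling window of production scores against your validation set, and treat a breach as a trigger for the documented response process, not as a silent line on a dashboard.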
None of this is technically impossible. But for most mid-market device companies, it requires rebuilding how they think about AI development from the ground up.
Large device manufacturers have regulatory affairs teams, MLOps infrastructure, and legal counsel who've been gaming out these scenarios for two years. Mid-market companies (the ones doing $50M to $500M in revenue) typically have none of that. What they have is:

- a change control process built for hardware and traditional software, not for model retraining
- AI documentation that was never written with an auditor in mind
- no post-clearance monitoring infrastructure, and no clear owner for one
- regulatory teams brought in at the end to bless finished systems, not during architecture decisions
These aren't hypothetical gaps. We see them constantly. And when FDA scrutiny increases, they become expensive gaps very quickly.
The specific failure mode I see most often: companies treat AI validation as a one-time event. They validate the model at a point in time, clear the device, and assume they're done. FDA's 2026 direction makes clear that's not how this works anymore. The model's performance in production is part of the regulatory picture. That means you need monitoring infrastructure *before* you go to market, not as an afterthought.
Here's the piece that gets overlooked in almost every early-stage conversation: what happens when the model changes?
AI models drift. Training data becomes stale. You discover a subpopulation where performance is weaker than your original validation data suggested. You want to retrain. Now you're in change control territory — and most quality management systems at mid-market companies are not set up to handle AI model updates with anything resembling efficiency.
This creates a brutal choice: move fast and accumulate regulatory risk, or move slow and watch your AI advantage evaporate. Neither is acceptable.
The answer is designing your change control process *for AI from the start* — not retrofitting your existing process onto a type of software it was never meant to handle. That means defining upfront which types of model changes require full revalidation, which can be handled through predetermined change control protocols, and what your monitoring thresholds are for triggering a review.
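As a sketch of what "defining upfront" can look like, here's an illustrative triage function. The decision rules and the 0.02 AUC threshold are placeholders; the real values belong in your protocol and your conversations with FDA.

```python
from enum import Enum

class ChangePath(Enum):
    PCCP = "predetermined change control protocol"  # pre-authorized, documented path
    FULL_REVALIDATION = "full revalidation"

def classify_change(retrained_only: bool, architecture_changed: bool,
                    new_intended_use: bool, auc_delta: float) -> ChangePath:
    """Illustrative triage: which model changes can ride a PCCP and which
    force full revalidation. Rules and thresholds are hypothetical."""
    # Changes to intended use or model architecture fall outside any PCCP.
    if new_intended_use or architecture_changed:
        return ChangePath.FULL_REVALIDATION
    # Retraining on refreshed data, within a pre-agreed performance band.
    if retrained_only and abs(auc_delta) <= 0.02:
        return ChangePath.PCCP
    return ChangePath.FULL_REVALIDATION
```

The value isn't the code. It's that the categories and thresholds are written down before the first retrain, where your reviewers and FDA can see them.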
If you're a VP of Regulatory, a Quality Director, or a product leader at a mid-market device company, here's how I'd approach the next 90 days:
1. Audit your current AI documentation against FDA's explainability expectations. Can you produce a clear, auditor-ready explanation of how your model makes decisions? If not, that's your first gap.
2. Map your monitoring infrastructure. Do you have a mechanism to track model performance post-clearance? Who owns it? What's the escalation path when metrics degrade? (A minimal policy sketch follows this list.)
3. Review your QMS for AI change control. Your existing change control process was built for hardware and traditional software. Does it account for model retraining? Version management? Predetermined change control protocols?
4. Align your regulatory and development teams now. The worst time to discover a process gap is during a pre-submission meeting. The regulatory team needs to be in the room when AI architecture decisions are being made — not brought in at the end to bless a finished system.
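For item 2, here's what a monitoring policy can look like as reviewable configuration rather than tribal knowledge. Metric names, thresholds, and owners below are placeholders; what matters is that ownership and the escalation path exist in writing before clearance.

```python
# Illustrative post-market monitoring policy. All values are placeholders.
MONITORING_POLICY = {
    "metrics": {
        "auc_rolling_30d": {"floor": 0.90, "owner": "quality_director"},
        "psi_input_drift": {"ceiling": 0.25, "owner": "ml_engineering"},
    },
    "escalation": [
        "automated alert to the metric owner",
        "quality review within 5 business days",
        "CAPA and change-control triage if the breach is confirmed",
    ],
    "review_cadence": "quarterly, plus on any threshold breach",
}
```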
The 2026 guidance isn't a finish line. It's a signal that FDA is committed to treating AI as a living system that requires ongoing oversight. The companies that build their processes around that reality now will have a real competitive advantage. The ones that wait will be retrofitting under pressure.
That's never a good place to be.
Dealing with a similar challenge?
We work with mid-market companies in regulated industries to build AI workflows that actually hold up.
Let's Talk

Sean Cummings
Founder of Laminar Flow Analytics. Specializes in AI workflow automation for regulated industries — medical device, financial services, and complex logistics operations.