
FDA's evolving AI/ML SaMD framework isn't a checklist you complete at launch. It's a continuous obligation — and most mid-market device companies aren't built for that.
Most medical device companies treat FDA clearance like a finish line. You build the thing, you document the thing, you submit the thing, you get cleared — and then you move on.
That model is already outdated for traditional devices. For AI/ML-based Software as a Medical Device, it's a liability.
FDA's evolving framework makes one thing unmistakably clear: an AI/ML SaMD is never really done. The algorithm learns. The data shifts. The clinical environment changes. And regulators expect you to be watching all of it, continuously, with documented processes to match.
If your quality system was built around a static product, you have a gap. And that gap gets more expensive the longer you wait to close it.
The regulatory trajectory here isn't subtle. Starting with the 2019 discussion paper proposing a regulatory framework for modifications to AI/ML-based SaMD, through the 2021 AI/ML-Based SaMD Action Plan, and into the 2025 draft guidance on AI-enabled device software functions, FDA has been building toward a Total Product Lifecycle (TPLC) model for AI/ML devices.
What that means in practice:
Predetermined Change Control Plans (PCCPs) — Before you update your algorithm, you need to have documented, in advance, the types of changes you're permitted to make without going back for new clearance. If you don't have a PCCP in your submission, every meaningful model update is potentially a new 510(k) trigger. That's not a hypothetical. That's the rule.
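A PCCP itself is a regulatory document, not code. But some engineering teams keep a machine-readable mirror of its change categories so a release pipeline can check a proposed model update against the filed plan before anything ships. A minimal sketch in Python, where every category name and evidence item is invented for illustration:

```python
# Illustrative only: category names, scopes, and evidence lists are invented,
# not taken from FDA guidance or any real PCCP.
PCCP_CHANGE_CATEGORIES = {
    "retrain_existing_architecture": {
        "within_pccp": True,   # permitted per the filed plan, no new submission
        "required_evidence": ["locked-test-set metrics", "subgroup parity check"],
    },
    "add_new_input_data_source": {
        "within_pccp": False,  # out of scope: assess as a potential new 510(k)
        "required_evidence": [],
    },
}

def change_is_within_pccp(category: str) -> bool:
    """Gate a (hypothetical) model release pipeline on the filed plan."""
    return PCCP_CHANGE_CATEGORIES.get(category, {}).get("within_pccp", False)
```

The point isn't the data structure. It's that "is this change within our PCCP?" becomes a question your pipeline asks automatically, instead of one somebody remembers to ask.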
Postmarket performance monitoring — FDA expects you to track real-world algorithm performance over time. Not just adverse event reporting. Actual ongoing monitoring that would surface model drift, population shift, or degraded performance before it becomes a patient safety issue.
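What does detecting model drift actually look like at the code level? Here is a minimal sketch of one common approach, a Population Stability Index comparing live output scores against a clearance-era baseline. The 0.2 flag and the assumption that outputs are probabilities are illustrative conventions, not anything from FDA guidance:

```python
import numpy as np

def population_stability_index(baseline_scores, recent_scores, bins=10):
    """Compare the live score distribution against the clearance-era
    baseline. PSI above ~0.2 is a common (non-regulatory) drift flag."""
    # Assumes model outputs are probabilities in [0, 1].
    edges = np.linspace(0.0, 1.0, bins + 1)
    base = np.histogram(np.clip(baseline_scores, 0, 1), bins=edges)[0] / len(baseline_scores)
    new = np.histogram(np.clip(recent_scores, 0, 1), bins=edges)[0] / len(recent_scores)
    # Clip empty bins so the log term stays finite.
    base = np.clip(base, 1e-6, None)
    new = np.clip(new, 1e-6, None)
    return float(np.sum((new - base) * np.log(new / base)))
```

A check like this runs on a schedule against logged outputs. Anything over threshold opens a human review; it never auto-remediates.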
Transparency and bias documentation — The 2025 draft guidance gets specific about documenting training data, known limitations, and bias mitigation approaches. This isn't academic. If your training data skewed toward a specific demographic and your device performs differently across populations, FDA wants to know you knew that — and managed it.
None of this is new in spirit. FDA has always cared about postmarket surveillance. What's new is the expectation that AI/ML products have *additional* infrastructure layered on top of standard QMS requirements.
Large device companies have regulatory affairs departments with dedicated SaMD teams. They have change control boards, postmarket surveillance coordinators, and data science teams that can instrument an algorithm for monitoring.
Mid-market companies — say, $20M to $300M in revenue — usually have one or two regulatory affairs people who are already stretched across the full product portfolio. Adding continuous AI/ML compliance obligations on top of that without changing the workflow is how you end up with a documentation backlog that surfaces at exactly the wrong time.
The operational reality: most mid-market device companies are trying to implement AI to *reduce* manual burden. But the FDA framework for AI/ML SaMD *adds* compliance surface area that someone has to manage. You have to staff for both.
The trap is thinking you can bolt ongoing monitoring onto your existing QMS as an afterthought. You can't. Postmarket AI surveillance requires different data pipelines, different review cadences, and different escalation triggers than traditional adverse event monitoring. If those aren't designed in from the start, you'll be retrofitting under pressure.
Companies that are doing this well share a few common traits:
They treat the PCCP as a product design artifact, not a regulatory checkbox. The best PCCPs are written in collaboration between regulatory, clinical, and engineering teams — before development is complete. They force a useful conversation about which algorithm changes are clinically meaningful and which are routine optimization. That conversation should be happening anyway.
They build monitoring into the deployment architecture, not into a spreadsheet. Performance thresholds, drift alerts, subgroup analysis — these need to be automated to the extent possible, with human review triggered by anomalies. A quarterly manual audit of model performance isn't sufficient at scale.
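As a sketch of what "automated, with human review triggered by anomalies" can look like, here's a subgroup sensitivity check. The data shape and the 0.90 floor are assumptions for illustration; your real thresholds belong in your monitoring plan:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    subgroup: str
    metric: str
    value: float
    floor: float

def check_subgroup_sensitivity(window_counts, sensitivity_floor=0.90):
    """window_counts maps subgroup -> (true_positives, false_negatives)
    for the latest monitoring window. Returns alerts for human review."""
    alerts = []
    for subgroup, (tp, fn) in window_counts.items():
        if tp + fn == 0:
            continue  # no positive cases this window; track separately if chronic
        sensitivity = tp / (tp + fn)
        if sensitivity < sensitivity_floor:
            alerts.append(Alert(subgroup, "sensitivity", sensitivity, sensitivity_floor))
    return alerts
```

Every returned alert is a trigger for a person, not an automated action.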
They have a documented escalation path. When monitoring surfaces a potential issue, who decides if it's a reportable event? Who decides if it triggers a PCCP deviation? That decision tree needs to exist before you need it.
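One way to keep that decision tree from living as tribal knowledge is to encode the routing itself, with the named owners and real thresholds documented in your SOPs. A sketch with invented thresholds, reusing the drift and subgroup checks above:

```python
from enum import Enum, auto

class Escalation(Enum):
    LOG_AND_CONTINUE = auto()
    ENGINEERING_REVIEW = auto()
    PCCP_DEVIATION_ASSESSMENT = auto()
    REPORTABILITY_REVIEW = auto()   # potential reportable event; regulatory decides

def route_signal(psi: float, subgroup_alerts: list, harm_suspected: bool) -> Escalation:
    """Triage order is illustrative: patient safety first, then PCCP scope,
    then routine engineering follow-up."""
    if harm_suspected:
        return Escalation.REPORTABILITY_REVIEW
    if subgroup_alerts:
        return Escalation.PCCP_DEVIATION_ASSESSMENT
    if psi > 0.2:  # same illustrative drift threshold as the PSI sketch above
        return Escalation.ENGINEERING_REVIEW
    return Escalation.LOG_AND_CONTINUE
```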
If you're a mid-market medical device company with an AI/ML component in your product — or planning to add one — run this quick diagnostic:
1. Does your current QMS have explicit procedures for AI/ML model updates and change assessment?
2. Do you have a PCCP in your most recent submission, or a plan to include one?
3. Is your postmarket surveillance process actually capable of detecting algorithm drift — or is it just adverse event collection with 'AI' written on the form?
4. Do your regulatory and engineering teams have a shared definition of what constitutes a significant versus non-significant algorithm change?
If you can't answer yes to at least three of these, you're carrying more regulatory risk than your leadership team probably realizes.
FDA isn't waiting for the industry to catch up. The 2025 draft guidance signals tighter expectations ahead. The companies that build compliant AI infrastructure now won't just pass audits — they'll move faster, because they won't be retrofitting every time the rules tighten.
Clearing your device was never the finish line. For AI/ML SaMD, it's barely the starting gun.
Dealing with a similar challenge?
We work with mid-market companies in regulated industries to build AI workflows that actually hold up.
Let's Talk

Sean Cummings
Founder of Laminar Flow Analytics. Specializes in AI workflow automation for regulated industries — medical device, financial services, and complex logistics operations.