Medical Device · FDA Compliance · AI Workflow · SaMD · Regulated AI

The FDA's 2026 AI Framework Isn't a Compliance Problem. It's an Operational One.

Sean Cummings · April 20, 2026 · 6 Min Read

Most medical device companies are treating the FDA's new AI requirements as a regulatory checkbox. That's exactly why they'll fail. Here's what the framework actually demands — and what it means for how you build.


Every medical device company with an AI product in development right now is having some version of the same conversation: *When do we loop in regulatory?*

Wrong question.

The FDA's 2026 AI framework — covering Predetermined Change Control Plans (PCCPs), real-world performance monitoring, and explainable AI requirements — isn't something you hand off to your regulatory affairs team at the end of the development cycle. It's a set of constraints that have to be designed into the workflow from day one. If you're treating it otherwise, you're building a product that will either stall at submission or break in the field.

Let me be direct about what's actually new here — and what it means for how you operate.

Explainability Isn't a Feature. It's Architecture.

For Class II AI devices — a category that covers the majority of clinical decision support tools — the FDA now expects both global and instance-level explainability. Global means you can describe how your model generally behaves. Instance-level means the system can explain *this specific prediction, for this specific patient, right now*.

That's not a documentation task. That's an architectural decision you have to make before you write the first line of model code. If your team built a black-box model and is now trying to retrofit explainability on top of it, you're in for an expensive rebuild — or a submission that doesn't get through.
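To make the architectural point concrete, here's a minimal sketch of instance-level explainability falling out of the model form itself, using an inherently interpretable linear model trained on synthetic data. The feature names, data, and model are illustrative assumptions, not from any cleared device:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature names for an illustrative clinical model.
FEATURES = ["age", "heart_rate", "lactate", "creatinine"]

# Synthetic training data, purely for demonstration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X @ np.array([0.5, 1.2, 2.0, 0.8]) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_instance(x: np.ndarray) -> dict:
    """Instance-level explanation: the per-feature contribution to the
    logit for this specific prediction. Exact for linear models; a
    black-box model needs a post-hoc attribution method (e.g. SHAP)
    chosen and clinically validated up front, not retrofitted."""
    contributions = model.coef_[0] * x
    return {
        "risk_probability": round(float(model.predict_proba(x.reshape(1, -1))[0, 1]), 3),
        "feature_contributions": dict(zip(FEATURES, contributions.round(3))),
    }

print(explain_instance(X[0]))
```

The point of the sketch isn't the linear model. It's that the explanation path was decided when the architecture was decided, which is exactly what a retrofit can't give you.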

The companies that are positioned well right now made the call early: explainability-compatible model architecture, clinical validation of the explanation outputs, and documentation that connects the two. That's a development philosophy, not a compliance add-on.

PCCPs Sound Flexible. They're Actually More Work.

Predetermined Change Control Plans were supposed to give AI device makers a way to update their models post-clearance without going back to the FDA every time. In practice, they shift the burden forward — you have to specify in advance exactly what kinds of changes you're allowed to make, under what conditions, with what monitoring in place.

For most mid-market companies, this is harder than it sounds. It requires your clinical, regulatory, and engineering teams to agree — before launch — on a change taxonomy, performance thresholds, and monitoring triggers. That's a cross-functional alignment challenge as much as it is a technical one.
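To make that alignment concrete, here is a hypothetical sketch of the kind of machine-readable slice a PCCP forces you to pin down: a change taxonomy, performance thresholds, and monitoring triggers. Every name and number below is invented for illustration; an actual PCCP is a regulatory document negotiated with the FDA, not code:

```python
from dataclasses import dataclass

@dataclass
class PermittedChange:
    """One entry in a hypothetical PCCP change taxonomy (illustrative only)."""
    change_type: str                # e.g. "retrain_on_new_site_data"
    requires_revalidation: bool     # does this change trigger the full protocol?
    min_auroc: float                # performance floor that must hold post-change
    max_subgroup_auroc_drop: float  # subgroup-performance guardrail

@dataclass
class MonitoringTrigger:
    """A field condition and the pre-agreed response to it."""
    metric: str       # e.g. "input_drift_psi"
    threshold: float
    action: str       # e.g. "freeze_model_and_escalate"

PCCP_SKETCH = {
    "permitted_changes": [
        PermittedChange("retrain_on_new_site_data", True, 0.85, 0.03),
        PermittedChange("recalibrate_output_probabilities", False, 0.85, 0.02),
    ],
    "out_of_scope": ["change_model_architecture", "add_new_input_feature"],
    "monitoring_triggers": [
        MonitoringTrigger("input_drift_psi", 0.25, "freeze_model_and_escalate"),
        MonitoringTrigger("rolling_auroc", 0.82, "root_cause_review"),
    ],
}
```

Notice that every field in that structure is a decision clinical, regulatory, and engineering have to make together, and defend later.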

If your teams don't have a shared operating model for how decisions get made post-clearance, a PCCP becomes either a straitjacket (too narrow) or a liability (too vague). Neither gets you where you want to go.

Pre-Sub Meetings Are Not Optional

The FDA's Pre-Submission meeting process exists for a reason. For AI devices, a well-prepared Pre-Sub meeting should address your classification and regulatory pathway, your PCCP strategy, your clinical evidence plan, your explainable AI (XAI) approach, and your post-market monitoring design — all at once.

Most companies go into Pre-Sub meetings underprepared. They haven't aligned internally on these questions, so they're effectively using the FDA as a sounding board for unresolved internal debates. That burns goodwill, extends timelines, and sometimes surfaces problems late enough to cause real damage.

The companies that use Pre-Sub effectively treat it like a dress rehearsal for submission. They've already made the hard calls internally. They're using the meeting to confirm alignment with the FDA — not to figure out what they think.

The Real Operational Challenge

Here's the uncomfortable truth: the FDA's framework is demanding something most mid-market medical device companies don't have — a functioning operating model for AI that spans development, clinical, regulatory, and post-market functions.

Not a committee. Not a policy. An actual workflow where the right people are making the right decisions at the right time, with documentation that holds up under scrutiny.

The companies struggling with the 2026 framework aren't struggling because the requirements are too hard. They're struggling because they built their AI capability in a silo — usually inside an engineering or data science team — and now they're trying to bolt on regulatory infrastructure after the fact.

That's not a technical problem. It's an organizational one.

What You Should Actually Do

If you're building or scaling an AI-enabled medical device, here's the practical framework:

1. Make the architecture decision before you make the model decision. Explainability requirements should constrain your model selection, not the other way around. If you can't explain it at the instance level, you can't clear it.

2. Write your PCCP before you go to market, not after. It forces the cross-functional alignment you need anyway — and it means you're not scrambling post-clearance every time you want to update the model.

3. Treat Pre-Sub as an internal milestone, not an external meeting. The discipline required to prepare for Pre-Sub — resolving your regulatory pathway, your evidence plan, your monitoring approach — is the same discipline you need to build a product that survives in the field.

4. Build a post-market monitoring function before you need it. Real-world performance monitoring isn't optional under the 2026 framework. If you don't have the infrastructure to detect model drift and act on it (a minimal sketch follows this list), you're not compliant — you're just not caught yet.
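As a concrete example of the monitoring infrastructure point 4 refers to, here is a minimal drift check using the Population Stability Index, a common (not FDA-mandated) drift metric. The data, threshold, and trigger action are illustrative assumptions:

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               observed: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a validation-time distribution and a field distribution.
    Common rule of thumb (an assumption, not an FDA threshold):
    PSI > 0.25 signals meaningful drift."""
    # Bin edges come from the reference (validation-era) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    o_frac = np.histogram(observed, bins=edges)[0] / len(observed)
    # Clip to avoid log(0) in sparse bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    o_frac = np.clip(o_frac, 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

# Simulated example: validation-era inputs vs. shifted field inputs.
rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 10_000)
field = rng.normal(0.4, 1.1, 10_000)
psi = population_stability_index(baseline, field)
if psi > 0.25:  # illustrative trigger; the real one lives in your PCCP
    print(f"PSI={psi:.2f}: drift trigger fired, escalate per monitoring plan")
```

The check itself is twenty lines. The hard part is the organizational loop around it: who sees the trigger, who decides, and what the PCCP permits them to do.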

The FDA isn't asking for anything unreasonable. They're asking that you build AI the way any serious operator would build a high-stakes production system: with explainability, change control, and ongoing monitoring built in.

The companies that understand this — and build accordingly — will move faster in the long run. The ones treating it as a compliance tax will keep hitting walls they could have avoided.

Dealing with a similar challenge?

We work with mid-market companies in regulated industries to build AI workflows that actually hold up.

Let's Talk

Sean Cummings

Founder of Laminar Flow Analytics. Specializes in AI workflow automation for regulated industries — medical device, financial services, and complex logistics operations.
