Medical Device · FDA · AI Validation · Regulatory · Change Control

FDA's New AI Rules Aren't Your Biggest Problem. Your Development Process Is.

Sean Cummings
April 21, 2026 · 6 min read

The medical device companies getting buried by FDA's evolving AI framework aren't failing on regulatory knowledge — they're failing because they bolted compliance onto a workflow that was never built to carry it.

The Companies That Are Struggling With FDA's AI Framework Have One Thing In Common

They built their AI workflow first and called the regulatory team second.

That sequencing problem is now showing up everywhere. FDA has sharpened its requirements around AI/ML-enabled medical devices — predetermined change control plans, software validation documentation, clinical evidence, cybersecurity — and a lot of mid-market device companies are realizing they have an AI system that works fine in a demo environment and a submission package that won't survive a real review.

This isn't a knowledge gap. The regulatory requirements, while evolving, aren't secret. The problem is structural.

What a Predetermined Change Control Plan Actually Demands

FDA's focus on Predetermined Change Control Plans (PCCPs) is significant, and most companies are underestimating what it actually asks of them.

A PCCP isn't just documentation of how your algorithm might change. It's a commitment to a defined, controlled, auditable process for managing those changes — before they happen. That means your development team, your QA function, your regulatory affairs people, and your clinical team all need to be operating from the same playbook at the same time.
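One way to make that concrete is to treat each predetermined change as a structured record every team works from, rather than prose buried in a regulatory binder. Here's a minimal sketch in Python, assuming a hypothetical internal schema — the field names are illustrative, not an FDA-mandated format:

```python
from dataclasses import dataclass, field

@dataclass
class PredeterminedChange:
    """One planned, bounded modification the PCCP commits to in advance."""
    change_id: str
    description: str            # what will change, e.g. retraining on new site data
    modification_protocol: str  # how the change is implemented and verified
    validation_dataset: str     # evidence source used to re-verify performance
    acceptance_criteria: dict   # metric name -> minimum value that must hold
    rollback_plan: str          # what happens if the criteria aren't met

@dataclass
class ChangeControlPlan:
    device_name: str
    intended_use: str
    changes: list = field(default_factory=list)  # list of PredeterminedChange

    def gate(self, change_id: str, observed_metrics: dict) -> bool:
        """Approve a modification only if every acceptance criterion holds."""
        change = next(c for c in self.changes if c.change_id == change_id)
        return all(observed_metrics.get(metric, float("-inf")) >= threshold
                   for metric, threshold in change.acceptance_criteria.items())
```

The gate is the point: an algorithm update ships only if it clears acceptance criteria that were written down before the change was made. That's what "defined, controlled, auditable" looks like in practice.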

For a lot of mid-market companies, that's not how any of those teams currently work. They operate in sequence, not in parallel. Engineering builds. QA reviews. Regulatory submits. Clinical evidence gets gathered when someone finally asks for it.

That linear workflow cannot produce a credible PCCP. FDA will see through a document that was reverse-engineered from a finished product.

Software Validation Is Getting Harder to Fake

FDA's expectations around software validation for AI/ML systems have also tightened. They want documentation showing your algorithm performs across the full range of real-world use cases — not just the conditions under which you tested it during development.

This is where a lot of AI systems that looked great internally start to fall apart. A model trained on data from three health systems in the Midwest does not automatically generalize to a hospital in rural Alabama or a clinic in South Florida. FDA is asking you to prove it does, or to define the boundaries of where it doesn't.
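If you have site-labeled validation data, checking that claim doesn't require exotic tooling. Here's a minimal sketch in Python, assuming a hypothetical CSV of validation cases with a site column, ground-truth labels, and model scores. The 0.80 threshold is a placeholder for whatever performance boundary your submission actually claims:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical validation table: one row per case.
# Columns assumed: site, y_true (0/1 label), y_score (model probability).
df = pd.read_csv("validation_cases.csv")

# Pooled performance can mask site-level failures, so report both.
print(f"pooled AUROC: {roc_auc_score(df['y_true'], df['y_score']):.3f}")

for site, grp in df.groupby("site"):
    if grp["y_true"].nunique() < 2:
        continue  # AUROC is undefined when a site has only one outcome class
    auroc = roc_auc_score(grp["y_true"], grp["y_score"])
    flag = "  <-- below claimed performance boundary" if auroc < 0.80 else ""
    print(f"{site:<24} n={len(grp):>5}  AUROC={auroc:.3f}{flag}")
```

Sites that fall below the boundary either need more evidence or need to be carved out of the intended use. That is the "prove it or define the boundaries" choice in practice.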

For mid-market companies without a deep clinical data operation, that's a real resource problem. You either need to build the evidence infrastructure early, or you need to scope your claims carefully enough that your existing evidence is sufficient. Both are legitimate strategies. Neither is cheap to retrofit.

The Trap Most Companies Fall Into

Here's what I see consistently: a device company builds a promising AI feature, runs a pilot that looks strong, and then hands the whole thing to regulatory affairs and says "get this cleared."

Regulatory affairs then has to build a PCCP for a system they didn't design, document validation for a process they didn't oversee, and put together clinical evidence for a use case that was never clinically scoped in the first place.

The result is a weak submission, a lengthy back-and-forth with FDA, or a decision to pull the AI feature entirely, which is expensive and demoralizing after however long the development took.

The companies that are actually succeeding here aren't smarter. They started with regulatory requirements as design constraints, not exit criteria.

What a Functional AI Development Process Looks Like In This Environment

If you're building AI/ML capability into a medical device — or trying to clear something you've already built — here's the framework that holds up:

1. Define your intended use with clinical specificity before you build. Not "AI-assisted diagnosis" — the actual clinical scenario, the patient population, the care setting, the decision being supported. That specificity drives everything downstream: training data requirements, validation scope, and PCCP parameters.

2. Treat the PCCP as a product roadmap, not a regulatory document. Your product team should own the PCCP logic. Regulatory affairs should pressure-test it. If the PCCP only lives in your regulatory binder, it won't survive a real algorithm update.

3. Build validation into your data pipeline from day one. Post-market performance monitoring isn't a Phase 3 problem. If you're not collecting the right data during deployment to support ongoing validation, you're setting yourself up for a compliance event when FDA eventually asks. (A minimal sketch of what that logging can look like follows this list.)

4. Make cybersecurity documentation a development artifact, not an audit artifact. FDA's cybersecurity expectations for AI/ML devices are part of the same integrated framework. The security architecture decisions your engineering team makes in month two will either support or undermine your submission in month eighteen.
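To make point 3 concrete, here's a minimal sketch of deployment-time logging, assuming a hypothetical log_inference helper writing to a JSONL file. A real system would use a managed store, but the fields are the point:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(model_version: str, site_id: str,
                  features: dict, score: float,
                  path: str = "inference_log.jsonl") -> None:
    """Capture everything needed to re-validate this prediction later."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the prediction to a specific PCCP state
        "site_id": site_id,              # enables per-site monitoring, as above
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "score": score,
        "outcome": None,                 # linked in later, when ground truth arrives
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

If records like this exist from day one, ongoing validation is a query. If they don't, it's a retrospective data-archaeology project in the middle of an FDA interaction.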

The Bottom Line

FDA isn't trying to block AI innovation in medical devices. They're trying to ensure that AI-enabled devices actually do what manufacturers say they do, across the populations they claim to serve, and that they remain safe as the underlying algorithms change over time.

That's not an unreasonable ask. But it requires a development process that can produce that evidence — and most mid-market companies haven't built one yet.

The companies that will clear AI/ML devices efficiently in this regulatory environment aren't the ones with the most sophisticated models. They're the ones who designed their workflow around the regulatory outcome from the start.

If you're not sure whether your current development process can produce what FDA is going to ask for, that's the right question to be asking right now — not after you've built the system.

Dealing with a similar challenge?

We work with mid-market companies in regulated industries to build AI workflows that actually hold up.

Let's Talk

Sean Cummings

Founder of Laminar Flow Analytics. Specializes in AI workflow automation for regulated industries — medical device, financial services, and complex logistics operations.
