AI Implementation · Regulated Industries · Workflow Automation · Change Management · Mid-Market

Your AI Project Didn't Fail Because of the Model. It Failed Because of What Was Under It.

Sean Cummings
April 28, 2026 · 6 min read

MIT research says 95% of enterprise AI projects don't deliver ROI. In regulated industries, the number feels even higher — and the reason isn't the technology.

The Failure Rate Nobody Wants to Talk About

MIT's NANDA Initiative put a number on what a lot of operators already know in their gut: 95% of enterprise generative AI projects fail to deliver measurable ROI.

Read that again. Ninety-five percent.

And before you assume this is a big-enterprise problem — bloated IT budgets, misaligned strategy, politics — let me tell you what we see in the mid-market. The failure pattern is identical. It just happens faster and costs more as a percentage of budget.

The companies that got burned didn't buy the wrong tool. They built on the wrong foundation.

What "Foundation" Actually Means in a Regulated Business

In the AI vendor world, "data foundation" sounds abstract. In a medical device company, a regional bank, or a specialty manufacturer, it's painfully concrete.

Your data lives in at least a dozen places. An ERP that hasn't been properly cleaned since 2019. A QMS that stores documents as PDFs with inconsistent naming conventions. A CRM that three different sales teams have been using three different ways. Shared drives full of supposedly version-controlled files that aren't under any actual version control.

Now add the compliance layer. In FDA-regulated environments, your data has provenance requirements. In financial services, your records have retention and auditability obligations. In legal and professional services, your documents carry privilege considerations.

When you drop an AI workflow on top of that — even a well-designed one — it doesn't find clean, connected, queryable knowledge. It finds a mess. And AI amplifies the mess.

That's not a model problem. That's a foundation problem.

The Move Everyone Skips

Here's what the companies that actually get ROI do differently: they treat the data preparation and contextualization work as the project, not as a prerequisite someone else handles.

Most mid-market AI rollouts follow the same flawed sequence. A vendor demo goes well. A pilot gets approved. The implementation starts. Then someone notices the AI is hallucinating on product specs, or pulling from outdated SOPs, or generating outputs that can't be traced back to a source document. At that point, the conversation turns to the model — wrong vendor, wrong architecture, needs retraining. The real problem, the messy, disconnected, uncontextualized data sitting underneath, never gets addressed.

The winners invert this. They spend the first phase — sometimes 60 to 90 days — doing what looks like boring infrastructure work. Mapping where knowledge actually lives. Identifying which data sources are authoritative versus duplicative. Building the connective tissue between systems so the AI can reason across them, not just inside one silo.

In regulated industries, this phase has a second benefit: it doubles as the documentation your compliance team will eventually require anyway.
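To make that phase concrete, here is a minimal sketch of what the mapping exercise can produce: a source inventory you can actually query, with authority and freshness made explicit. Every system name, owner, and date below is illustrative, not a prescription.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataSource:
    name: str                 # what the business calls it
    system: str               # where it physically lives
    owner: str                # accountable team
    authoritative: bool       # is this the system of record for its domain?
    last_validated: date      # when someone last confirmed it clean and current
    duplicates: list[str] = field(default_factory=list)  # overlapping sources

# Hypothetical first-pass inventory for a device manufacturer
inventory = [
    DataSource("approved_labeling", "QMS/doc_control", "RA", True, date(2025, 11, 3)),
    DataSource("complaint_records", "complaint_mgmt", "Quality", True, date(2026, 1, 15)),
    DataSource("legacy_spec_pdfs", "shared_drive/specs", "Engineering", False,
               date(2019, 6, 1), duplicates=["approved_labeling"]),
]

# Flag anything non-authoritative or stale before the AI ever sees it
for s in inventory:
    if not s.authoritative or (date.today() - s.last_validated).days > 365:
        print(f"remediate or exclude: {s.name} (owner: {s.owner})")
```

Nothing about this is sophisticated. The value is that it forces the authoritative-versus-duplicative conversation before the pilot starts, not after the first hallucination.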

What This Looks Like in Practice

Take a mid-market medical device manufacturer running a pilot on AI-assisted complaint handling. The goal: reduce the time it takes to classify complaints and draft initial CAPA documentation.

The naive implementation pulls from the complaint management system and nothing else. The AI produces drafts, but reviewers flag them constantly — the outputs don't reflect current device iterations, don't cross-reference the right version of the risk management file, and can't be tied to specific lots. The pilot gets shelved.

The right implementation starts by asking: what does a human expert actually consult when they handle a complaint? The complaint record, yes — but also the design history file (DHF), the relevant design verification test results, the current approved labeling, and recent field feedback. Build the foundation that connects those sources with proper versioning and an audit trail. Now your AI has what a human expert has. The outputs are traceable. The compliance team can live with them.
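Here is a rough sketch of what that connective tissue can look like. The systems, document IDs, and the fetch_approved_revision helper are all hypothetical stand-ins for your real document-control integrations; the point is that every source is pinned to an approved revision and logged as it's retrieved.

```python
import hashlib
import json
from datetime import datetime, timezone

def fetch_approved_revision(system: str, doc_id: str) -> dict:
    """Stand-in for a real system-of-record integration. Returns the
    currently approved revision of a document plus a content hash."""
    doc = {"system": system, "doc_id": doc_id, "revision": "C",
           "content": f"<approved content of {doc_id}>"}
    doc["sha256"] = hashlib.sha256(
        json.dumps(doc, sort_keys=True).encode()).hexdigest()
    return doc

def assemble_complaint_context(complaint_id: str):
    """Gather what a human expert consults, pinned to approved revisions,
    with an audit-trail entry for every source touched."""
    needed = {                                      # illustrative IDs throughout
        "complaint":      ("complaint_mgmt", complaint_id),
        "dhf":            ("doc_control", "DHF-1042"),
        "verification":   ("doc_control", "DV-REPORT-017"),
        "labeling":       ("doc_control", "LBL-2024-009"),
        "field_feedback": ("crm", "FB-2026-Q1"),
    }
    context, trail = {}, []
    for role, (system, doc_id) in needed.items():
        doc = fetch_approved_revision(system, doc_id)
        context[role] = doc
        trail.append({
            "role": role, "system": system, "doc_id": doc_id,
            "revision": doc["revision"], "sha256": doc["sha256"],
            "retrieved_at": datetime.now(timezone.utc).isoformat(),
        })
    return context, trail

context, trail = assemble_complaint_context("CMP-2026-0412")
print(json.dumps(trail[0], indent=2))  # what a reviewer traces back to
```

The draft the AI produces matters less than the trail beside it: that trail is what lets a reviewer tie each claim back to a specific approved revision.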

Same principle applies in financial services (loan underwriting AI that can't see the full credit policy), manufacturing (predictive maintenance tools disconnected from change control history), and legal (contract review AI that doesn't know which clauses your firm has already negotiated away in precedent deals).

The Practical Framework

Before your next AI initiative goes into scoping, run it through three questions:

1. What would a senior human expert actually consult to do this task? List every source — systems, documents, institutional knowledge. That's your required data foundation.

2. Which of those sources are clean, current, and authoritative? Be honest. A source that's 70% right is worse than no source for a regulated output.

3. What's the audit and provenance requirement for the output? Work backward from what compliance needs to see. Build traceability in from day one, not as a retrofit.
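To work backward from question 3, here is one way traceability can be built in from day one: a release gate that blocks any draft the compliance team couldn't trace to an approved source. The Citation shape and the AUTHORITATIVE set are assumptions for this sketch; in practice they would come out of your question-1 inventory.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    system: str    # system of record consulted
    doc_id: str
    revision: str  # the approved revision, not "latest"

@dataclass
class Draft:
    text: str
    citations: list[Citation]

AUTHORITATIVE = {"doc_control", "complaint_mgmt"}  # from the question-1 inventory

def passes_provenance_gate(draft: Draft) -> bool:
    """Block any draft that compliance could not trace to an approved source."""
    return bool(draft.citations) and all(
        c.system in AUTHORITATIVE and c.revision for c in draft.citations
    )

draft = Draft("Complaint classified per SOP-VIG-004 ...",
              [Citation("doc_control", "SOP-VIG-004", "B")])
print("release to reviewer" if passes_provenance_gate(draft)
      else "block: missing provenance")
```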

The model is the easy part. It's a commodity now. The foundation is where the real work — and the real ROI — lives.

If your AI project stalled or failed, don't start by questioning the vendor. Start by asking what the AI was actually working with. Nine times out of ten, that's where you'll find the answer.

Dealing with a similar challenge?

We work with mid-market companies in regulated industries to build AI workflows that actually hold up.

Let's Talk

Sean Cummings

Founder of Laminar Flow Analytics. Specializes in AI workflow automation for regulated industries — medical device, financial services, and complex logistics operations.
