AI Governance · Financial Services · Model Risk Management · Compliance · AI Workflows

AI Governance Isn't a Compliance Project. It's an Operating Decision.

Sean Cummings
March 24, 2026 · 6 min read

Most financial services firms are building AI governance programs that look good on paper and collapse under operational weight. Here's what actually works.


Every mid-market financial services firm we talk to right now has the same problem. They've deployed AI — or they're close to it. Someone in risk or compliance has flagged it. Now there's a working group, a draft policy, and a steering committee that meets monthly.

None of that is governance. That's governance theater.

Real AI governance is an operating decision with teeth. It determines which AI use cases get built, in what order, with what controls, and what happens when something breaks. If your governance model doesn't constrain actual decisions — about deployment timelines, vendor selection, model updates — it's a document, not a system.

The industry is waking up to this distinction, slowly. Frameworks like SR 11-7 have governed traditional models for over a decade. But generative and agentic AI don't fit neatly into that box. The risk profile is different. The failure modes are different. And the pace of change means last quarter's deployment is already outdated.

So what do you actually do?

Start With Scope, Not Policy

The first mistake most firms make is writing a governance policy before they've made scope decisions. Do generative and agentic AI fall under your existing Model Risk Management framework? Or do they need a separate track while you figure that out?

This isn't a rhetorical question. It has operational consequences. If you route every GPT-powered workflow through your full MRM process, you'll kill the initiative before it ships. If you route nothing through MRM, your examiners will have questions.

The right answer for most mid-market firms: a tiered approach. High-stakes use cases — credit decisioning, fraud detection, customer-facing recommendations — get the full treatment. Lower-stakes internal automation gets a lighter-touch review with clear escalation triggers. The tier boundaries need to be defined explicitly, documented, and tested against real examples before you finalize them.
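To make "defined explicitly, documented, and tested against real examples" concrete, tier boundaries can be written as executable rules instead of prose. The sketch below is illustrative only; the trigger names, tier labels, and function signature are hypothetical, not a compliance standard:

```python
# Hypothetical tiering rules -- illustrative sketch, not a compliance standard.
HIGH_STAKES_TRIGGERS = {
    "credit_decisioning",
    "fraud_detection",
    "customer_facing_recommendation",
}

def assign_tier(use_case: str, customer_facing: bool, affects_credit: bool) -> str:
    """Map an AI use case to a review tier using explicit, testable triggers."""
    if use_case in HIGH_STAKES_TRIGGERS or affects_credit:
        return "tier_1_full_mrm"         # full Model Risk Management review
    if customer_facing:
        return "tier_2_enhanced_review"  # lighter review with defined escalation triggers
    return "tier_3_standard_review"      # internal automation, lightweight checks

# Test the boundaries against real examples before finalizing them:
assert assign_tier("loan_origination_copilot",
                   customer_facing=False, affects_credit=True) == "tier_1_full_mrm"
```

The point of writing it this way is that a boundary dispute becomes a failing assertion instead of a committee debate.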

If your governance committee is still debating whether a loan origination co-pilot is "the same as a model," you're already behind.

Controls Embedded in the Lifecycle, Not Bolted On at the End

Here's the pattern we see constantly: a team builds an AI workflow, gets it to a working prototype, and then hands it to compliance for review. Compliance finds issues. The team has to rebuild. Months lost.

This happens because organizations treat AI controls like a final inspection rather than an integrated quality process. The fix isn't faster compliance review. It's embedding control checkpoints at every stage — scoping, data sourcing, model selection, testing, deployment, and ongoing monitoring.

Each stage should have a defined exit criterion. Not "compliance approved it" but specific, testable conditions: bias evaluation complete, data lineage documented, drift monitoring configured, incident response runbook written and reviewed.
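One way to make exit criteria "specific and testable" is to encode them as a checklist a stage cannot exit without. This is a minimal sketch under assumed names; the stage and criterion labels below are hypothetical examples drawn from the conditions above:

```python
# Hypothetical lifecycle gate -- stage names and criteria are illustrative placeholders.
EXIT_CRITERIA = {
    "testing": [
        "bias_evaluation_complete",
        "data_lineage_documented",
    ],
    "deployment": [
        "drift_monitoring_configured",
        "incident_response_runbook_reviewed",
    ],
}

def gate_passed(stage: str, completed: set[str]) -> bool:
    """A stage exits only when every named criterion for it is checked off."""
    return all(criterion in completed for criterion in EXIT_CRITERIA.get(stage, []))
```

A workflow that tracks gates this way produces its own audit trail: the record of which criteria were satisfied, and when, is the documentation.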

This is what "automating compliance gates" actually means in practice. Not a software tool that magically handles it — a process design where controls are baked in, not added on. The documentation writes itself because the work was done right.

Monitoring Is Where Most Programs Fall Apart

Deployment is not the finish line. It's mile one.

AI models degrade. Inputs shift. User behavior changes in ways that expose edge cases your testing didn't catch. For financial services firms specifically, the regulatory environment itself is moving — what was acceptable last year may not be next year.

Most mid-market firms have no systematic monitoring in place for their AI workflows. They know the model is running. They don't know if it's still running well.

Minimum viable monitoring for a production AI workflow in a regulated environment includes: output sampling with human review on a defined cadence, performance metric tracking against baseline, an alert threshold that triggers escalation, and a clear process for what happens when escalation fires. If you can't describe all four of those things for each AI system you've deployed, you're flying blind.
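The "alert threshold that triggers escalation" piece can be as simple as a comparison against baseline. The metric, threshold value, and function name below are placeholders for illustration, not a recommended standard:

```python
# Hypothetical drift alert -- metric and threshold are placeholder assumptions.
def needs_escalation(baseline_accuracy: float,
                     current_accuracy: float,
                     alert_threshold: float = 0.05) -> bool:
    """Fire escalation when performance drops more than the defined threshold
    below the baseline recorded at deployment."""
    return (baseline_accuracy - current_accuracy) > alert_threshold
```

The check itself is trivial; what matters is that the threshold is written down before launch, and that firing it triggers the fourth component: a defined process for who reviews, how fast, and when the system gets pulled.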

The Practical Framework

If you're building or rebuilding AI governance at a mid-market financial institution, here's the sequence that actually works:

1. Make scope decisions first. Decide explicitly how generative and agentic AI map to your existing MRM framework. Write it down. Get sign-off from risk leadership.

2. Build a use case tier registry. Every AI use case gets categorized by risk level. Define what triggers each tier. Apply it consistently.

3. Map controls to lifecycle stages. For each tier, define what controls apply at each stage of development and deployment. Make them specific and testable.

4. Stand up monitoring before you ship. Non-negotiable. If you can't monitor it, you're not ready to deploy it.

5. Run a tabletop on incident response. Before your first production incident, walk your team through what happens when an AI system produces a bad output at scale. Who decides to pull it? How fast? What do you tell regulators?

Governance built this way doesn't slow innovation. It gives your teams a clear lane to move in. The firms that get this right will outpace their peers — not because they took more risk, but because they took the right risks with their eyes open.

That's the operating decision worth making.

Dealing with a similar challenge?

We work with mid-market companies in regulated industries to build AI workflows that actually hold up.

Let's Talk

Sean Cummings

Founder of Laminar Flow Analytics. Specializes in AI workflow automation for regulated industries — medical device, financial services, and complex logistics operations.
