DEV Community

Cover image for How Governance and Compliance Saves The World... lolwut?
Ben Link
Ben Link

Posted on

How Governance and Compliance Saves The World... lolwut?

If this isn’t your first Adventure of Blink, you’re probably a little shocked by today’s title.

(And if you ARE new... welcome, and buckle up! You picked a fun one!)

Look, I don’t usually hold governance and compliance in high esteem. When these functions sprawl unchecked (and believe me, they excel at being unchecked!), they stop being safeguards and start becoming entropy accelerators. “Security theater.” Rubber-stamped checklists. Layers of paperwork: vulnerability reports, mitigation reports, exception reports, justification chains, custody trails, all breeding like rabbits in a SharePoint folder, causing the company to collapse under its own weight.

But You Said They Save The World

I did. And they might.

Because this time the threat isn’t a bad audit or a missed control... It’s AI.

And the coming AI arms race is about to crash head-first into the one group inside most companies that still has both the instinct and the mandate to pump the brakes. Governance teams can’t control AI in its current form... so they’ll resist it. And as the hype cycle inflates beyond reason, that resistance might be the only thing keeping your organization from leaping off the nearest AI cliff.

The Collision Course

Whether you're an AI fanboy or the kind of person who destroys your toaster with a baseball bat if it makes an unexpected noise, you've gotta admit we're in a gold-rush sort of scenario right now. Every tech news source is talking about AI and language models and MCP servers and tokens and context windows. AI takes your order at the fast-food drive-through window. LinkedIn is full of bots that drop by your posts to congratulate you on your insightful comment, cheapening the platform by posting inane drivel.

On the other hand, there's your Governance and Compliance teams. Throughout history, they've been consistently faithful to pump the organization's brakes on anything new or innovative.

Who's going to win?

Why AI Panics Governance (And Why They're Not Wrong)

For years, we've complained that Governance teams used the "Chicken Little" defense. They could show up in a senior exec briefing, sow a little panic that Agile or DevOps or Continous Deployment was "insecure", and derail months of work that had been done to initiate some organizational evolution.

Chicken Little: The Sky Is Falling

But THIS time... the risks might actually justify an increased level of caution, because on a societal level we are about to put something we don’t understand in charge of something we can’t undo.

XKCD explaining machine learning

AI breaks every control these teams rely on to keep a company from becoming a headline. Let's look at some examples:

AI is a black box with vibes.

Traditional systems have code you can inspect, logs you can audit, and behavior you can explain. AI systems do whatever they want, and then shrug in JSON (or TOON 🤣). Governance teams can’t work with “the model felt like it.”

There’s no clean chain of custody.

Data provenance is foundational to compliance. AI models, meanwhile, are built on training sets scraped from the internet, customer telemetry, or “whatever was lying around in the S3 bucket at the time.”
Try writing an audit finding for that.

Outputs aren’t deterministic.

Governance loves repeatability.
AI loves... interpretive dance.
You can run the same prompt twice and get two different answers. That breaks every expectation around reliability, traceability, and root-cause analysis.

The regulatory landscape is made of wet cardboard.

Governance teams like knowing the rules before the game starts. AI regulation, on the other hand, is being drafted in real time, with definitions like “systemic risk” and “high-impact AI” that shift every month. No one wants to sign off on tech that might be retroactively illegal.

AI can leak data without even realizing it.

Not maliciously. Just enthusiastically.
Models autocomplete their way into revealing sensitive data, mixing private and public context like a toddler making potions in the bathtub.

From a governance perspective, this isn’t “new tech.”
It’s a risk engine, wired directly into the company’s most critical systems, operated by people who can barely articulate what the system is doing.

Meanwhile, The Business Wants Magic

The wild part about this AI hype cycle is its amplitude. We aren't just excited, we're pushing all the chips in as fast as we can.

To the business, AI isn’t a black box. It’s the Productivity Fairy.

AI promises shortcuts through every hard problem.

  • Don’t understand your data? AI will summarize it.
  • Don’t have a process? AI will hallucinate one.
  • Don’t want to do the work? AI will happily pretend to.

Why bother with six months of analysis when a chatbot can spit out an answer that sounds confident?

Every vendor pitch is a siren song.

  • Slide decks are full of numbers like “312% ROI” and “automatically fixes 97% of defects.”
  • The demos are pure theater: hand-picked prompts, cherry-picked outputs, smoke, mirrors, and a little stage magic.
  • Executives see this and think, “Why don’t we have this?”

FOMO becomes a strategic priority.

No one wants to be “the company that missed the AI wave.”

  • Boards ask about AI readiness.
  • Competitors announce AI features that definitely don’t work yet.
  • Suddenly everything is “AI-enabled,” including things that absolutely should not be.

From a distance, the business looks energized, innovative, and bold. We're delivering faster than ever!
Up close, though, there's something terrifying happening:
a crowd sprinting toward the edge of a cliff, convinced they’re about to take flight.

That’s the moment governance walks in, holding a stop sign no one asked for... but everyone might desperately need.

Why Governance and Compliance Might Be the Heroes

Here’s where I think the plot twists...

Governance isn’t blocking everyone’s progress.
They’re buying us the one thing engineers almost never get in a tech arms race: time to think.

When you’re in build-at-all-costs mode, you stop noticing the terrain. You fixate on the target: features, deadlines, slide-deck promises, and the big picture disappears. Risk perception narrows. Assumptions harden. Everyone becomes so focused on shipping that no one stops to ask the question that actually matters:

“Should we be doing this at all?”

Jeff Goldblum in Jurassic Park as Ian Malcolm... your scientists were so preoccupied with whether they could that they didn't stop to ask if they should

And that’s exactly where your friends in Governance and Compliance step in.

They’re the only group in the company that’s trained to say “no” when everyone else is emotionally invested in a “yes.”

  • They don’t get caught up in the hype cycle.
  • They don’t care how shiny the demo was.
  • They aren’t impressed by a vendor promising “AI-powered transformation” in Q3.

Governance sees the cliff before the crowd does, because they’re the only ones looking down while everyone else is looking forward. Their “no” might be annoying, frustrating, bureaucratic; but in the AI gold rush, it might also be the only thing preventing the company from sprinting straight into a disaster it can’t unwind.

Sometimes the hero shows up wearing a cape... sometimes they show up holding a checklist.

And if you’re building with AI right now? You might actually want the checklist.

Good Governance in the AI World

That's not to say they should just plant their flag, say "no" and put their fingers in their ears. They're right to pump the brakes and make us work through things logically, but we don't want them to end up back in the stereotypical "red tape generator" mode.

We want "Governance" to equal "Guardrails".

So how can they help more effectively?

Model Registries and Documented Provenance

(Where did this thing come from?)

A model isn’t just a piece of software; it’s a snapshot of everything it has ever seen. Good governance insists on knowing:

  • who trained it,
  • on what data,
  • under what license,
  • and whether that dataset accidentally included the CEO’s GitHub from 2007.

No provenance, no production.

Risk-Tiering Based on Impact

Not all AI use cases are created equal.

A chatbot that helps rewrite internal documentation?
Low-risk.

An AI system deciding who gets a loan, a promotion, or medical attention?
That’s “stop the meeting, bring snacks, we’re going to be here a while” risk.

Good governance doesn’t treat every model like a nuclear reactor, but it knows which ones are.

Approval Workflows That Reflect Reality

  • No 47-step PDF signatures.
  • No “upload to SharePoint and wait six weeks.”

AI governance works when it’s fast enough to keep experimentation alive, but strong enough to stop something dangerous from shooting straight to production.

Think “guardrails,” not “traffic jam.”

Human-in-the-Loop Requirements

AI cannot be allowed to make irreversible decisions on its own.
Good governance makes sure a human reviews:

  • high-impact outputs,
  • strange outputs,
  • or anything that smells like it was written by a model having a weird day.

Humans stay accountable. AI stays supervised.

Continuous Monitoring and Drift Detection

This is the part governance teams will actually love: ongoing checks, logs, dashboards, and evaluations. Models change over time. So should the controls.

Good governance treats AI like a system that evolves, not a launch-once-and-forget feature.

Red-Teaming and Adversarial Evaluation

Every AI system needs someone whose job is to break it on purpose.

  • Prompt it into revealing sensitive information.
  • Stress it with weird edge cases.
  • Mess with inputs until something unhinged pops out.

If a model can be tricked by an intern with ten minutes and a mischievous streak, it shouldn’t be trusted to handle customer data.

What Could Go Wrong?

There are two potential vulnerabilities in our Governance and Compliance teams' heroism opportunity; their "Achilles' Heels", if you will. And it's ironic, but their own undoing might just be found in their biggest stereotypes!

Shadow AI

For decades, governance teams have fought against "Shadow IT" - unapproved systems that the organization created in order to get things done. Developers have had many years of practice in hiding things from Governance (heck, we've even celebrated Shadow IT here on the Adventures of Blink!)

So don't be surprised when we start to see AI solutions do the same thing:

  • Unapproved model APIs.
  • Teams quietly piping sensitive data into third-party services “just to see what happens.”

By the time governance hears about it, half the org is already doing unsupervised R&D with confidential information.

The Siren's Call

Governance might end up being their own worst enemy in another way: most of the governance and compliance paperwork that we've all complained about involves a lot of words. Guess what LLMs are great at generating?

There's going to be extreme temptation to have AI complete the governance and compliance paperwork... "hey GPT, write me a justification for this Risk Acceptance". And just like that, Governance has fallen to the Power of AI.

Aragorn, one by one they fell into darkness

How to Avoid the AI Cliff

The AI cliff isn’t a single moment, but a pattern: small shortcuts, ignored warnings, “temporary experiments,” vague assumptions, and a long list of “we’ll fix it later.”

You don’t fall all at once. You wander right up to the edge.

Avoiding the cliff isn't accomplished through heroics. You have to build habits. Here’s what it looks like in real life:

  • Slow Down Just Enough to Notice the Warning Signs. You don’t need to block innovation, just to pause long enough to ask:

    • What happens if this output is wrong?
    • What data are we feeding this?
    • Would I be comfortable explaining this system to a regulator? If the answer is “uhhh…,” you’re facing the cliff.
  • Treat AI Like an Intern, Not a Prophet. AI is helpful, fast, occasionally brilliant... and totally untrustworthy without supervision. Give it tasks that can be checked. Give it boundaries. Never give it the final decision on anything important.

  • Keep Humans Accountable. A model cannot take responsibility. A person can. Every AI-assisted workflow needs a clear owner: the person who signs their name under “I approve this” and actually means it.

  • Make Experiments Traceable. Shadow AI usage is how cliffs happen. People quietly plug systems into model APIs “just for testing,” and suddenly sensitive data is flowing places it shouldn’t. You don’t need centralization: you need visibility. A simple log of who’s experimenting with what will save you later.

  • Favor Guardrails Over Gates. Instead of blocking all AI until the 900-page policy is done, build lightweight constraints:

    • Allow low-risk AI tasks immediately.
    • Require review for medium-risk use cases.
    • Demand formal approval for high-impact automation.

Keep innovation alive, while also keeping the company alive.

  • Reward Good Questions, Not Just Fast Results. Culture determines how far people will run. If leadership only celebrates speed, teams will sprint right off the cliff. If leadership celebrates thoughtful decisions, responsible adoption becomes the default.

  • Ask the Only Question That Really Matters. Before you integrate an AI system anywhere, ask:

“What’s the worst-case scenario… and am I prepared to own it?”

If you can answer that honestly, you’re already safer than 90% of organizations right now!

The Real Takeaway

Governance isn’t perfect... It often goes too far. There's an unhealthy fixation on processes and documents that has served to estrange them from delivery teams.

But in the AI era, their caution may be the only thing that keeps the company from doing something catastrophically shortsighted.

If governance is the seatbelt, engineering’s job is to design a safer car, not rip the seatbelt out.

When Governance appears and puts a roadblock in the way of your AI initiative...

Try thanking them.

Top comments (0)