Mikail Kocak

Controlling AI Sprawl in a Startup Environment

You've probably felt it: new AI tools pop up constantly, engineers experiment with running multiple agents concurrently, and the marketing team is adopting tools of its own.

It can feel overwhelming, even impossible, especially in a startup or small business with a limited budget and resources. Meanwhile, the AI space evolves fast, and security teams can be left in the dust.

You might even think:

"If companies 10x or 50x bigger than mine are struggling, how can my startup even have a chance?"

You are not behind, you are not failing, and you aren't alone; company size doesn't matter.

Worry shows up when we believe we have no control over the outcome. So ask yourself: "How do I gain control?" This is an incremental process; you don't need perfection.

In this article, I'll share what I've learned about taming what I call AI sprawl, without burning out you or your team, and, importantly, without stopping innovation.

So let's see how you can surf the AI wave!

The Challenges

First of all, let's take a look at the key issues startups and small businesses are facing:

  1. Shadow AI - if you know the term "shadow IT", this will sound familiar: it's the use of unsanctioned AI tools inside the organization, including AI providers, chatbots, local models, tools, extensions, skills, MCP servers, agents, and more.

    The ecosystem is constantly expanding: new tools appear faster than policies can be written, and that speed can feel intimidating. But speed itself isn't the enemy; the lack of visibility is.

  2. Shadow Cloud - Shadow AI rarely exists alone. It usually comes bundled with unsanctioned cloud storage (OneDrive, Proton Drive, Google Drive, …), personal cloud computing accounts (AWS, GCP, Azure), SaaS platforms (Notion, Grammarly, etc.), and personal VPS or compute instances.

    Most of the time this isn't malicious; it's about convenience. But it creates blind spots: we neither see nor know what is leaving our environments, credentials are stored in unknown systems, and security is never reviewed. It's a black box. The issue isn't experimentation; it's unchecked experimentation.

  3. The visibility problem - when Shadow AI and Shadow Cloud expand unchecked, the organization starts losing situational awareness. You don't know where data is flowing, what or who is processing it, or what is connected to what.

    All this uncertainty is very dangerous.

  4. The startup culture (this is not a bug!) - startups are full of builders and enthusiastic people: they experiment, they optimize, they automate. Security losing sleep over innovation is a sign of growth, not failure.
    And what's most important: security must be an enabler, not a blocker.

  5. Tool overload - there are so many tools and so much noise that it's hard to know what to focus on.

  6. Constrained resources - startups have very limited people, time, and budget. That's normal, and it's part of the fun of startups.

  7. Agentic systems are junior employees at machine speed - they make decisions, call APIs, trigger workflows, and modify data very quickly.

Now, the question is: what do we actually do? Controlling AI sprawl isn't about building a fortress overnight; it's about gradually regaining control, empowering your employees, and creating safe, flexible guardrails.

I recommend a four-step approach:

Step 1: Talk.


The first step can be summed up in one word: talk.

Your goal is not to restrict employees or to slow down innovation, but to understand why they are using these tools and how you can create a safe environment together. On top of that, you might be surprised by the ideas and solutions the teams already have.

This is critical for three main reasons:

  1. Trust - by engaging, you demonstrate that you are an ally. Without trust, employees may try to bypass the controls you implement, which defeats the purpose.
  2. Insights - you will learn which tools people use, why they use multiple tools, and which workflows are critical. Document these insights, but remember that AI evolves quickly, so this must be a living document.
  3. Champions & feedback lines - you are indirectly establishing feedback lines, and at the same time identifying potential AI security champions. Look for employees who are passionate about AI safety. They can become your "Safe AI Use Committee", or the champions who help you build and iterate on a robust AI security program.

Some employees can be surprisingly passionate about AI; you might be in for a surprise!

Step 2: Governance

Once you understand the landscape, you can start building policies. Policies are among your most powerful tools: they are low-cost and flexible.

Focus on directives rather than technical controls at this stage: implementing controls is time-consuming and costly, and it makes experimentation difficult. Directives, on the other hand, reduce risk quickly.

To start, perform a simple risk assessment:

  • What are the key risks?
  • How could they materialize?
  • What's the impact?
  • What directives can reduce the likelihood and severity?
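The assessment above can be sketched as a tiny risk register, where each risk gets a likelihood, an impact, and a mitigating directive. The risks, scores, and directives below are hypothetical examples, not a prescribed list:

```python
# Minimal risk register: score = likelihood x impact, highest risk first.
# All entries are made-up examples; replace them with your own assessment.

def score(risk):
    """Combine likelihood and impact (1-5 each) into a single risk score."""
    return risk["likelihood"] * risk["impact"]

risks = [
    {"name": "Sensitive data pasted into unsanctioned chatbots",
     "likelihood": 4, "impact": 5,
     "directive": "Only approved AI tools may process customer data"},
    {"name": "Credentials stored in personal cloud accounts",
     "likelihood": 3, "impact": 4,
     "directive": "All credentials live in the company secret manager"},
    {"name": "Unreviewed MCP server granted repository access",
     "likelihood": 2, "impact": 4,
     "directive": "New MCP servers require a security review"},
]

# Print the register ordered by risk, highest first.
for risk in sorted(risks, key=score, reverse=True):
    print(f"{score(risk):>2}  {risk['name']} -> {risk['directive']}")
```

A spreadsheet works just as well; the point is making likelihood, impact, and the mitigating directive explicit for each risk so you can prioritize.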

Engage your AI enthusiasts in this step; they often have insights that make policies more practical and easier to adopt.

Finally, define metrics to track the effectiveness of your policies and monitor risk levels over time.
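A metric can be as simple as the share of AI usage that goes through sanctioned tools. Here is a minimal sketch; the tool names and usage events are made up for illustration:

```python
# Hypothetical policy metric: fraction of AI usage events that went
# through a sanctioned tool. Tool names and events are example data.

SANCTIONED = {"Claude (enterprise)", "Internal RAG assistant"}

def sanctioned_share(usage_events):
    """Return the fraction of events whose tool is on the sanctioned list."""
    if not usage_events:
        return 1.0  # no usage, nothing unsanctioned
    hits = sum(1 for e in usage_events if e["tool"] in SANCTIONED)
    return hits / len(usage_events)

events = [
    {"tool": "Claude (enterprise)", "user": "alice"},
    {"tool": "Random browser extension", "user": "bob"},
    {"tool": "Internal RAG assistant", "user": "carol"},
    {"tool": "Claude (enterprise)", "user": "dave"},
]

print(f"Sanctioned usage: {sanctioned_share(events):.0%}")  # prints "Sanctioned usage: 75%"
```

Track a number like this month over month; the trend matters more than the absolute value.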

Step 3: Keep Monitoring & Iterating

Once the policies are in place, keep monitoring and iterating using the metrics you defined.

Remember: policies aren’t "set and forget".

Also, keep your feedback lines open: regularly check in with team leads, employees with influence, and your champions.

Again: talking is your greatest tool. It builds trust and ensures employees see you as an enabler, not a blocker.

Step 4: Technical Controls

Once you have a solid and stable foundation, you can start investing in technical controls. These tools help enforce your policies at scale and further reduce (or even eliminate) the identified risks.

For example, startups may consider:

  • EDR/XDR - for example, Wazuh if you are on a tight budget.
  • Enterprise plans for AI tools such as Claude; they give you a lot of control.
  • Scanning tools for MCP servers, AI skills, models, etc. - companies like Socket and Snyk are building excellent (and open-source) tooling around AI: use it.
  • Custom AI configurations - for example, the ~/.claude folder lets you manage global settings, create custom security hooks, write security-focused AI skills, and so on.
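As an illustration of that last point, Claude Code supports hooks that run before a tool call, which you can use to enforce parts of your policy. The sketch below is a hypothetical PreToolUse hook that blocks shell commands matching a simple deny-list. The exact settings format and hook payload are version-dependent, so check the current Claude Code hooks documentation before relying on this; the deny-list patterns and file paths are my own examples:

```python
# Hypothetical PreToolUse hook for Claude Code. Example wiring in
# ~/.claude/settings.json (format may vary by version):
# {"hooks": {"PreToolUse": [{"matcher": "Bash",
#   "hooks": [{"type": "command", "command": "python3 ~/.claude/hooks/deny.py"}]}]}}
import json
import re
import sys

# Example deny-list; tune it to your own policy.
DENYLIST = [
    r"curl[^|]*\|\s*(ba)?sh",   # piping remote scripts straight into a shell
    r"\.aws/credentials",       # touching local cloud credentials
    r"\.env\b",                 # reading environment/secret files
]

def should_block(command: str) -> bool:
    """Return True if the shell command matches any deny-list pattern."""
    return any(re.search(pattern, command) for pattern in DENYLIST)

def main() -> int:
    event = json.load(sys.stdin)  # hook payload delivered by Claude Code
    command = event.get("tool_input", {}).get("command", "")
    if should_block(command):
        print(f"Blocked by security hook: {command}", file=sys.stderr)
        return 2  # exit code 2 tells Claude Code to block the tool call
    return 0

# In a real hook script, end with: sys.exit(main())
```

At the time of writing, exit code 2 is what blocks the call, and stderr output is surfaced back as the reason; verify both against the current documentation for your version.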

So, remember: this is a leadership moment. If you approach AI with fear, you will burn out. Approach it with curiosity instead.

Your friends are:

  • Talk
  • Governance
  • Your colleagues and champions

And lastly: size doesn't matter. You don't need to be a tech giant to tame AI.
