Jamie Thompson, Founder & CEO, Sprinklenet AI
I've had the same conversation six times in the last two months. Different companies, different verticals, different revenue levels. But the same question every time.
"Do we bolt AI onto what we have, or do we start over?"
If you run a SaaS company with real customers and real revenue, this is the most consequential decision you'll make in 2026. Get it right and you accelerate. Get it wrong and you spend 18 months building something your customers don't want, while a competitor eats your lunch.
Here's how I think about it.
The Case for Embedding
You have something most startups would kill for: an installed base. Customers who pay you money every month. Relationships. Data. Workflows that people depend on.
That is not nothing. That is a beachhead.
The smart version of the embed strategy looks like this. You decompose your platform into its core functional areas. Each one gets its own AI layer, purpose-built for that domain. Then you put a master agent on top that orchestrates across all of them. Your customers get AI capabilities inside the product they already use, without switching costs, without migration pain, without retraining their teams.
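To make the shape of that architecture concrete, here is a minimal sketch of a master agent routing requests to purpose-built domain agents. Every name here (the domains, the handlers, the `MasterAgent` class) is hypothetical illustration, not anyone's actual product:

```python
# Hypothetical sketch: domain agents under an orchestrating master agent.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class DomainAgent:
    """An AI layer purpose-built for one functional area of the platform."""
    name: str
    handle: Callable[[str], str]  # in practice, a model call plus domain tooling


@dataclass
class MasterAgent:
    """Orchestrates across domain agents: routes each request to the
    agent that owns that functional area."""
    agents: Dict[str, DomainAgent] = field(default_factory=dict)

    def register(self, domain: str, agent: DomainAgent) -> None:
        self.agents[domain] = agent

    def route(self, domain: str, request: str) -> str:
        agent = self.agents.get(domain)
        if agent is None:
            return f"no agent registered for domain '{domain}'"
        return agent.handle(request)


master = MasterAgent()
master.register("billing", DomainAgent("billing", lambda r: f"billing agent handled: {r}"))
master.register("support", DomainAgent("support", lambda r: f"support agent handled: {r}"))
```

The point of the shape, not the code: each domain agent can ship independently, and the master agent is the only piece that knows the whole map.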
This is the strategy that preserves revenue. It respects the fact that your customers chose you for a reason. And it lets you move incrementally, shipping value every sprint instead of disappearing into a cave for a year.
I've seen this work well when the underlying architecture is reasonably modern, when there's a clear API layer, and when the team has the discipline to treat each AI integration as a product decision, not a science experiment.
The Case for Going Native
Now here's the other side.
OpenAI, Anthropic, Google, xAI, Meta. These companies have billions of dollars, tens of thousands of engineers, and they are building platforms that overlap with yours. Every single one of them is expanding into adjacent capabilities. The pace of innovation is unlike anything I've seen in 25 years of building software.
If your product was built in 2015 on a monolithic architecture with years of technical debt baked in, embedding AI into it is like putting a turbocharger on a car with a cracked engine block. You can do it. It will even go faster for a while. But the foundation won't hold.
Sometimes you need to burn the past.
Going AI native means designing from the ground up around what foundation models can do today and what they'll be able to do in six months. It means building your product as an orchestration layer, not a feature set. It means accepting that the model will handle 80% of what your engineers used to build manually, and your job is to own the 20% that makes you irreplaceable.
This path is faster if you have the courage to take it. But it requires abandoning code, processes, and sometimes people that got you to where you are.
The Real Answer
Most companies will do a hybrid. That's fine. In fact, it's probably the right call for the majority.
But the companies that win will be the ones who are ruthlessly clear with themselves about what's working and what isn't.
Here's what I mean. You'll start embedding AI into your existing platform. Some of those integrations will land beautifully. Customers will love them. Usage will spike. Revenue will follow. Keep those. Double down.
Other integrations will feel forced. They'll require so much scaffolding and workaround code that your engineers spend more time fighting the legacy architecture than building AI features. Those are the ones you need to cut.
And this is where most companies fail. They hang on too long. They keep pouring resources into the embed strategy for a module that should have been rebuilt from scratch three months ago. They do it out of loyalty to the team that built it, or out of fear of writing off the sunk cost, or because the VP who owns that module has a loud voice in the leadership meeting.
Stop it.
The hybrid strategy only works if you're willing to be brutal about which parts get the embed treatment and which parts get rebuilt natively. If you try to embed everywhere, you'll move too slowly. If you try to go native everywhere, you'll break too much. The skill is in making the cut correctly and making it fast.
What Actually Matters
Here's the thing nobody wants to hear. The window for differentiation is closing.
The foundation models are getting better every quarter. Capabilities that were your competitive advantage six months ago are now available through an API call. The models will keep improving. You cannot build a moat on the AI itself.
So what do you build a moat on?
Three things.
Domain expertise. You know your customer's workflow better than OpenAI does. You know the edge cases, the regulatory requirements, the integration points with their other systems. That knowledge is your moat, but only if you encode it into your product fast enough.
Proprietary data. If your platform generates or captures data that nobody else has, that's gold. But only if you're using it to fine-tune, to build retrieval systems, to create feedback loops that make your AI better than the generic version. If you're sitting on data and not weaponizing it, someone else will find a way to replicate it.
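The retrieval idea above can be sketched in a few lines. This toy version scores stored records by token overlap with a query; a real system would use embeddings and a vector store, and all the sample documents here are invented for illustration:

```python
# Toy sketch of retrieval over proprietary records: pick the stored
# record that shares the most tokens with the incoming query.
# Standard library only; a production system would use embeddings.

def tokenize(text: str) -> set:
    return set(text.lower().split())


def retrieve(query: str, records: list) -> str:
    q = tokenize(query)
    # return the record with the largest token overlap with the query
    return max(records, key=lambda r: len(q & tokenize(r)))


docs = [
    "refund policy for enterprise billing customers",
    "API rate limits for the reporting endpoint",
    "SSO configuration steps for Okta",
]
best = retrieve("billing refund policy question", docs)
```

Feed the retrieved record into the model's context, log whether the answer helped, and you have the start of the feedback loop that generic model access can't replicate.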
Integration depth. The company that is wired deepest into a customer's operations is the hardest to rip out. Every API connection, every workflow automation, every data pipeline is a thread that binds you to your customer. Build more threads.
Speed is the meta strategy. Whoever moves fastest to deliver real, tangible, unique value on top of the major AI models wins. Not whoever has the best pitch deck. Not whoever raises the most money. Whoever ships.
One More Thing
This is exactly the kind of strategic question I help companies answer. At Sprinklenet, I serve as a fractional Chief AI Officer for companies navigating these decisions. Not with a 200 page consulting report that sits on a shelf, but by working alongside your leadership team to make these calls in real time, with skin in the game.
If you're a CEO wrestling with this question, I'm always happy to compare notes.
Jamie Thompson is the Founder and CEO of Sprinklenet AI, where he builds enterprise AI platforms for government and commercial clients. He writes weekly at newsletter.sprinklenet.com.