By Robert Kirkpatrick | TotalValue Group LLC
In 1964, Ford Motor Company showed up to Le Mans with a pile of money, the best engineers a blank check could attract, and the full weight of one of the world's largest automakers behind them. They had a purpose-built racing car. A motorsport budget that would make your eyes water. Absolute confidence they were going to win.
They lost. To Ferrari. In 1965, they lost again.
So Henry Ford II found Carroll Shelby.
Shelby wasn't richer than Ford. He didn't have a factory or a research department or a hundred engineers on staff. What he had was something simpler: the ability to look at an engine Ford already owned and see what it could do if everything around it was engineered correctly. He took what existed, built the right system around it, and the GT40 swept Le Mans four years running, 1966 through 1969, with Shelby's team taking the first two outright.
That story has been bouncing around in my head for months. It's the only accurate way I know to describe what's happening with AI right now.
Most companies are Ford circa 1964. They have the budget. The infrastructure. The compute. The vendor contracts. They have everything except the Shelby part. And it's costing them.
The Numbers That Should Embarrass Every Boardroom in America
I'm a data analyst. I don't make arguments from feeling. So here's what the actual research says about what companies are getting for the hundreds of billions they're pouring into AI.
MIT published a report tracking enterprise generative AI projects and found a ninety-five percent failure rate. Not "didn't quite hit targets." Not "fell short of hopes." No measurable financial return within six months. Ninety-five percent.
S&P Global found that forty-two percent of companies scrapped most of their AI initiatives in 2025. The year before, that number was seventeen percent. The scrapping rate more than doubled in a single year, even as the budget kept climbing.
IDC says eighty-eight percent of AI proof-of-concepts fail to make it to production. Large enterprises lost an average of seven-point-two million dollars per failed initiative. The average company abandoned two-point-three of them in 2025 alone.
Meanwhile, Gartner projects that enterprise spending on AI application software will nearly triple to two hundred seventy billion dollars in 2026. The spending is accelerating. The failure rate isn't moving.
Here's the part that gets me. McKinsey dug through the data and found that only six percent of organizations qualify as "high performers," meaning they're actually capturing significant value from AI. Six percent. The other ninety-four percent are burning capital while watching case studies about the six percent.
If a traditional business had a ninety-four percent failure rate, we'd call it a crisis. In AI, we call it "an exciting time of transformation."
Small Businesses Have a Different Kind of Problem
Big companies are throwing money at AI and watching most of it disappear. Small businesses are doing something arguably worse. They're standing on the sidelines completely.
A survey found that eighty-two percent of businesses with fewer than five employees say AI is "not applicable" to their business. Not "too expensive." Not "too complicated." They simply can't see the opportunity sitting in front of them.
The businesses that are using AI aren't exactly crushing it either. Sixty-eight percent of small businesses now use AI tools regularly, but seventy-seven percent have no formal policies, no measurement framework, no training program in place. They're using tools the way someone uses a wrench they found in a parking lot. It works, technically. But you wouldn't build a company on that.
Here's where it gets interesting. Among small businesses that actually adopt AI with some intention behind it, eighty-seven percent report a positive business impact. Eighty-seven percent.
That gap is wide enough to drive a GT40 through. Most businesses either can't see AI's potential, or they're using it backwards. The ones who get it right almost always come out ahead. The variable isn't the AI. It's the system around it.
What the Six Percent Know That the Rest Don't
McKinsey published something worth reading carefully. Companies that achieved significant returns from AI were twice as likely to have redesigned their workflows before selecting an AI model. Before. That matters.
They didn't buy an AI tool and then scramble to figure out how to use it. They looked at their actual work, mapped out where the friction was, identified the specific task they needed AI to handle, then found the right tool for the job.
BCG found the same pattern from a different angle. Successful AI transformations put seventy percent of their effort into upskilling people, updating processes, and shifting culture. The technology itself was the last thirty percent. Most companies flip that, pouring nearly everything into the technology, then wondering why nothing changes.
This isn't surprising if you've spent any time working with AI. The tool doesn't know your business. It doesn't know what "good" looks like for your specific output. It doesn't know what you've already tried, what your customers care about, what tone your brand should carry. You have to encode all of that into the system before the AI becomes useful.
That encoding is the work. Most people skip it.
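To make the idea concrete, here is a minimal sketch of what "encoding your business into the system" can look like in practice. Everything in it is hypothetical and illustrative (the rule names, the functions, the checks); it is not part of any TotalValue product. The point is only that context lives in one place and every output gets checked against it.

```python
# Illustrative sketch: a tiny "system layer" that encodes business context
# into every request, instead of typing ad-hoc prompts into a box.
# All names here (BRAND_RULES, build_request, check_output) are invented.

BRAND_RULES = {
    "tone": "plainspoken, no hype",
    "audience": "small-business owners",
    "banned_phrases": ["game-changer", "revolutionize", "unlock"],
}

def build_request(task: str, rules: dict = BRAND_RULES) -> str:
    """Wrap a raw task in the context the model can't know on its own."""
    return (
        f"Tone: {rules['tone']}\n"
        f"Audience: {rules['audience']}\n"
        f"Never use: {', '.join(rules['banned_phrases'])}\n\n"
        f"Task: {task}"
    )

def check_output(text: str, rules: dict = BRAND_RULES) -> list[str]:
    """Return the rule violations found; an empty list means it passes."""
    return [p for p in rules["banned_phrases"] if p.lower() in text.lower()]
```

Ten lines of rules plus one automatic check is already more system than most ad-hoc prompting has: "good" is defined before the model runs, and every draft is measured against it.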
The Shelby Principle (What Ford Was Missing)
Carroll Shelby looked at the Ford GT40 and saw the same thing a good data analyst sees when a client hands them a mess of data and says "make sense of this."
The raw material is fine. Nobody built the right structure around it.
Shelby didn't invent a new engine. He engineered the suspension, the aerodynamics, the driver feedback systems. He built pit strategy, refined team communication, obsessed over weight distribution. He built the intelligence layer that turned an expensive piece of machinery into something that could actually win.
That's what I do at TotalValue. Less glamorous context. AI and prompt systems instead of racecars. Same principle.
Every product I've built answers the same question: what does the intelligence layer look like for this specific problem? Not "how do I get ChatGPT to do a thing." What system do I need so that AI produces the right output, every time, for this particular use case?
The difference matters. Asking ChatGPT to write you something is one activity. Running that output through a system built to analyze your writing for AI signature patterns, pacing problems, structural gaps, and character voice distinctiveness is another. One is typing into a box. The other is infrastructure.
The Bulletproof Writer system I built does that second thing. It runs your manuscript or content through eight analytical engines, scores it on a Success Probability formula, and surfaces the specific things that will make AI-generated or AI-assisted writing fall flat with readers. It doesn't replace your voice. It tells you where your voice disappeared and what to do about it. That's the Shelby principle applied to content.
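The actual Success Probability formula isn't published here, but the general shape of multi-engine scoring is easy to sketch: each analytical engine returns a score, and a weighted average rolls them into one number. The engine names and weights below are invented for illustration; they are not the Bulletproof Writer internals.

```python
# Hypothetical illustration of multi-engine scoring. Engine names and
# weights are invented, not the actual Bulletproof Writer formula.

ENGINE_WEIGHTS = {
    "pacing": 0.25,
    "structure": 0.25,
    "voice_distinctiveness": 0.30,
    "ai_signature_patterns": 0.20,  # higher score = fewer detectable AI tells
}

def success_probability(engine_scores: dict[str, float]) -> float:
    """Weighted average of per-engine scores, each in the range 0.0-1.0."""
    total = sum(ENGINE_WEIGHTS.values())
    return sum(ENGINE_WEIGHTS[k] * engine_scores[k] for k in ENGINE_WEIGHTS) / total
```

The useful property of a setup like this is diagnostic, not just evaluative: a low overall number points you at the specific engine that dragged it down, which is the "where your voice disappeared" feedback described above.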
What $39 Actually Gets You (And Why It Matters in 2026)
I want to be honest about what I'm selling and what I'm not.
I don't have Ford's budget. TotalValue is a small operation. I'm one person with a data background and a fairly aggressive bias toward building things that actually work rather than things that look impressive in a pitch deck.
What I've built is the system layer. Pre-engineered AI workflows that wrap around the tools you're already using and give them a job to do. Specific rules. Specific output standards. Specific checks built in.
Here's the contrast I keep coming back to: large enterprises are losing an average of seven-point-two million dollars on AI initiatives that get abandoned before they produce a dollar of value. In some boardrooms, a seven-million-dollar initiative that returns less than two million is counted as a win.
I built ten products that work. Each one targets a specific use case. Each one runs on AI infrastructure that costs me roughly nothing per month. I priced each one at thirty-nine dollars.
That's a data point, not a sales pitch. The difference between a seven-million-dollar failure and a thirty-nine-dollar system that works isn't compute power or model capability. It's whether someone built the intelligence layer correctly.
If you want to see what the system layer looks like before spending anything, the free AI diagnostic tool at the link below runs a quick analysis of where your current AI setup is breaking down. Takes five minutes. Doesn't ask for a credit card.
The Real Bubble (And When It Pops)
The AI bubble conversation has been going on for two years now. People keep asking whether AI is overhyped, whether the spending will crash, whether the technology can actually deliver what it promises.
Wrong question.
AI works. The data on that is clear. Eighty-seven percent positive impact when small businesses do it right. Three-point-seven times ROI when organizations apply it to the correct workflows. Companies that redesign their processes before touching the technology succeed at double the rate of those who don't. The technology is fine.
The bubble isn't in AI. The bubble is in the assumption that buying access to AI is the same as knowing what to build with it.
Ford had everything Shelby lacked: the money, the machinery, the engineers. What Ford was missing was someone who could look at what already existed and engineer the right system around it.
Ninety-four percent of companies are still in that position. Capable tools. Expensive infrastructure. No Shelby.
The companies that survive 2026 won't be the ones who spent the most. They'll be the ones who figured out that the intelligence layer, the system that tells the AI what to do and how to do it and what good looks like, is the actual product.
Carroll Shelby didn't have Ford's budget. He had something Ford couldn't buy: he knew exactly what to build and why.
That's the only AI strategy that works right now.
What to Do With This
If you're a small business owner who hasn't figured out where AI fits in your actual workflow, start with the free diagnostic. It asks a few questions about what you're doing and what you need, then gives you a direction rather than a list of tools to try.
If you're already using AI and finding that your output sounds robotic, your content isn't performing, or you keep getting the same generic responses no matter how you phrase the question, the system layer is what's missing. That's what TotalValue products are built to add.
The store is at totalvalue.com/products. Free diagnostic is at the link below. The website has more context on what each system does.
None of this requires a large budget or a technical background. That's the whole point. The Shelby principle works at any scale. You just have to be willing to build the system before you expect results.
Links:
- Free AI Diagnostic: totalvalue.com/products
- TotalValue Store: totalvalue.com/products
- Website: totalvalue.com
Robert Kirkpatrick is the founder of TotalValue Group LLC and builds AI prompt systems that replace work you'd normally pay a consultant to do. He's a data analyst by trade who got tired of watching people fight AI tools that were designed to help them.