
guanjiawei

Originally published at guanjiawei.ai

AI Transformation Doesn't Come from Training

Lately, when AI agents come up in conversation with friends, I've fallen into a habit. I pull out my phone, remote into my computer, and show them the agents I've had running over the past 24 hours. One has been autonomously chasing a goal for over ten hours straight. Another is running experiments and tuning parameters.

Their reactions are pretty much always the same: "Oh, so it's already at this stage. That's not what I pictured."

What they say next is the interesting part.

1. "Help Me Explain This to My Boss"

After watching, the first thing friends say usually goes like this:

"Can you come explain this to our boss?"

"I want to bring our tech lead over to see this."

"Can you give us a training session?"

Fair enough. You think this is important, and you want to bring in the people who need to see it. Good instinct.

But looking back at how AI agents actually spread through our own company, real change never came from a single class or presentation.

It started with someone who just did it themselves. They created something in their own work that made people around them do a double take.

A salesperson who suddenly talks like an engineer, while closing deals faster than ever. An admin or HR person who turns out to be doing technical work and marketing, shipping product-grade work from a role that never used to do that. People around them start to wonder. Why don't you seem like the same salesperson, the same admin anymore?

At that point, the curious ones show up naturally. Colleagues, bosses, friends. The change is happening right beside them, they can see it, and only then do they actually absorb what you're saying. Then it spreads from you to the next colleague, and the next, and out from there.

To be honest, trying to drive change by "getting the boss to sit through a lesson" rarely works. Unless that boss personally got their hands dirty on day one. Because right now, knowing what AI can and can't do comes entirely from bumping up against its boundaries yourself, not from hearing about them.

The data backs this up. A BCG report from early 2025 said 75% of executives rank AI as a top-three priority, but only a quarter have actually captured significant value. McKinsey put it more bluntly: 70% of employees skip their company's formal AI training videos entirely, learning instead by tinkering and word of mouth.

Training can only convey so much. What's scarier is that someone who hasn't deeply used AI themselves, if they go on to set policy, easily falls into one of two extremes. Either they fantasize that AI can do anything, piling on unrealistic KPIs that make their team's life miserable while they think it's all simple. Or they dismiss it entirely—"another bubble, here we go again"—and miss the real window.

So the first misconception, and I think the biggest: don't start by trying to change others. Start with yourself.

2. A PhD-Level AI Writing Weekly Reports

The second misconception is the one that really strikes me as a shame.

A lot of top companies give their employees excellent AI infrastructure. The best models, unlimited usage, loose policies. But most people, once they get access, instinctively reach for the most routine tasks. Meeting summaries. Reports. Weekly and monthly updates. And then they stop.

I'm not saying those tasks don't matter; AI really is useful for them. But stopping there is a waste.

Look deeper along the company's value chain, at its most painful links: marketing, sales, the product itself, R&D. Couldn't AI do something there too? You don't have to be an expert in any of those domains; your industry understanding plus AI's execution ability could let you build something at those nodes.

Think about it. A PhD-level AI told to write weekly reports will dutifully write weekly reports. It does what you assign. But tell it to research cutting-edge math, biology, or medicine, to run experiments and work through deductions, and it does that well too. One's a clerk, the other's a scientist. The gap is massive.

Worklytics data suggests that truly deep AI power users account for only 20–30% of an organization. The rest hold the exact same tools and use them only for the shallowest tasks. A BCG report from October 2025 likewise found that 74% of enterprises get stuck when trying to scale AI adoption. It's not that the tools don't work. It's that most users touch only one corner of them.

3. Long-Term Without Short-Term Is Unsustainable

This one is harder to spot than the first two.

After using AI for a while, a lot of people go through an emotional arc. At first they're amazed: "This is so powerful." Then they gradually shift to: "What exactly should I do?" The directions seem plentiful, all viable, but deciding specifically what to do and how to keep going is actually the hardest part.

I've fallen into this trap myself.

AI agents can do remarkable things, but they don't grant wishes. For some bigger directions, agents still burn through massive amounts of tokens and take forever. They need round after round of experimentation to explore and tune before they might yield results. They might not yield anything at all. You're at the boundary of knowledge, and probing forward has never been easy. If you bet everything on projects like that, it's easy for your enthusiasm to fizzle out. You work for ages without seeing results, and when people ask what you're doing, you can't really explain it.

So you need a mix.

Short-term things with fast positive feedback. My shortest feedback loop comes from working on my digital identity. Optimizing my website for SEO and having people find me through search. Writing blog posts and having readers get something out of them and want to share and engage. In between, I do small AI projects for friends. Helping a friend with a crawfish business. Making games for people. All of them show results quickly.

Mid-term, you need products that accumulate. The AIMA system, for example. When I show it to potential partners, some are willing to install it and promote it. That's a sturdier kind of positive feedback than "I ran an experiment."

And those deep, long-term explorations in the trenches keep running quietly in the background.

Kotter's eight-step change model has a step called "Generate Short-Term Wins." Same idea. Short-term results sustain confidence, giving you the nerve to keep chewing on hard problems. If the process also brings in some revenue to cover the token costs, the positive loop gets even stronger.

4. Prompt Engineering Is Yesterday's News

The last one, and I think a lot of people are still stuck here.

When people talk about using AI, they still fixate on prompts, thinking they need to master prompt engineering.

That was fine two years ago. Not anymore.

Give today's models a goal and a couple of sentences, and they'll go execute complex tasks. Prompts stopped being the bottleneck a while ago.

The bottleneck is the harness: how you build an environment where the agent can actually get work done.

What you need to think about has changed. How do you design the document structure of the working directory? How do you give it machines for experiments? When do you check if it's gone off track? When should you have it pivot direction or change methods? How do you do periodic summaries and archiving?
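To make those questions concrete, here is a minimal harness sketch in Python. Everything in it is an illustrative assumption, not a standard: the directory names (`tasks`, `experiments`, `archive`), the `GOAL.md` convention, the keyword-based drift check, and the fixed summary cadence are all placeholders for whatever structure fits your own agent setup.

```python
import tempfile
from pathlib import Path

# Hypothetical working-directory layout and supervision loop for an agent.
# The agent itself is abstracted as a callable that returns a status string.

def init_workspace(root: Path) -> dict:
    """Create the working-directory structure the agent operates in."""
    dirs = {name: root / name for name in ("tasks", "experiments", "archive")}
    for d in dirs.values():
        d.mkdir(parents=True, exist_ok=True)
    # A goal file the agent (and its supervisor) can always refer back to.
    (root / "GOAL.md").write_text("Goal: <one-paragraph objective>\n")
    return dirs

def run_supervised(agent_step, goal_keywords, max_steps=20, summarize_every=5):
    """Drive the agent, check for drift, and summarize periodically.

    agent_step(step) returns a short status string; a status mentioning
    none of the goal keywords is treated as a sign the agent drifted.
    """
    log, summaries = [], []
    for step in range(1, max_steps + 1):
        status = agent_step(step)
        log.append(status)
        if not any(k in status.lower() for k in goal_keywords):
            log.append(f"step {step}: off track, steering back to goal")
        if step % summarize_every == 0:
            summaries.append(f"through step {step}: {len(log)} log events")
    return log, summaries

# Usage: a stub agent that reports parameter-tuning progress each step.
root = Path(tempfile.mkdtemp())
dirs = init_workspace(root)
log, summaries = run_supervised(
    lambda i: f"tuning parameters, trial {i}",
    goal_keywords=["tuning"],
    max_steps=10,
)
```

The point of the sketch is only that the interesting decisions live in the loop around the model call, not in the call itself.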

In early 2025 Karpathy coined the term "vibe coding": casually using natural language to have AI write code, very freeform. A year later, looking back, he said the industry had moved from vibe coding to "agentic engineering," with value shifting up from syntax and implementation to judgment, taste, and management capability. Shopify's Tobi Lutke offered another term, "context engineering." It's not about how to write a good prompt, but about how to fill the agent's context window with the right information.
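At its most mechanical, context engineering is a packing problem: given more candidate material than fits in the window, pick the most relevant pieces within a token budget. The sketch below assumes a crude four-characters-per-token estimate and a caller-supplied relevance score; both are illustrative stand-ins, not any particular framework's API.

```python
def pack_context(snippets, budget_tokens, relevance):
    """Greedy context packing: most relevant snippets first, within a budget.

    snippets: candidate text chunks (files, notes, prior results).
    relevance: callable mapping a snippet to a comparable score.
    """
    est = lambda s: max(1, len(s) // 4)  # rough chars-per-token estimate
    chosen, used = [], 0
    for s in sorted(snippets, key=relevance, reverse=True):
        cost = est(s)
        if used + cost <= budget_tokens:
            chosen.append(s)
            used += cost
    return chosen

# Usage: three chunks with hand-assigned relevance; the budget forces a choice.
snippets = ["a" * 40, "b" * 400, "c" * 8]
scores = {snippets[0]: 3, snippets[1]: 1, snippets[2]: 2}
chosen = pack_context(snippets, budget_tokens=15, relevance=scores.get)
```

Real systems score relevance with embeddings or heuristics rather than a hand-built table, but the shape of the decision, relevance versus budget, is the same.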

At the end of the day, AI is a digital employee. When you work with an employee, you don't think the most important thing is crafting their first email, right? That email is a tiny piece. What you really need to figure out is how to set up a proper work environment and guidance that leverages your sense of direction and their execution power, while steering clear of the mistakes they're prone to make.

Shift your thinking from "how to write one good sentence" to "how to manage a digital employee," and collaborating with AI feels completely different.


Looking back, these four points are really one thing.

Start doing it yourself. Don't wait for others. Once you do, don't stay in the comfort zone. Look deeper along the value chain. Set your own rhythm so short-term feedback never dries up. And shift your attention from prompts to environment and collaboration.

The change you create doesn't need pushing. It spreads on its own.


References

  1. BCG, From Potential to Profit: Closing the AI Impact Gap, January 2025.
  2. McKinsey, Superagency in the Workplace: Empowering People to Unlock AI's Full Potential at Work, 2025.
  3. Tobi Lutke (Shopify CEO), Internal Memo on AI Usage Expectations, April 2025.
  4. Andrej Karpathy, Sequoia AI Ascent 2026: From Vibe Coding to Agentic Engineering, April–May 2026.
  5. Tobi Lutke & Andrej Karpathy on "Context Engineering," 2025.
  6. BCG, The Widening AI Value Gap, October 2025.
  7. Worklytics, AI Adoption Benchmarks 2025, Q3 2025.
  8. McKinsey, The State of AI in 2025, March 2025.
  9. John P. Kotter, Leading Change: Generate Short-Term Wins.

Original post: https://guanjiawei.ai/en/blog/ai-transformation-not-from-training
