<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: guanjiawei</title>
    <description>The latest articles on DEV Community by guanjiawei (@skyguan92).</description>
    <link>https://dev.to/skyguan92</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3788265%2Ff93aaebd-c44b-447a-b582-cc297747f93b.jpeg</url>
      <title>DEV Community: guanjiawei</title>
      <link>https://dev.to/skyguan92</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/skyguan92"/>
    <language>en</language>
    <item>
      <title>The DeepSeek Moment for the Open Source Community</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Tue, 07 Apr 2026 02:53:13 +0000</pubDate>
      <link>https://dev.to/skyguan92/the-deepseek-moment-for-the-open-source-community-2je2</link>
      <guid>https://dev.to/skyguan92/the-deepseek-moment-for-the-open-source-community-2je2</guid>
      <description>&lt;p&gt;A few days ago, while researching text-to-video models, I discovered something that differed significantly from my original understanding: in the directions of text-to-image and text-to-video, Chinese companies' presence in the open source community is far weaker than in language models.&lt;/p&gt;

&lt;p&gt;I had always assumed Chinese models dominated across all fronts. But upon closer inspection, that's not the case. This led me to review the changes in the open source community over the past three years.&lt;/p&gt;

&lt;h2&gt;
  
  
  Model Open Source Is Not Software Open Source
&lt;/h2&gt;

&lt;p&gt;First, let's discuss something many people may not have considered.&lt;/p&gt;

&lt;p&gt;With open source software, once the source code is released, there are no secrets left—you can rebuild an identical copy from scratch. Models are different. When models are open sourced, what you typically get are the weights and inference scripts. The core elements—training methods, training data, engineering details—usually remain undisclosed.&lt;/p&gt;

&lt;p&gt;What can you do with the weights? Deploy them for inference, fine-tune them. Want to reproduce from scratch? Nearly impossible. So "open source" for models and "open source" for software were never the same thing from the start.&lt;/p&gt;

&lt;p&gt;This leads to a question that was debated extensively in the early days: since model open source differs from software open source, what is the point of open sourcing models?&lt;/p&gt;

&lt;p&gt;The greatest benefit of open source software is community collaboration—developers worldwide fixing bugs and adding features together. But after open sourcing a model, very few individuals or institutions actually have the capability to participate in model development. You need massive compute, data, and training infrastructure. The vast majority of people can only run inference with the weights.&lt;/p&gt;

&lt;p&gt;At the time, Robin Li said open-sourcing large models made no sense, and for a while I thought he had a point. If community collaboration, the primary benefit of open source, doesn't apply, what's the point of open sourcing?&lt;/p&gt;

&lt;p&gt;Subsequent events answered this question.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Era of LLaMA
&lt;/h2&gt;

&lt;p&gt;When ChatGPT emerged at the end of 2022, large models entered the public consciousness. Before that, the open source world was rather monotonous. OpenAI had fully open sourced GPT-2 in 2019, but with GPT-3 it shifted to an API model that required applying for access. I remember wanting to use GPT-3 at a hackathon and discovering I had to email to request API access. Essentially, it was already closed source.&lt;/p&gt;

&lt;p&gt;In 2023, the open source community was dominated by Meta's LLaMA. LLaMA 1 came in February 2023, LLaMA 2 in July. Every time Meta released a new LLaMA version, a batch of Chinese models would follow with upgrade announcements—this was the so-called "Hundred Model War" (Bai Mo Da Zhan), with the pace dictated by LLaMA.&lt;/p&gt;

&lt;p&gt;Chinese models open sourced at this stage were, frankly, marketing exercises. The models released were relatively small. Zhipu's ChatGLM-6B was the earliest representative; many people's first exposure to private deployment of large models started with it. I remember a friend choosing a model at the time and wondering why he picked a Chinese one. He said it was made by Tsinghua and had a good reputation. Baichuan open sourced a 13B model. 01.AI (Zero One Everything), founded by Kai-Fu Lee, open sourced Yi-34B in November 2023, which counted as relatively large among Chinese open source models at the time. Shanghai AI Laboratory also kept iterating on the InternLM (Shusheng) series.&lt;/p&gt;

&lt;p&gt;Everyone's strategy was the same: open source small models for promotion, keep large models closed for commercialization.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Entry of Qwen
&lt;/h2&gt;

&lt;p&gt;In 2024, this balance was broken by Qwen.&lt;/p&gt;

&lt;p&gt;Starting in mid-2024, Alibaba's Qwen shipped releases at a rapid clip, covering sizes from a few billion to 72B parameters, all performing well and fully open sourced. Previously, everyone had assumed large models wouldn't be open sourced; suddenly someone was releasing a highly capable large model openly.&lt;/p&gt;

&lt;p&gt;Although LLaMA had significant international influence, its Chinese-language capabilities were always lacking, requiring further training before practical use. Qwen could be used almost out of the box for Chinese scenarios, and it quickly took over LLaMA's position in the Chinese open source community.&lt;/p&gt;

&lt;p&gt;By the end of 2024, the Qwen series had become the de facto standard for Chinese open source models. Closed source models felt pressure from open source for the first time.&lt;/p&gt;

&lt;h2&gt;
  
  
  DeepSeek Flipped the Table
&lt;/h2&gt;

&lt;p&gt;Qwen did the best within the existing rules. DeepSeek changed the rules entirely.&lt;/p&gt;

&lt;p&gt;DeepSeek entered the scene in 2024 with a simple approach: open source upon release, with extremely thorough technical reports. At the end of 2024, V3 was released—hundreds of billions of parameters, excellent performance, open sourced immediately upon release. At that time, few had seen models of this scale released openly.&lt;/p&gt;

&lt;p&gt;But what really caused an explosion was R1 in January 2025.&lt;/p&gt;

&lt;p&gt;OpenAI had just launched the o1 reasoning model in September 2024, and DeepSeek's R1 came out before the Spring Festival (Chinese New Year). Its reasoning capabilities were very close to the top closed source models at the time—not quite on par, but the gap was surprisingly small. And it was fully open sourced on day one.&lt;/p&gt;

&lt;p&gt;Previously, everyone maintained an order of "small open source, large closed source." What DeepSeek open sourced was better than many companies' best closed source models, and this order collapsed immediately.&lt;/p&gt;

&lt;p&gt;LLaMA 4 is another footnote. Meta spent a long time training a massive model to regain its position in the open source community, releasing it in April 2025, but it flopped. Performance fell far short of expectations, and cheating on benchmarks was exposed. Later, Yann LeCun himself admitted that "results were fudged," and Zuckerberg lost confidence in the entire GenAI team. LLaMA 4 is basically unused, and the LLaMA series' position in the open source community ended there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day-0 Open Source Became the Industry Standard
&lt;/h2&gt;

&lt;p&gt;After DeepSeek, Chinese model companies turned to day-0 open source one after another—their best models open sourced on the day of release.&lt;/p&gt;

&lt;p&gt;Kimi open sourced K2 in July 2025, a trillion-parameter MoE model. MiniMax open sourced M2.5. Zhipu continued iterating the GLM series. Wave after wave, the quality of open source models kept rising.&lt;/p&gt;

&lt;p&gt;Today, if you look at the international open source community for language models, the leaderboards are almost entirely Chinese models. Qwen, DeepSeek, GLM, MiniMax, Kimi—the presence of overseas models has become very weak.&lt;/p&gt;

&lt;p&gt;What DeepSeek did wasn't just contribute a model. It changed how the entire industry plays the game.&lt;/p&gt;

&lt;h2&gt;
  
  
  But the Winds Are Shifting Again
&lt;/h2&gt;

&lt;p&gt;However, the fervor for this wave of day-0 open source is cooling down.&lt;/p&gt;

&lt;p&gt;DeepSeek's last open source release was V3.2 in December 2025—more than four months ago. V4 has been rumored for a long time but hasn't appeared. During this window, everyone's strategies have started to loosen.&lt;/p&gt;

&lt;p&gt;Qwen 3.6 Plus was released at the end of March 2026, not open sourced, API-only. This is the first time a flagship Qwen model wasn't open sourced. Zhipu's GLM-5.1 was also released closed source first, although in the past couple of days they announced they would open source the weights. Many companies' latest multimodal models are no longer open sourced either.&lt;/p&gt;

&lt;p&gt;It seems we've returned to that original question: what's the point of open source? When competitive pressure decreases, the answer may change again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Text-to-Image and Text-to-Video Are Still Waiting
&lt;/h2&gt;

&lt;p&gt;Returning to that initial surprising discovery.&lt;/p&gt;

&lt;p&gt;The text-to-image open source community is still dominated by overseas models. The most widely used are Stability AI's Stable Diffusion series and Black Forest Labs' FLUX series. Chinese models have made some progress—Qwen released Qwen-Image, Tencent has Hunyuan Image 3.0, Zhipu has GLM-Image. But compared to the situation with language models, the difference is vast.&lt;/p&gt;

&lt;p&gt;Text-to-video is the same. Alibaba's last open source text-to-video model was Wan 2.2 in July 2025—almost 9 months ago with nothing new since. The recently popular open source text-to-video model is LTX-2, from Israeli company Lightricks, which open sourced weights in January 2026.&lt;/p&gt;

&lt;p&gt;This is a completely different world from language models. On the language model side, Chinese models have filled the entire open source community; on the text-to-image and text-to-video side, it still looks more like 2023: overseas models dominate, Chinese models appear sporadically.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are We Waiting For
&lt;/h2&gt;

&lt;p&gt;Everyone is waiting for DeepSeek V4.&lt;/p&gt;

&lt;p&gt;But we're waiting for more than just a model. DeepSeek previously proved something: a sufficiently strong model fully open sourced on the day of release can change the strategic direction of an entire industry. This happened with language models; it hasn't happened yet with text-to-image and text-to-video.&lt;/p&gt;

&lt;p&gt;I sometimes half-jokingly think that maybe DeepSeek just needs to make another move. But then again, DeepSeek itself hasn't released a new model in over four months either.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/open-source-deepseek-moment" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/open-source-deepseek-moment&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>models</category>
      <category>reflection</category>
    </item>
    <item>
      <title>Not One Revolution, But Two</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Fri, 03 Apr 2026 03:45:49 +0000</pubDate>
      <link>https://dev.to/skyguan92/not-one-revolution-but-two-9l9</link>
      <guid>https://dev.to/skyguan92/not-one-revolution-but-two-9l9</guid>
      <description>&lt;p&gt;Recently, chatting with a few friends, plus spending ten-plus hours daily immersed in coding agents myself, one feeling has become increasingly clear: this wave of AI isn't one revolution—it's two.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Programming Track Has Been Discussed to Death
&lt;/h2&gt;

&lt;p&gt;The first track is programming and productivity. Coding agents, AI office assistants—the entire workflow of white-collar workers in cognitive processing and software creation is being reshaped. There's been plenty of discussion on this track, and I've written about it before, so I won't expand on it here.&lt;/p&gt;

&lt;p&gt;Some numbers to give you a sense of scale: GitHub's statistics say AI has written 26.9% of production code, developers using AI save an average of 3.6 hours per week, and merged PRs are up 60%. But there are dissenting voices—METR's randomized controlled trial found that core contributors to 16 large open-source projects actually slowed down by 19% after using AI tools, though they &lt;em&gt;felt&lt;/em&gt; 20% faster. Faced with complex, large codebases, AI hasn't yet reached the point where using it mindlessly is always better.&lt;/p&gt;

&lt;p&gt;My own sense is that when you know what you want, coding agents can indeed boost efficiency by an order of magnitude. The key phrase is "knowing what you want."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Other Track, Probably Underestimated
&lt;/h2&gt;

&lt;p&gt;The second track is content creation. Text-to-image, text-to-video, image editing—what's happening in this direction is no less intense than the programming side.&lt;/p&gt;

&lt;p&gt;ByteDance's Seedance 2.0 is a watershed moment.&lt;/p&gt;

&lt;p&gt;Released in February, API opened at the end of March. Artificial Analysis benchmarked it at Elo 1269, surpassing Google Veo 3, Sora 2, and Runway Gen-4.5. Not just a little ahead—it's in a different league. It can generate up to 20 seconds of 1080p video in one go, with music, dialogue, and sound effects synchronized, no post-production dubbing needed. Camera movement, lighting, character actions—all precisely controllable.&lt;/p&gt;

&lt;p&gt;A friend of mine working on AI short dramas said there are tons of teams in the industry sitting on tens of millions in RMB cash waiting to use this. The day the API opened, my social media feed exploded with entrepreneurs in related fields—short films made with it flooded the screen, and the results were completely different from before.&lt;/p&gt;

&lt;p&gt;The cost changes are staggering. Previously, producing a 25-minute episode of Japanese animation cost roughly 1 to 3.2 million RMB ($140k–$450k USD). &lt;em&gt;Attack on Titan&lt;/em&gt; was about 1.1 million per episode, &lt;em&gt;Jujutsu Kaisen&lt;/em&gt; about the same. Now a 3-minute AI short costs 400 to 1,200 RMB ($55–$170 USD). Per-minute costs have dropped to a few percent of traditional methods. In blind tests, 73% of viewers couldn't tell the difference.&lt;/p&gt;

&lt;p&gt;I also have friends running small live commerce teams who tell me content production costs have dropped to one-tenth of before, and speed has increased tenfold. Previously a content person cost over 10,000 RMB monthly; now the same output costs an order of magnitude less. Completely supply-constrained.&lt;/p&gt;

&lt;p&gt;Imagine someone spending a few thousand RMB and one week to make a 20-minute anime short. Previously impossible. Now the outline is clear—text-to-video has reached its "ChatGPT moment." Once the path is found, cost reduction is only a matter of time. That's AI's rhythm: someone blazes a trail, then cheaper solutions inevitably follow, because it's fundamentally software, endlessly iterable through data and training.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two Paths Diverging
&lt;/h2&gt;

&lt;p&gt;I have a friend whom I recommended coding agents to. He said he hasn't had the energy to look into them lately—not because he isn't smart, but because his time and passion are entirely in content creation, figuring out how to use the latest tools to build pipelines, how to express creativity at low cost. I thought about it—why should he research coding agents? That's not his direction.&lt;/p&gt;

&lt;p&gt;The reverse is also true. I spend ten-plus hours daily in coding agents; if you asked me to research content-creation pipelines, I couldn't reach his level in a short time.&lt;/p&gt;

&lt;p&gt;In my &lt;a href="https://dev.to/skyguan92/ai-doesnt-amplify-skills-it-amplifies-passion-1j64"&gt;previous post&lt;/a&gt;, I wrote about the divergence between Builders and Promoters. This time the feeling is more concrete: these two paths differ not just in roles, but in the tools, skills, and passion required. There's overlap, but it's shrinking, while the divergence is growing.&lt;/p&gt;

&lt;p&gt;This means it's not just product builders undergoing massive change. People passionate about expression—content creators—will next receive tools of equal magnitude. If you have strong desire to express something, for the cost of a few dozen RMB and half a day, you can make a one-minute short video. Smartphones and Douyin (TikTok) already lowered the barrier to shooting videos once; next, that barrier will drop another order of magnitude.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent to Agent: Meetings Can Die
&lt;/h2&gt;

&lt;p&gt;Both paths share a common downstream impact: both are changing how people interact with each other.&lt;/p&gt;

&lt;p&gt;A friend asked me: AI is so powerful now, so if I return to face-to-face interaction fields—sales, investor relations, supply chain—is that safer?&lt;/p&gt;

&lt;p&gt;I don't think so necessarily.&lt;/p&gt;

&lt;p&gt;I now think meetings are an extremely inefficient method. I record audio, and AI transcribes it and organizes it into a document containing more information than a two-hour meeting. The other party uses an agent to digest this document, extracting the most crucial points in minutes. What takes me five minutes this way might take a two-hour meeting to convey in full.&lt;/p&gt;

&lt;p&gt;The efficiency difference is 100x. This isn't rhetoric.&lt;/p&gt;

&lt;p&gt;Infrastructure is rapidly taking shape. Google released the Agent2Agent (A2A) protocol last year, with over 50 enterprises including Salesforce, SAP, and PayPal pushing it forward. Anthropic's MCP lets agents access various tools and data sources. Early this year, the Linux Foundation established the Agentic AI Foundation, with OpenAI, Anthropic, Google, and Microsoft all joining; by February, over 100 enterprises had followed. Gartner predicts that by year-end, 40% of enterprise applications will embed AI agents; last year that number was under 5%.&lt;/p&gt;

&lt;p&gt;Your agent and the other party's agent analyzing, transmitting information, and coordinating tasks in the background will be far more professional than two people chatting face-to-face. Agents don't need small talk, don't need two weeks to build trust. They can synthesize judgments from all verifiable information, more reliable than listening to someone say two sentences and then "feeling like this person is okay."&lt;/p&gt;

&lt;h2&gt;
  
  
  What E-commerce Eliminated
&lt;/h2&gt;

&lt;p&gt;This reminds me of e-commerce.&lt;/p&gt;

&lt;p&gt;Before e-commerce, all transactions had to be face-to-face. To buy something, you had to meet the seller, establish a relationship, judge each other. The seller's social skills and charisma were key factors in closing deals.&lt;/p&gt;

&lt;p&gt;E-commerce arrived. Products are similar: compare prices, look at reviews, order in ten minutes. You don't know the seller, you've never met them, and they may not even know who bought. The same holds in B2B—buyers complete 57% to 70% of procurement research before contacting sales, and 67% of the procurement journey happens online.&lt;/p&gt;

&lt;p&gt;E-commerce didn't eliminate retail. Physical retail still accounts for 81.6% of US retail sales. But it restructured the relationship logic in transactions—face-to-face social skills went from "essential" to "nice to have."&lt;/p&gt;

&lt;p&gt;Agent-to-agent will bring a similar but much stronger wave. When the efficiency difference is 100x, in many scenarios people will choose the more efficient method. It's not that people aren't important; it's that business matters now have better channels for handling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Don't Talk Business on the Court
&lt;/h2&gt;

&lt;p&gt;There's a side effect of this that I think is quite good: human relationships will become purer.&lt;/p&gt;

&lt;p&gt;Before, you had to talk business when playing ball, discuss partnerships over meals. Face-to-face was the most efficient business communication method, so social and business were always mixed. You didn't go play ball just because you liked it, but because you could meet people you wanted to know on the court.&lt;/p&gt;

&lt;p&gt;Once agents handle the business part, playing ball is just playing ball. Socializing is just socializing. No need to maintain unwanted relationships in situations you don't enjoy.&lt;/p&gt;

&lt;p&gt;Intimacy, entertainment, interests—these needs will be separated from business scenarios. It's not that they aren't important; it's that they can finally just be themselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Find It
&lt;/h2&gt;

&lt;p&gt;Two revolutions running simultaneously, each requiring massive time to master.&lt;/p&gt;

&lt;p&gt;The pattern I've observed is simple: people with passion soak ten hours daily, and they master every tool. People without passion occasionally open and look, and the gap quickly becomes orders of magnitude. The AI leverage is there; whether you can pry it loose depends on whether you're willing to keep applying force.&lt;/p&gt;

&lt;p&gt;Whether building products or creating content—go find that thing you can't stop doing.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/two-ai-revolutions" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/two-ai-revolutions&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>entrepreneurship</category>
      <category>thoughts</category>
    </item>
    <item>
      <title>AI Doesn't Amplify Skills, It Amplifies Passion</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Thu, 02 Apr 2026 04:41:35 +0000</pubDate>
      <link>https://dev.to/skyguan92/ai-doesnt-amplify-skills-it-amplifies-passion-1j64</link>
      <guid>https://dev.to/skyguan92/ai-doesnt-amplify-skills-it-amplifies-passion-1j64</guid>
      <description>&lt;p&gt;I recently came across the story of Zhang Xue Motorcycles and was deeply moved.&lt;/p&gt;

&lt;p&gt;Zhang Xue builds motorcycles. Born in 1987, he dropped out of middle school at 14 to apprentice as a motorcycle repairman. At 19, he chased a Hunan TV film crew over 100 kilometers of mountain roads in a rainstorm on a broken motorcycle, just to show off his riding skills on camera. In 2013, he arrived in Chongqing with 20,000 RMB, later co-founding Kove Motorcycles and growing annual sales from 800 units to 30,000. In 2024, at age 37, he gave up his equity in Kove to start Zhang Xue Motorcycles, determined to build his own proprietary engines.&lt;/p&gt;

&lt;p&gt;Many people see only the final results: in 2025, Zhang Xue Motorcycles hit 750 million RMB in revenue. This March, they won two consecutive races in the WorldSSP class at the WSBK Portuguese round, finishing 3.685 seconds ahead of second place. It was the first time a Chinese motorcycle brand won at this level, breaking decades of dominance by Ducati, Yamaha, and Kawasaki.&lt;/p&gt;

&lt;p&gt;But what sticks with me isn't these achievements—it's a few details from his interviews.&lt;/p&gt;

&lt;h2&gt;
  
  
  I'll Cover the Eight Hundred Meters
&lt;/h2&gt;

&lt;p&gt;One segment left a deep impression. They were working on improving the machining precision of a core engine component, from "five si" to "three si." One &lt;em&gt;si&lt;/em&gt; is 0.01 millimeters, so going from five si to three si means tightening the tolerance from 0.05mm to 0.03mm, a 40% reduction. This isn't just changing a parameter; it requires upgrading cutting margins, tool control, material consistency, and the entire process chain, all while maintaining consistency in mass production. No one in the domestic supply chain had done this before. Suppliers refused to cooperate, deeming the risks too high.&lt;/p&gt;

&lt;p&gt;Zhang Xue told these suppliers: I'll cover the R&amp;amp;D and trial-and-error costs. You don't need to worry. If we succeed, we share the rewards.&lt;/p&gt;

&lt;p&gt;Think about that scene. He wasn't exactly a major player in the industry chain—just a downstream buyer, probably seen as a small client by these suppliers. But what he said followed no "buyer" logic.&lt;/p&gt;

&lt;p&gt;What was it then? It was that the act of doing this thing itself held meaning greater than profit or risk. He was desperate to see it happen, willing to put himself on the line for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ten Years Obsessing Over a Pair of Glasses
&lt;/h2&gt;

&lt;p&gt;Zhang Xue isn't the only one.&lt;/p&gt;

&lt;p&gt;Misa Zhu Mingming, founder of Rokid, was still at Alibaba in 2012 when he attended the Google Glass launch. Watching someone skydive while wearing the glasses, he was stunned. He later said only two thoughts ran through his mind: this thing will change the world, and we can do it better.&lt;/p&gt;

&lt;p&gt;He decided to leave Alibaba to start his own company. Jack Ma personally talked with him for four hours, asking why he wanted to leave. He replied: "AI and AR will change human lifestyles, and eventually they'll become the same thing."&lt;/p&gt;

&lt;p&gt;Rokid was founded in 2014. But the AR glasses supply chain was immature at the time, so they had to survive by making smart speakers first—the Rokid Alien and Moonstone. Then Baidu drove smart speaker prices down to 89 RMB, while Rokid sold theirs for 1,399. They couldn't compete with the BAT subsidy war. Misa publicly stated that the AI industry would enter a 12-to-18-month winter.&lt;/p&gt;

&lt;p&gt;Once they stabilized slightly, he immediately plunged back into glasses.&lt;/p&gt;

&lt;p&gt;By 2025, Rokid produced 49-gram AI+AR glasses—the lightest in the world, supporting real-time translation in over 50 languages. This February, when German Chancellor Merz visited Hangzhou, he personally tried on a pair. After experiencing the real-time translation, he said two words: "Accurate, fast." Seven or eight German business executives accompanying him placed orders on the spot.&lt;/p&gt;

&lt;p&gt;The guy who posted on WeChat Moments ten years ago saying he wanted to make glasses now has the German Chancellor wearing what he built.&lt;/p&gt;

&lt;p&gt;Misa still posts two or three product-related updates on Moments daily. It's not marketing—he genuinely thinks it's cool and wants people to see it. He once said something I find accurate: "Entrepreneurship is learning 99% of things you don't particularly like for the sake of 1% hobby and dream."&lt;/p&gt;

&lt;h2&gt;
  
  
  Not Businessmen
&lt;/h2&gt;

&lt;p&gt;These people keep me thinking about one question: What's the difference between entrepreneurs and businessmen?&lt;/p&gt;

&lt;p&gt;China has no shortage of people who know how to do business. From ancient times to now, there have always been plenty of people trying to figure out how to make money. Businessmen find optimal solutions within existing frameworks—doing whatever makes money, whatever is most efficient. There's nothing wrong with that; society needs them.&lt;/p&gt;

&lt;p&gt;But that's not what Zhang Xue and Misa are doing.&lt;/p&gt;

&lt;p&gt;I later came across the Austrian School economists and realized they spotted this difference long ago. Mises called entrepreneurs "the driving force of the entire market system," distinguishing between entrepreneurs and managers: managers optimize within given parameters, while entrepreneurs change the parameters themselves. Schumpeter put it more strongly—he said what entrepreneurs do is called "creative destruction," dismantling old structures from within to create new ones. He even believed the primary driver for innovators wasn't profit, but a "will to conquer."&lt;/p&gt;

&lt;p&gt;Looking back, Zhang Xue bearing suppliers' trial-and-error costs wasn't because he calculated the ROI and found it worthwhile; he simply had to see three-si precision become reality in a domestic motorcycle engine. Misa spent ten years obsessing over glasses and nearly went under; he started in 2014, when Google Glass had just failed and everyone thought AR was dead.&lt;/p&gt;

&lt;p&gt;These people aren't running in the race; they're building their own track.&lt;/p&gt;

&lt;p&gt;Wang Xingxing of Unitree is the same. In 2016, he worked at DJI for two months before leaving to found Unitree. At the time, quadruped robots were almost all hydraulic—expensive and clumsy. He bet on electric drive and built core components himself. In 2021, they launched the Go1, priced under $2,700, while Boston Dynamics' Spot sold for $75,000. Today Unitree holds over 60% of the global quadruped robot market. Liang Wenfeng went from quantitative finance to building AGI, hiring based on passion and curiosity rather than degrees. He once said: "The phase of following is over. It's time to lead."&lt;/p&gt;

&lt;h2&gt;
  
  
  What Exactly Does AI Amplify
&lt;/h2&gt;

&lt;p&gt;Back to AI.&lt;/p&gt;

&lt;p&gt;I used to think AI's value lay in amplifying skills. Programmers code ten times faster, designers generate images ten times faster—skills multiplied by a coefficient. That sounded reasonable.&lt;/p&gt;

&lt;p&gt;But in the process of pushing this within teams, I realized that's not the case.&lt;/p&gt;

&lt;p&gt;The bottleneck isn't whether AI can help you do something. It's whether you know what to use it for.&lt;/p&gt;

&lt;p&gt;Previously, measuring someone's ability meant "what are you good at"—finishing tasks was the standard. Now the threshold for finishing tasks has been drastically lowered by AI. What becomes scarce then? It's your ability to consistently do something well, to do it differently from others.&lt;/p&gt;

&lt;p&gt;The answer comes from passion.&lt;/p&gt;

&lt;p&gt;I've observed a clear divergence within teams. Some people progress rapidly with AI, not because of strong technical foundations, but because they know what they want. Their direction is clear—they might be orchestrating ten AI agents running different tasks simultaneously, staring at one thing for the long haul. Others have decent skills but just don't know what to do with AI. Their previous work mode was waiting for assigned tasks, completing them, getting feedback. Now that loop is broken—you have to find your own direction, drive yourself. During the process, no one might cheer you on; you have to judge for yourself whether it's worth continuing.&lt;/p&gt;

&lt;p&gt;Previously, this judgment mattered less because organizational structure made decisions for you. Now it's different.&lt;/p&gt;

&lt;h2&gt;
  
  
  Builders and Promoters
&lt;/h2&gt;

&lt;p&gt;Thinking further down this path, I increasingly feel that the corporate form will gradually weaken.&lt;/p&gt;

&lt;p&gt;The future might look more like MCNs. Two core roles: Builders, who focus intensely on building products, turning what they see into reality; and Promoters, who have influence and can make good things seen.&lt;/p&gt;

&lt;p&gt;Many Builders are themselves Promoters. Zhang Xue's interview stories spread naturally; Misa promotes his products on Moments daily; Wang Xingxing's robots performed at the Spring Festival Gala. These creators are themselves distribution nodes.&lt;/p&gt;

&lt;p&gt;Between these two roles, the loop might close. The remaining organizational form might resemble MCNs—providing infrastructure, taking a thin slice of profit, offering some incubation support.&lt;/p&gt;

&lt;h2&gt;
  
  
  Find Your Passion
&lt;/h2&gt;

&lt;p&gt;My mindset is quite different from before. Things I used to value—position in an organization, amount of resources, whether others approve—now feel less critical. Instead, there's a greater sense of freedom.&lt;/p&gt;

&lt;p&gt;What AI gives people, ultimately, is the possibility of landing your passion. Previously, having passion but no team or capital meant many things were simply impossible. Now one person plus AI can accomplish more than a small team could before.&lt;/p&gt;

&lt;p&gt;So the better question isn't "Will AI replace me?" It's "Where exactly is my passion?" Do you want to build something, or do you want good things to be seen by more people?&lt;/p&gt;

&lt;p&gt;When anxious, it's easy to overlook one change: AI is enabling more Zhang Xues and more Misas to emerge from unexpected corners. These people might not have big company backgrounds or many resources—just a direction they absolutely must pursue, then pushing forward with AI.&lt;/p&gt;

&lt;p&gt;I don't know exactly where, but within six months we should see quite a few.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/ai-amplifies-passion" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/ai-amplifies-passion&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>创业</category>
      <category>思考</category>
    </item>
    <item>
      <title>Growth Money Can't Buy</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Mon, 30 Mar 2026 15:28:23 +0000</pubDate>
      <link>https://dev.to/skyguan92/growth-money-cant-buy-13e1</link>
      <guid>https://dev.to/skyguan92/growth-money-cant-buy-13e1</guid>
      <description>&lt;p&gt;Recently, while researching product cold starts, I ended up reading quite a bit about product competition. There were things I thought I understood before, but after reading, I realized I only knew about them—I didn't truly understand them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Old Formula Is Broken
&lt;/h2&gt;

&lt;p&gt;In the past, the playbook for product competition was simple: whoever had the louder voice and more resources would win. Go big, plaster ads everywhere, and users would naturally come.&lt;/p&gt;

&lt;p&gt;This playbook started to crumble in the AI space around early last year.&lt;/p&gt;

&lt;p&gt;On January 20, 2025, DeepSeek released the R1 model. Almost nobody had heard of this company before—no brand recognition, no marketing budget. They just quietly posted the model to open-source communities and pushed a chat feature to their app.&lt;/p&gt;

&lt;p&gt;Then came the Spring Festival.&lt;/p&gt;

&lt;p&gt;In roughly seven days, DeepSeek reached over 100 million users. It topped the App Store charts in both China and the US, servers were overwhelmed, and they even suggested users try other AI assistants at one point. Yes, the servers couldn't handle the load—they were actually diverting traffic away. But DeepSeek didn't seem to care much; they just kept open-sourcing and updating.&lt;/p&gt;

&lt;p&gt;The most representative comparison at the time was Doubao. Doubao is ByteDance's AI app, with arguably the largest resource investment domestically, and it had long held the top spot for DAU among AI apps. Before it, Kimi had reportedly spent hundreds of millions on marketing for growth, investing in Bilibili, Douyin, and the education market. Then when Doubao made its move, Kimi's buzz dropped significantly.&lt;/p&gt;

&lt;p&gt;Then DeepSeek came along, spent not a single penny, and surpassed Doubao outright.&lt;/p&gt;

&lt;p&gt;You thought those who spent big were safe, but then someone who spent nothing overtook you. That was quite a shock.&lt;/p&gt;

&lt;h2&gt;
  
  
  Blazing-Fast Iteration
&lt;/h2&gt;

&lt;p&gt;Later, I went to check the version numbers for Claude Code and Codex, and I was a bit shocked.&lt;/p&gt;

&lt;p&gt;Claude Code's latest version is 2.1.87. From its first release in February 2025 to now, they've shipped 365 versions total, averaging one every 1.1 days. You open it today, and there's likely something new.&lt;/p&gt;

&lt;p&gt;Codex's latest stable version is 0.117.0. Since launching in April last year, they've released 132 stable versions, plus twenty to thirty alpha iterations per version—basically moving every day.&lt;/p&gt;

&lt;p&gt;This isn't some small utility. Claude Code has over 10 million weekly npm downloads, Codex over 3 million. At this scale, iteration speed is measured in days.&lt;/p&gt;

&lt;p&gt;Horizontal speed is even more absurd.&lt;/p&gt;

&lt;p&gt;Last Thursday (March 27), DingTalk open-sourced workspace-cli, breaking core features into fine-grained commands so AI agents could directly manipulate calendars, todos, and messages. The next day, Feishu open-sourced larksuite/cli, covering 11 business scenarios and over 200 commands. A day after that, WeChat Work's wecom-cli appeared on GitHub.&lt;/p&gt;

&lt;p&gt;Three companies, three days, almost identical moves. Faced with a valid direction, follow-up speed is measured in days. How long can a "first-mover advantage" last in this environment? Maybe two to three weeks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Doing Things That Don't Scale
&lt;/h2&gt;

&lt;p&gt;While researching cold starts, I discovered a counterintuitive pattern: many products that later became huge didn't start by spending money, but rather by doing things that simply couldn't be scaled.&lt;/p&gt;

&lt;p&gt;Paul Graham wrote "Do Things That Don't Scale" in 2013, and it had a big impact in startup circles. The core idea: in the early days, don't worry about scaling; use unscalable methods first.&lt;/p&gt;

&lt;p&gt;Sounds like motivational fluff, but the case studies convince you otherwise.&lt;/p&gt;

&lt;p&gt;Stripe's founders, Patrick and John Collison, wouldn't send a link when someone said "I could try your payment product" and leave them to figure it out alone. Instead, they'd say "Give me your laptop," and integrate Stripe into the person's code on the spot. Paul Graham thought this move was brilliant and named it the "Collison Installation," teaching it in every Y Combinator batch since.&lt;/p&gt;

&lt;p&gt;Airbnb did something similar. Early New York listings had terrible photos, taken casually on phones. Brian Chesky and Joe Gebbia flew over, rented a camera, and went door-to-door helping hosts shoot professional photos. Listings with professional photos saw booking rates jump 2.5x immediately.&lt;/p&gt;

&lt;p&gt;DoorDash was even more direct. The founders started as delivery drivers themselves, taking PDF menus from restaurants, using their own phone numbers as customer service. When someone ordered, they'd bike there and deliver it themselves. Pinterest's Ben Silbermann noticed early users were mostly design enthusiasts, so he went alone to offline meetups for design bloggers and recruited them one by one. Tinder took it further—Whitney Wolfe went to college sorority parties to promote it, with entry requiring the Tinder app installed on your phone.&lt;/p&gt;

&lt;p&gt;The common thread in all these tactics: founders personally handled things, serving users one by one, doing things that couldn't scale. But because of this, that first batch of people genuinely felt "this thing is different," and then spontaneously spread the word for you.&lt;/p&gt;

&lt;p&gt;Thinking about it, DeepSeek's path is essentially the same. Not through advertising, but because the product itself was so good others couldn't help but tell people.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Two-Week Lead, Then Back to Zero
&lt;/h2&gt;

&lt;p&gt;Putting all this together, my understanding of product competition has definitely changed.&lt;/p&gt;

&lt;p&gt;Previously, gaps were created through resources and channels—whoever had more ads and stronger brands occupied more market. That advantage could last a long time. Now that's not working. Distribution infrastructure is too developed; something truly good can spread at almost zero cost. DeepSeek reached 100 million in seven days—the fastest ever. The supply side is completely different too; building a feature used to take months, now Claude Code ships daily, and DingTalk open-sources today with Feishu following tomorrow.&lt;/p&gt;

&lt;p&gt;You do well, and your lead might last two to three weeks. If you don't keep getting better during those weeks, users will switch without hesitation. Claude Code and Codex are in this state now—you ship a feature, I match it, users switch between both.&lt;/p&gt;

&lt;h2&gt;
  
  
  The New Decisive Factor
&lt;/h2&gt;

&lt;p&gt;So lately, I've grown increasingly indifferent to "keeping an eye on competitors." I used to think this was basic product work: competitive analysis, differentiation positioning, finding white space. Now, in the AI space, spending too much time watching what others are doing is worse than spending that time with your own users. The landscape you see today might change in two weeks.&lt;/p&gt;

&lt;p&gt;I now think what really matters is whether the product itself works, whether users can feel "this is actually different." DeepSeek spent nothing on promotion, Claude Code barely does any acquisition, yet users are flooding in. Then there's the relationship with early users—how to find those who genuinely think this is cool and make them co-builders. Stripe's "Collison Installation" established this tight relationship between founders and users. In the early days, that relationship is worth far more than ten thousand users from ads.&lt;/p&gt;

&lt;p&gt;The AI era does give product people more choices and freedom—you can focus more on the product itself rather than constantly competing over who has more resources and louder voices. Of course those things still matter, but they're no longer overwhelming.&lt;/p&gt;

&lt;p&gt;At the end of the day, there's really just one question: Are you creating value, or just consuming others' attention?&lt;/p&gt;

&lt;p&gt;Once you figure that out, a lot of second-guessing disappears.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/money-cant-buy-growth" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/money-cant-buy-growth&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>product</category>
      <category>competition</category>
      <category>reflection</category>
    </item>
    <item>
      <title>The Day I Couldn't Log Into My Computer, I Decided to Build a Product</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Fri, 27 Mar 2026 02:39:16 +0000</pubDate>
      <link>https://dev.to/skyguan92/the-day-i-couldnt-log-into-my-computer-i-decided-to-build-a-product-26o1</link>
      <guid>https://dev.to/skyguan92/the-day-i-couldnt-log-into-my-computer-i-decided-to-build-a-product-26o1</guid>
      <description>&lt;p&gt;I spent two days tinkering with it, but couldn't get a small tool to work.&lt;/p&gt;

&lt;p&gt;Last January, I used Claude Code for the first time. At the time it was connected to Kimi's model, and I was trying to make a browser extension. Every time it said "it's done," but when I tested it, nothing worked. After two days of errors and back-and-forth fixes, I was about to give up. Continuing wasn't just a matter of failing to solve the problem; it was also burning through tokens.&lt;/p&gt;

&lt;p&gt;Then Anthropic released Claude Opus 4.6.&lt;/p&gt;

&lt;p&gt;I bought a membership on a whim and threw the same requirements at it. Half an hour, from scratch, done in one go. It called the browser extension to check the results itself, simulated user input to test itself, found bugs and fixed them itself, then told me: "You can try it now."&lt;/p&gt;

&lt;p&gt;I opened it and looked—it really worked.&lt;/p&gt;

&lt;p&gt;That moment gave me a strong feeling: AI capabilities seemed to suddenly cross a line at some point. At least in certain scenarios, it's not just helpful anymore—it's far beyond what you expect from it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Different Models, Absurdly Huge Gaps
&lt;/h2&gt;

&lt;p&gt;Later this experience happened again.&lt;/p&gt;

&lt;p&gt;I was using Claude Code with Opus 4.6 to build a small plugin, helping a friend install a Feishu (Lark) browser automation tool. I struggled for three or four hours; every time it said it was done, running it turned up bugs. I tried all sorts of methods but just couldn't get it to work.&lt;/p&gt;

&lt;p&gt;Later, OpenAI released GPT-5.4. I said, "Fine, your turn." It ran for over two hours, during which I barely intervened, and finally got it through.&lt;/p&gt;

&lt;p&gt;Honestly, by then I had no expectations left, thinking this was probably impossible within current AI capabilities. But it just did it.&lt;/p&gt;

&lt;p&gt;Same task—one model just couldn't do it no matter what, but switch to another and it worked. The gap between them isn't a matter of degree—it's a matter of can or cannot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Couldn't Log Into the Computer
&lt;/h2&gt;

&lt;p&gt;Then one day, something happened.&lt;/p&gt;

&lt;p&gt;I don't know what configuration I touched while tinkering around, but suddenly my Mac couldn't log in. Enter username and password, press return, wait a moment, snap—back to the login screen. I tried several times, but just couldn't get in.&lt;/p&gt;

&lt;p&gt;Before, I might have thought: ask a friend for help? Contact Apple support? But that whole process of logging into the website, queuing, making an appointment, and granting remote access might not even solve it. Just thinking about it gave me a headache.&lt;/p&gt;

&lt;p&gt;Then I thought: can I have AI on another machine fix this?&lt;/p&gt;

&lt;p&gt;The problem was, I couldn't even open my computer. How would the AI connect?&lt;/p&gt;

&lt;p&gt;I started searching through the network, seeing which devices still worked. Looking around, I found one machine previously connected to the Zhipu AI model. The others either didn't have agents or couldn't connect; only this one still worked. Fine, might as well try—it was a long shot.&lt;/p&gt;

&lt;p&gt;I used that machine to remotely connect to the problematic Mac and had AI troubleshoot. It checked on one side, while I kept trying to log in on this end, feeding new error messages back to it.&lt;/p&gt;

&lt;p&gt;Half an hour later, it found the cause. Some configuration file had issues; it helped me change it back, and I immediately logged in.&lt;/p&gt;

&lt;p&gt;The feeling at that moment was very concrete. When you really need help and the usual channels can't provide it, an AI agent that can connect to your machine, even if it isn't running the best model, can just get things done.&lt;/p&gt;

&lt;h2&gt;
  
  
  Becoming More Like a Hacker
&lt;/h2&gt;

&lt;p&gt;After fixing the computer, I started thinking about something else: can I use a coding agent even when I'm not in front of the computer?&lt;/p&gt;

&lt;p&gt;I asked Claude Code what to do. It recommended Tailscale—I had no idea what that was before. After setting it up, all my machines connected to a virtual network. Even my phone could connect.&lt;/p&gt;

&lt;p&gt;Since then, walking down the street thinking of something, I pull out my phone to connect to my computer and have the coding agent help me work. Actually, remotely controlling AI to do things has been possible for a long time—most people just don't know how.&lt;/p&gt;

&lt;p&gt;Later I encountered another problem: when the phone network disconnects or the app exits, the running task breaks. I asked AI again, and it taught me to use tmux to let programs run persistently in the background. After setting it up, tasks can keep running, and I can check progress anytime by connecting in.&lt;/p&gt;
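&lt;p&gt;The pattern above is simple to reproduce. A minimal sketch, assuming tmux is installed (the session name and the command it runs are placeholders, not the exact commands I used):&lt;/p&gt;

```shell
# Start a detached session named "agent"; whatever runs inside it
# survives dropped connections and the phone app exiting.
tmux new-session -d -s agent 'your-long-running-command'

# From any device on the network, reattach to watch progress:
tmux attach -t agent

# Or print the session's current screen without attaching:
tmux capture-pane -t agent -p
```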

&lt;p&gt;That period was quite magical. Remote control, background running, multi-machine networking—I used to think these were programmer things, far from me. I asked AI one question and it was all done. I really felt like I had become the hacker I imagined before.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Thing I Feared Most
&lt;/h2&gt;

&lt;p&gt;But after getting used to this capability, the thing I feared most changed.&lt;/p&gt;

&lt;p&gt;I'm not afraid of the computer breaking. I'm afraid of the agent dying.&lt;/p&gt;

&lt;p&gt;Several times, a coding agent on some machine suddenly wouldn't open due to software updates or configuration changes. You've completely gotten used to going to it when there's a problem; when it suddenly disappears, you don't even know how to fix it.&lt;/p&gt;

&lt;p&gt;I later figured out a pattern: as long as there's at least one machine in the network with a working coding agent, I can use it to fix other machines—just like fixing that Mac that couldn't log in before.&lt;/p&gt;

&lt;p&gt;But what if the last agent dies too?&lt;/p&gt;

&lt;p&gt;Honestly, that would really make me panic.&lt;/p&gt;

&lt;h2&gt;
  
  
  An Idea
&lt;/h2&gt;

&lt;p&gt;Thinking about this, I realized the problem wasn't just mine.&lt;/p&gt;

&lt;p&gt;More and more friends around me are starting to use AI tools and install coding agents, and when everything works, it feels great. But the prerequisite tasks of setting up access and configuring environments are much harder than they look. Once something breaks, most people don't know how to fix it.&lt;/p&gt;

&lt;p&gt;You were enjoying the convenience, and suddenly it stops working. This feeling of disappointment is worse than never having used it at all.&lt;/p&gt;

&lt;p&gt;I wondered: can we stop making everyone fiddle with configurations themselves? When you need help, open the terminal, type one command, and have an AI connect to help solve the problem?&lt;/p&gt;

&lt;p&gt;This was the starting point for building Aima Service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Aima Service
&lt;/h2&gt;

&lt;p&gt;I spent time with the team turning this idea into a product.&lt;/p&gt;

&lt;p&gt;Several AI agents run in the backend, on standby 24/7. You don't need technical knowledge in advance—open the terminal with one command and it starts.&lt;/p&gt;

&lt;p&gt;For example, if the coding agent won't open, it helps you fix it. Want to install a new AI tool or programming environment? Hand it over. Device has strange technical failures? Let it troubleshoot.&lt;/p&gt;

&lt;p&gt;We don't guarantee 100% success. AI capabilities are still growing; some scenarios that can't be handled today might work later. But the current success rate is higher than most people think.&lt;/p&gt;

&lt;p&gt;We're offering large amounts of free credits now—no need to pay, just log in and use it.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI's Most Moving Moments
&lt;/h2&gt;

&lt;p&gt;Finally, something unrelated to the product.&lt;/p&gt;

&lt;p&gt;My wife doesn't usually use AI much, and doesn't really know when to use it. Once she had a tricky situation communicating with her child's teacher and didn't know how to phrase things. She tried asking DeepSeek. It gave some communication advice, helped her avoid several phrasings that could easily cause misunderstandings, and even encouraged her a bit.&lt;/p&gt;

&lt;p&gt;She told me: AI is much more powerful than she imagined.&lt;/p&gt;

&lt;p&gt;Since then she started actively seeking AI's help.&lt;/p&gt;

&lt;p&gt;I've seen similar changes in myself, in her, and in many friends around me. People's views on AI often change not because of daily convenience, but at moments when they really need help and other methods don't work—they're moved.&lt;/p&gt;

&lt;p&gt;I want more people to have such moments. That's what Aima Service is doing.&lt;/p&gt;

&lt;p&gt;Go to &lt;a href="https://aimaserver.com" rel="noopener noreferrer"&gt;aimaserver.com&lt;/a&gt; and give it a try.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/why-i-built-aima-service" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/why-i-built-aima-service&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>product</category>
      <category>aimaservice</category>
    </item>
    <item>
      <title>Something More Worth Discussing Than AI Anxiety</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Wed, 25 Mar 2026 03:09:47 +0000</pubDate>
      <link>https://dev.to/skyguan92/something-more-worth-discussing-than-ai-anxiety-2m2i</link>
      <guid>https://dev.to/skyguan92/something-more-worth-discussing-than-ai-anxiety-2m2i</guid>
      <description>&lt;p&gt;Yesterday I saw the news that Zhang Xuefeng had passed away—41 years old, sudden cardiac death. Three days prior, he was still checking in on WeChat Moments about his running, with a monthly mileage of 72 kilometers. He felt unwell after an afternoon run, was taken to the hospital, and three hours later, he was gone.&lt;/p&gt;

&lt;p&gt;He left behind nine companies, an education group valued at hundreds of millions, and a nine-year-old daughter.&lt;/p&gt;

&lt;p&gt;Just a few days ago, I wrote an article about AI entrepreneurship mentioning that the body is the hardest infrastructure in this wave of transformation. Work intensity is increasing, the pace is accelerating, but the body doesn't become more durable just because technology advances. I didn't expect to see news like this the very next day.&lt;/p&gt;

&lt;p&gt;This isn't an isolated case. Ding Yun from Huawei, 53, died after running 28 kilometers. Sun Jian, chief scientist at Megvii, 45, passed away suddenly in the early morning. Zhang Rui, founder of Chunyuyisheng (Spring Rain Doctor), 44, died of a heart attack. Approximately 550,000 people die of sudden cardiac death in China each year—that's 1,500 people daily. These numbers existed before AI emerged, and the work intensity of the AI era will only make the situation more severe.&lt;/p&gt;

&lt;p&gt;But what occupied my thoughts for a long time wasn't the topic of health preservation. It was that when facing these events, all the narratives about technological transformation and productivity explosions suddenly seemed very light.&lt;/p&gt;

&lt;h2&gt;
  
  
  Made $100 Million, Then What
&lt;/h2&gt;

&lt;p&gt;The experience of Peter Steinberger, the creator of OpenClaw, can be viewed alongside this.&lt;/p&gt;

&lt;p&gt;He spent 13 years building a PDF SDK serving nearly a billion users, with clients including Apple, Adobe, and Dropbox. In 2021, he sold the company to Insight Partners for approximately $100 million.&lt;/p&gt;

&lt;p&gt;Then he fell apart.&lt;/p&gt;

&lt;p&gt;He described himself as "completely shattered." Waking up every morning with nothing to look forward to, no real challenges. He tried traveling, socializing, therapy—nothing filled that void. He told Lex Fridman: "If you wake up with nothing to look forward to, that boredom comes very quickly." By the end of 2024, he couldn't write a single line of code.&lt;/p&gt;

&lt;p&gt;Later, he regained his passion because of AI agents, creating one of the fastest-growing open-source projects on GitHub in three months. But compared to the comeback itself, I'm more concerned about that blank period. A person who had already "succeeded" could still get completely stuck when faced with questions of meaning.&lt;/p&gt;

&lt;p&gt;After external drivers disappear, what keeps a person moving forward? This question isn't just his.&lt;/p&gt;

&lt;h2&gt;
  
  
  The $120 Billion Deficit of Meaning
&lt;/h2&gt;

&lt;p&gt;On one side is AI anxiety; on the other is AI metaphysics.&lt;/p&gt;

&lt;p&gt;During the 2025 Spring Festival, DeepSeek fortune-telling hit the hot search lists. Input your birth date and time, and AI will cast your astrological chart, read your wealth luck, and predict your marriage. Zi Wei Dou Shu, Bazi (Eight Characters), Tarot, I Ching—all included, with more interactive engagement than human fortune-tellers.&lt;/p&gt;

&lt;p&gt;This isn't a niche phenomenon. iiMedia Research predicts that China's AI fortune-telling market will exceed 120 billion yuan in 2025, with an annual growth rate of 35%. One leading platform has over 80 million registered users, conducting 2 million divinations daily. The user profile is also intriguing: ages 20 to 35 account for 73%, and those with bachelor's degrees or higher make up 82%.&lt;/p&gt;

&lt;p&gt;It's not elderly people passing time. It's young, educated people seeking some kind of certainty from an algorithm.&lt;/p&gt;

&lt;p&gt;I don't think this is a resurgence of superstition. It's more like a symptom. When technological change becomes too violent and the old narratives—work hard and you'll succeed, having a house and car means a good life—start to loosen, people instinctively search for more fundamental things to hold onto. Where do I come from? Where am I going? GPT can run all it wants, but it can't answer these. Rather, the faster technology moves, the more unavoidable these questions become.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Isn't Here to Steal Your Job
&lt;/h2&gt;

&lt;p&gt;The logic chain behind most people's anxiety goes like this: AI can do the work → my job gets replaced → I have no value.&lt;/p&gt;

&lt;p&gt;But there's a hidden premise in this chain: human value equals labor output.&lt;/p&gt;

&lt;p&gt;Pull it out and look at it—this premise is actually quite suspicious.&lt;/p&gt;

&lt;p&gt;If AI dramatically increases total social productivity, with digital employees running 24/7 and marginal costs approaching zero, then why does everyone still have to labor personally to obtain resources? With cheaper productivity that doesn't eat, sleep, or need social insurance, logically people should be liberated. To bask in the sun, to do things they're passionate about, to create something for fun or purely for leisure. Wouldn't that be a better life?&lt;/p&gt;

&lt;p&gt;But people don't think about this. What they anxiously fear is having their job stolen. Actually, what they should be anxious about is something else: how the fruits of production are distributed.&lt;/p&gt;

&lt;p&gt;Productivity is becoming increasingly abundant, but the distribution mechanism is still the old system—capital concentrated in the hands of a few, operating through market prices. The result is that a tiny minority decides where society's major resources go. Technology amplifies existing disparities, not equality.&lt;/p&gt;

&lt;p&gt;Rather than panicking, we should seriously discuss: when technological transformation reaches this level, how do we achieve more reasonable redistribution? After people's time is liberated, where should needs be directed?&lt;/p&gt;

&lt;h2&gt;
  
  
  We're Missing a Positive Vision of the Future
&lt;/h2&gt;

&lt;p&gt;I was chatting with a friend some time ago, and he said something that touched me deeply: what we're most lacking right now is effective imagination about the future.&lt;/p&gt;

&lt;p&gt;Think about it carefully—it's true.&lt;/p&gt;

&lt;p&gt;For decades, we actually had a consensus about what the future looked like. Economic growth, urbanization, globalization, continuously rising living standards. This framework was strong enough that it didn't need special discussion. The entire society was filled with a thriving sense of expectation.&lt;/p&gt;

&lt;p&gt;Now that consensus has shattered. Regarding the future, most of the imagination we have comes from Hollywood. &lt;em&gt;The Matrix&lt;/em&gt;, &lt;em&gt;Blade Runner&lt;/em&gt;—the undertones are all dystopian: wealth disparity, corporate rule, technology out of control. Games occasionally offer interesting visions of social interaction in virtual worlds, but overall, exciting positive visions are painfully scarce.&lt;/p&gt;

&lt;p&gt;This is quite dangerous. Pessimistic imagination certainly has its value, but if that's all we have, people are left with only fear and defense. A society driven by fear makes every choice a contractionary one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technology Is Neutral; Choices Aren't
&lt;/h2&gt;

&lt;p&gt;Technology itself isn't good or evil. How it's used, how it's distributed, how society organizes itself to adapt to it—these determine the outcomes.&lt;/p&gt;

&lt;p&gt;These aren't questions engineers can answer. Economists, sociologists, policymakers—these are the people who should step to the forefront precisely when technology is running fastest.&lt;/p&gt;

&lt;p&gt;AI shakes things much deeper than most people realize. The meaning of labor, the logic of wealth, how time should be spent. These assumptions haven't been truly questioned for decades, and now they're all loosening.&lt;/p&gt;

&lt;p&gt;The barriers to content creation are falling, and the costs of film, television, and cultural products are dropping. Under these conditions, earnestly imagining a positive vision of the future and investing in it is no longer a luxury. The United States is retreating from leading such imagination; if someone is going to take up the mantle, now is the time.&lt;/p&gt;

&lt;p&gt;Back to the beginning. Zhang Xuefeng is gone, at 41. In the face of this, technological competition, industry anxiety—honestly, none of it is worth that much emotional energy. What deserves serious thought are those old questions: how to make society a little better, how to let people live more like human beings.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/beyond-ai-anxiety" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/beyond-ai-anxiety&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>thinking</category>
      <category>society</category>
    </item>
    <item>
      <title>Software Bidding Is on the Brink of Collapse</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Tue, 24 Mar 2026 16:22:45 +0000</pubDate>
      <link>https://dev.to/skyguan92/software-bidding-is-on-the-brink-of-collapse-538n</link>
      <guid>https://dev.to/skyguan92/software-bidding-is-on-the-brink-of-collapse-538n</guid>
      <description>&lt;p&gt;A few days ago I had dinner with a friend who works in government-enterprise software. He said he's bid on seven or eight tenders this year, won one, and the quoted price was pressed down to one-third of what it used to be. It wasn't that the client was deliberately lowballing—someone really was bidding at that price, and the deliverable looked decent enough.&lt;/p&gt;

&lt;p&gt;He asked me what was going on. I told him to look into how fast AI writes bid documents and builds software now.&lt;/p&gt;

&lt;p&gt;This got me thinking for quite a while. The bidding system in the software industry might not hold out much longer.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Old System Worked
&lt;/h2&gt;

&lt;p&gt;China's government procurement market is massive. In 2024, total government procurement nationwide exceeded 3.37 trillion yuan, with open bidding accounting for 76.63%. IT-related government procurement was roughly 143.3 billion yuan, down 4.5% year-over-year, but the number of projects actually grew by 21.5%—nearly 79,000 projects. In other words, more projects, but smaller individual contracts.&lt;/p&gt;

&lt;p&gt;In the past, software bidding operated stably because of several barriers.&lt;/p&gt;

&lt;p&gt;First, &lt;strong&gt;qualifications&lt;/strong&gt;. This ensured bidding companies were legitimate, with corresponding capabilities and experience. But this barrier never blocked too many people—companies that wanted in always found a way. Restrict it too much and you lose the meaning of competition.&lt;/p&gt;

&lt;p&gt;Then, &lt;strong&gt;information asymmetry&lt;/strong&gt;. Many companies only learned about tenders after the bid was announced, leaving insufficient preparation time, or never knew about them at all. The window between announcement and deadline objectively kept some competitors out. Even if you knew, preparing a proper bid document required massive time and manpower investment; companies that weren't fully prepared suffered during technical evaluation.&lt;/p&gt;

&lt;p&gt;Finally, &lt;strong&gt;commoditized comparison&lt;/strong&gt;. This is the core design logic of bidding: treat software as a standardized commodity, write clear functional parameters, and score against them item by item. If you hadn't built similar systems before, you couldn't produce many features, couldn't describe the technology clearly, and certainly had nothing to show during demos.&lt;/p&gt;

&lt;p&gt;These barriers combined to keep competition within a relatively controllable range. For a long time, the system worked well enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Barriers Loosening Simultaneously
&lt;/h2&gt;

&lt;p&gt;AI is simultaneously loosening these barriers.&lt;/p&gt;

&lt;p&gt;Qualifications need little explanation—they were never a real moat anyway.&lt;/p&gt;

&lt;p&gt;What's changing fastest is information asymmetry. Quite a few companies are already using AI tools to scan bidding information across the entire web. Bailian Zhineng's "Zhiliao Bid Intelligence" covers over 100,000 bidding websites nationwide, accumulating more than 300 million bidding records, updating 20 million daily, with AI prediction accuracy for winning bids exceeding 70%. Qianlima Bidding Network integrated DeepSeek and other large models in 2025, updating 300,000 bidding information items daily, with AI monitoring the entire process from project approval to bid announcements, automatically screening high-potential projects.&lt;/p&gt;

&lt;p&gt;You used to miss bids simply because you didn't know they existed; now AI is watching constantly. Don't have time to prepare the bid documents? AI can help with that too.&lt;/p&gt;

&lt;p&gt;In bid document generation, the market grew 230% year-over-year in 2025. iFlytek's "Spark Bidding" claims to compress bid document preparation from 30 days to 3 days, improve document compliance by 90%, and increase win rates by 40%. Tai Toubiao claims to generate thousand-page bid documents in 30 minutes, extracting over 200 key elements in 3 minutes. Kuai Biaoshu AI directly promises rapid generation in 10 minutes.&lt;/p&gt;

&lt;p&gt;I can't fully verify these vendors' numbers. But the direction is clear: the time and manpower needed to prepare a bid document is shrinking dramatically. The gap between well-prepared and poorly-prepared bids used to be huge; now that gap is narrowing fast.&lt;/p&gt;

&lt;p&gt;The most critical barrier is the last one.&lt;/p&gt;

&lt;p&gt;Bidding requires clear functional parameter descriptions, then comparing finished or near-finished products. Previously, if you hadn't built similar systems, you truly couldn't produce anything. But now, the clearer the functional description, the faster AI builds it. Throw the parameter requirements from the bidding document to a coding agent, spend a few thousand dollars plus a few days, and you can produce something demonstrable. Screenshots, feature points, even something running that evaluation experts can see.&lt;/p&gt;

&lt;p&gt;Among Y Combinator's Winter 2025 batch, 25% of startups had 95% of their code generated by LLMs. YC CEO Garry Tan put it directly: you no longer need a 50-to-100-person engineering team. Cursor's DAU exceeds one million, with annualized revenue breaking $2 billion by early 2026. 84% of developers are using or planning to use AI coding tools; 41% of code already involves AI generation.&lt;/p&gt;

&lt;p&gt;Is that enough to handle bidding? Yes. The cost of subsequent refinement and delivery is also dropping significantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Used to Be Worth ¥500k, Now ¥50k Gets You In
&lt;/h2&gt;

&lt;p&gt;Previously, a software project quoting 500,000 yuan seemed normal—development and testing alone required considerable manpower and time. Now these costs are compressed to one-tenth or even less. Some dare to enter at 50,000 yuan, with AI-written bid documents, products built by coding agents, and only minimal actual costs borne by themselves.&lt;/p&gt;

&lt;p&gt;Making matters worse, China's enterprise SaaS industry is already struggling. EY's 2024 report shows Chinese enterprise SaaS listed companies averaged negative net profit margins over the past four and a half years. Gross margins under 60%, sales expense ratio 30%, R&amp;amp;D expense ratio 20%. The industry as a whole is still losing money.&lt;/p&gt;

&lt;p&gt;The software outsourcing field is worse. In 2025, the industry's real state was summarized as a "three-piece set": unfinished projects, wage arrears, and layoff waves. SaaS products targeting SMEs have become so homogenized they're competing on manpower. IBISWorld estimates 2025 industry profit margins at just 12.5%.&lt;/p&gt;

&lt;p&gt;Against this backdrop, bidding quotes continue to be driven down. The entire system is starting to look somewhat absurd.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Path LLMs Took Last Year
&lt;/h2&gt;

&lt;p&gt;Actually, LLMs already walked this exact path last year.&lt;/p&gt;

&lt;p&gt;In early 2025, DeepSeek R1 was released, matching or exceeding OpenAI o1 on key benchmarks, with API pricing at roughly 4% of OpenAI's. The same inference task costing $100 on OpenAI o1 cost about $3.60 on DeepSeek. Training costs were reportedly around $6 million.&lt;/p&gt;

&lt;p&gt;The Chinese market subsequently erupted into a price war. ByteDance's Doubao priced its Pro-32k model at 0.0008 yuan per thousand tokens, 99.3% below the industry average, with daily token usage breaking 500 billion by July 2024, up 22 times from May. Alibaba's Tongyi Qianwen slashed Qwen-Long's input price from 0.02 to 0.0005 yuan, a 97% cut. Baidu directly announced ERNIE Speed and ERNIE Lite would be free, with all Wenxin models becoming free from April 2025.&lt;/p&gt;

&lt;p&gt;A RAND Corporation report found Chinese AI models cost roughly one-quarter to one-sixth of comparable US systems.&lt;/p&gt;

&lt;p&gt;Top-tier models are all open source now; paying for a model itself no longer makes sense. The ending everyone saw later was that the model business became services around models. Helping enterprises use models well, doing post-training, industry adaptation—these have value. The model itself is no longer the transaction subject.&lt;/p&gt;

&lt;h2&gt;
  
  
  Software Is Taking the Same Path
&lt;/h2&gt;

&lt;p&gt;Now looking back at software.&lt;/p&gt;

&lt;p&gt;With production costs falling to sufficiently low levels, demand-side parties will eventually ask: why organize such complex procurement processes to buy this thing?&lt;/p&gt;

&lt;p&gt;The logic is identical to models. When supply-side costs approach zero, organizing complex transactions specifically for it loses meaning.&lt;/p&gt;

&lt;p&gt;How can the bidding system respond? One direction is to stop comparing functional parameters and instead compare experience and cases. How many clients have you served, how long has the system been running, do you have large-scale usage records? But this violates the original intent of bidding. The whole point of bidding was to compare things as standardized commodities to ensure fair competition. If it ultimately comes down to comparing experience and connections, how is that different from skipping the bid and directly appointing a vendor?&lt;/p&gt;

&lt;p&gt;The problem is stuck here. Compare functionality, and AI can help you build it quickly—no differentiation. Compare experience and resources, and that's not what bidding is supposed to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  Not Far Off
&lt;/h2&gt;

&lt;p&gt;Many people think AI's impact on their industry is still years away. Bidding might not have that long.&lt;/p&gt;

&lt;p&gt;In 2024, IT government procurement project numbers grew 21.5%, but total value dropped 4.5%. More projects, less money. Add AI's compression of software costs on top of this, and we hit the tipping point faster than most expect.&lt;/p&gt;

&lt;p&gt;The last time I saw a similar situation was the LLM market in early 2025. From DeepSeek's open source to major vendors' comprehensive price wars, it took just a few months. Everyone was forced to transform, no longer selling the models themselves but selling services around models.&lt;/p&gt;

&lt;p&gt;The software industry will likely follow the same path. The transaction value of software itself will continue to shrink. What can actually be sold for money are services surrounding software: helping clients clarify requirements, continuous operations and maintenance, data migration, process transformation.&lt;/p&gt;

&lt;p&gt;Back to the friend's confusion at the beginning. He's thinking about how to adapt to the new price competition. But perhaps what he should be thinking about isn't how to win on price, but how much longer this game itself can last.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/software-bidding-is-broken" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/software-bidding-is-broken&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>reflections</category>
      <category>industry</category>
    </item>
    <item>
      <title>What's Blocking AI Startups Isn't Technology</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Mon, 23 Mar 2026 15:13:28 +0000</pubDate>
      <link>https://dev.to/skyguan92/whats-blocking-ai-startups-isnt-technology-2a5o</link>
      <guid>https://dev.to/skyguan92/whats-blocking-ai-startups-isnt-technology-2a5o</guid>
      <description>&lt;p&gt;At lunch today, I overheard most of a conversation from the table next to mine.&lt;/p&gt;

&lt;p&gt;They were probably peers—there are a lot of tech companies in this area. The first half was all complaints about the changes AI had brought, saying software is becoming less valuable, that they now need to find two or three times as many clients to meet the same performance targets as before, and that they feel more exhausted.&lt;/p&gt;

&lt;p&gt;Then suddenly the tone shifted. Someone suggested whether they could build a replacement for some HR-related service themselves. Another person immediately chimed in, saying they knew people with resources, and could this be turned into a business?&lt;/p&gt;

&lt;p&gt;They had been anxious just moments before; now they were excited.&lt;/p&gt;

&lt;h2&gt;
  
  
  Anxiety and Excitement—Everyone Has Them
&lt;/h2&gt;

&lt;p&gt;It wasn't just that table. In elevators, at restaurants—casual conversations these days are basically all about AI. It's the same with friends around me: anxious that their old rhythm has been disrupted, excited that there might be new opportunities.&lt;/p&gt;

&lt;p&gt;These two emotions alternate. Everyone is the same. I think this is just the new normal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Questioned After Nine Days Without a Release
&lt;/h2&gt;

&lt;p&gt;The pace of change is indeed accelerating.&lt;/p&gt;

&lt;p&gt;Today OpenClaw released their latest version. Before that, there had been no updates for nine days, and many in the community were asking: What's going on? Are they cooking up something big?&lt;/p&gt;

&lt;p&gt;Nine days. People thought nine days of silence was worth asking about.&lt;/p&gt;

&lt;p&gt;It turned out they were indeed cooking up something big—a complete overhaul of permissions and the ecosystem. But I'm more interested in the rhythm itself. Previously, when building software, planning in weekly cycles seemed normal. Now, if an idea hasn't produced anything visible within a week, people basically stop paying attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  First Bottleneck: Passion
&lt;/h2&gt;

&lt;p&gt;At this speed, what actually blocks startup teams?&lt;/p&gt;

&lt;p&gt;I think the first thing is passion. Not the motivational-speaker kind.&lt;/p&gt;

&lt;p&gt;Changes are too fast, and attention shifts too quickly. Trying to persist in such a massive wave purely for profit is unrealistic. Production has sped up, but demand hasn't kept pace. People still make decisions slowly; consumers are too lazy to think or to switch. This inertia is currently the biggest source of stickiness, while technology itself offers no real moat.&lt;/p&gt;

&lt;p&gt;You have to find joy in the thing itself. Otherwise, you won't last long.&lt;/p&gt;

&lt;h2&gt;
  
  
  Second Bottleneck: Focus
&lt;/h2&gt;

&lt;p&gt;Twenty percent of energy here, thirty percent there, forty percent in another direction. You could barely get away with this before. Not anymore.&lt;/p&gt;

&lt;p&gt;The entire industry feels like a 100-meter sprint. If the core team is still pacing themselves for a marathon—or even a brisk walk—they'll soon find people passing them by in waves until they can't see them anymore.&lt;/p&gt;

&lt;p&gt;Direction follows the founding team's level of commitment. Whether a small team can fully dedicate themselves to one thing—this focus directly determines whether they can keep up with the pace. Teams with divided attention won't survive long at this velocity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Third Bottleneck: Infrastructure and Process
&lt;/h2&gt;

&lt;p&gt;Some bottlenecks are surprisingly not at the technical level, but in very basic places.&lt;/p&gt;

&lt;p&gt;The product is ready, you're preparing to launch, and you discover the domain needs to be filed for registration—5 to 20 business days. Overseas? Register the domain and it's usable immediately. Just this one step differs by an order of magnitude in speed.&lt;/p&gt;

&lt;p&gt;Similar issues include internal organizational approval processes, spending velocity, and decision-making chains. Before, these were just a bit slower, nothing noticeable. But when technology has reached this magnitude of speed, the surrounding parts that can't keep up become glaring obstacles.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fourth Bottleneck: Physical Health
&lt;/h2&gt;

&lt;p&gt;This one surprises many people.&lt;/p&gt;

&lt;p&gt;Everyone thought AI would make things easier. But currently, that's not the case. AI still can't operate autonomously 24/7 unsupervised, so the load on humans has actually increased.&lt;/p&gt;

&lt;p&gt;You can run five agents in parallel, building and testing simultaneously. But human attention has to switch rapidly between several tasks, constantly checking results and correcting direction.&lt;/p&gt;

&lt;p&gt;Honestly, the mental energy consumed by directing a team of agents is greater than directing a team of humans. It's like doing high-intensity workshops all day long—not occasionally, but every single day.&lt;/p&gt;

&lt;p&gt;If your body can't handle it, you'll fall behind quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transitional State
&lt;/h2&gt;

&lt;p&gt;My feeling right now is that this is a transitional period.&lt;/p&gt;

&lt;p&gt;If some core directions and value chains begin to be validated, you can build a 24/7 AI loop around them, so things no longer rely entirely on human energy. And if you can attract more contributors, it's no longer just one person or a small team burning tokens to push things forward—more people's brainpower and resources can join in.&lt;/p&gt;

&lt;p&gt;This is already happening with our own open-source projects. Someone submitted a feature entirely using an AI coding agent, and it got merged—and this person isn't a programmer. As long as you have ideas, you can participate; identity shifts from consumer to builder.&lt;/p&gt;

&lt;p&gt;The people at the table next to me during lunch—anxious in the first half, excited in the second—these two states will probably persist for a long time. It's just that in this process, you have to know exactly where you're stuck.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/ai-startup-bottleneck" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/ai-startup-bottleneck&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>startup</category>
      <category>thoughts</category>
    </item>
    <item>
      <title>The Open Source Community Is Undergoing a 'Short-Video' Explosion</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Mon, 23 Mar 2026 02:04:16 +0000</pubDate>
      <link>https://dev.to/skyguan92/the-open-source-community-is-undergoing-a-short-video-explosion-1khj</link>
      <guid>https://dev.to/skyguan92/the-open-source-community-is-undergoing-a-short-video-explosion-1khj</guid>
      <description>&lt;p&gt;Recently, while chatting with friends about what's happening in the software industry, I often compare it to short videos. At first I thought it was just an analogy, but the more I thought about it, the more I realized the underlying logic is almost identical.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Short Videos Took Off
&lt;/h2&gt;

&lt;p&gt;Initially, watching videos came with costs. Data was expensive—a video could be dozens of megabytes. Who dared to binge-watch when running low on data at the end of the month? Later, 4G became widespread, followed by 5G. Data plans got cheaper and cheaper, making it effortless to watch videos anytime, anywhere. The barrier to watching videos was essentially eliminated.&lt;/p&gt;

&lt;p&gt;The production side was changing simultaneously. In the past, shooting and editing a video yourself felt like building a rocket, to exaggerate a bit. Then Jianying (CapCut) came out, along with live streaming—you could just point the camera at your face and shoot, and people would watch immediately. Production costs were compressed close to zero.&lt;/p&gt;

&lt;p&gt;When both ends dropped to near zero, the platform in the middle exploded.&lt;/p&gt;

&lt;p&gt;I used to look at this industry's data at my company. In 2017, the short video advertising market was roughly 50-60 billion RMB. A few years later, it surged to over 200 billion, and later with e-commerce and live streaming added, the entire ecosystem hit over 400 billion. The lion's share of profits was captured by Douyin (TikTok) and Kuaishou.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Software Industry Is Reliving This
&lt;/h2&gt;

&lt;p&gt;Previously, for an open source project, downloading and installing posed quite a barrier for ordinary people. The documentation was mediocre, environment configuration was full of pitfalls, and non-technical people basically couldn't get it working. Now it's different—when you encounter problems, just ask ChatGPT or DeepSeek, and it guides you step by step. Most open source projects can get running. Acquiring software has become much simpler.&lt;/p&gt;

&lt;p&gt;GitHub's data tells the story. In 2025, global developers exceeded 180 million, with over 36 million new additions in one year—equivalent to one new registration every second. India added 5.2 million in a year, while Indonesia grew from 900,000 in 2020 to over 4.3 million. 80% of new users used Copilot in their first week. Many aren't programmers in the traditional sense; AI has lowered the barrier to entering the open source community.&lt;/p&gt;

&lt;p&gt;After our team started using open source community infrastructure, we found many colleagues had actually used it before, or at least registered accounts. With the proliferation of AI, people can now simply ask, "How do I install and use this software?" and follow the steps to basically get it working.&lt;/p&gt;

&lt;p&gt;The changes on the production side are even more dramatic. Coding agents are driving software development costs toward zero. Previously, building an MVP required at least a small team working for several months. Now, one person plus a coding agent can produce something usable in a few days. Cursor has over a million DAU, and by early 2026 its annualized revenue exceeded $2 billion. 84% of developers are using or planning to use AI coding tools; 41% of code already involves AI generation.&lt;/p&gt;

&lt;p&gt;Recently, I've seen people sharing that they previously knew nothing about technology, but now have open sourced projects on GitHub with thousands of stars, which they find incredible. &lt;em&gt;MIT Technology Review&lt;/em&gt; listed "generative programming" as one of the top 10 breakthrough technologies of 2026. Entrepreneurs, marketers, and designers are all starting to build software directly with AI tools. One growth marketer built a cryptocurrency visualization app using ChatGPT plus Lovable without knowing how to code. This was unthinkable before.&lt;/p&gt;

&lt;p&gt;Production costs approach zero, acquisition costs also approach zero. As the platform in the middle, the open source community is facing the same situation as short videos.&lt;/p&gt;

&lt;h2&gt;
  
  
  OpenClaw
&lt;/h2&gt;

&lt;p&gt;OpenClaw deserves a separate discussion.&lt;/p&gt;

&lt;p&gt;This project launched only at the end of 2025, and already has over 250,000 stars—the fastest-growing open source project in GitHub history. Linux took many years to reach that number.&lt;/p&gt;

&lt;p&gt;The reaction it sparked in China is even more interesting. In March, nearly a thousand people queued outside Tencent's headquarters in Shenzhen to have engineers help them install OpenClaw. The line included students, retirees, and office workers—not just programmers. "Raising a crawfish" (a pun referring to the project's name) became a meme. Longgang District in Shenzhen offered subsidies up to 10 million RMB for teams starting businesses based on OpenClaw; Wuxi followed with 5 million. Alibaba Cloud, Tencent Cloud, ByteDance, JD.com, and Baidu have all launched their own versions.&lt;/p&gt;

&lt;p&gt;Looking at OpenClaw alone, you might think it's just a popular project. But viewed through the framework I just described, it's actually a microcosm of platform effects beginning to manifest. Producers use coding agents for rapid iteration; users leverage AI to get started quickly; GitHub handles distribution at near-zero cost. The cycle from a good software's birth to adoption by hundreds of thousands of people used to take a long time; now the speed is completely different.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommendation: Pay Attention to Open Source Communities
&lt;/h2&gt;

&lt;p&gt;Looking ahead, I believe open source communities mean to the software industry what short video platforms mean to the entertainment industry.&lt;/p&gt;

&lt;p&gt;In 2025, GitHub added 121 million new repositories—230 new projects created every minute—with over 500 million pull requests merged in a year. AI-related projects grew 178% in one year, with over 1.1 million repositories using LLM SDKs.&lt;/p&gt;

&lt;p&gt;Open source communities are no longer just places for programmers to exchange code. Production happens here, distribution happens here, collaboration happens here, and influence grows from here. With results from open source projects, you can already leverage significant traffic and resources.&lt;/p&gt;

&lt;p&gt;If you're in the software industry, whether in tech or product, I recommend seriously paying attention to open source communities. Not the "keep an eye on it" kind, but treating it as a core channel to cultivate.&lt;/p&gt;

&lt;p&gt;The driving forces behind this are AI and coding agents. But at the phenomenon level, you'll see good software in open source communities spreading and diffusing at exaggerated speeds—just like how short video platforms exploded after the widespread adoption of 4G and smartphones.&lt;/p&gt;

&lt;p&gt;For those already in the software industry, this is worth paying extra attention to.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/opensource-is-the-new-douyin" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/opensource-is-the-new-douyin&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>thoughts</category>
    </item>
    <item>
      <title>What Agents Lack Isn't Intelligence—It's Trust</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Wed, 18 Mar 2026 02:08:57 +0000</pubDate>
      <link>https://dev.to/skyguan92/what-agents-lack-isnt-intelligence-its-trust-34lk</link>
      <guid>https://dev.to/skyguan92/what-agents-lack-isnt-intelligence-its-trust-34lk</guid>
      <description>&lt;p&gt;Recently, while working on an AI product, I hit a wall.&lt;/p&gt;

&lt;p&gt;We had been building around two core philosophies. The first is &lt;strong&gt;zero-friction onboarding&lt;/strong&gt;. Open the terminal, type one line, hit enter, and you're using it. No software installation, no permission requests, no operating system security popups to deal with. Earlier during promotion, we discovered that friction during the onboarding process was the number one killer of trial rates—users would get frustrated before they even started. After achieving zero friction, success rates improved significantly and the experience felt great.&lt;/p&gt;

&lt;p&gt;The second is &lt;strong&gt;extremely powerful AI intelligence&lt;/strong&gt;. With such simple onboarding, users just need to state their requirements, and the agent handles the rest. We designed an agent team architecture combining hybrid models with multiple workers collaborating to handle complex tasks at the lowest possible cost and time.&lt;/p&gt;

&lt;p&gt;Both pillars were in place, and the results were decent.&lt;/p&gt;

&lt;p&gt;But when demonstrating it to others, the reactions were far weaker than I expected. I kept wondering where the problem was.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fear
&lt;/h2&gt;

&lt;p&gt;One day I was having dinner with a friend and discussing this when it suddenly clicked.&lt;/p&gt;

&lt;p&gt;When our product executes tasks, lines of commands pop up in the terminal. Technical friends find it interesting, saying the command choices are good and the task breakdown is well done. But for most people, when a bunch of incomprehensible code suddenly appears on screen, their first reaction isn't amazement—it's fear.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What's this doing? Will it delete my stuff? Will it break my computer?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Previously, a user gave me feedback after using it: "Are you executing a script? What's written in that script?" I was quite puzzled at the time—why would they think that? Another person said: "Wow, it really finished! But... what exactly is this thing?"&lt;/p&gt;

&lt;p&gt;Even with a technical background, just looking at an interface doesn't let you fully understand what the agent is doing. Let alone ordinary people.&lt;/p&gt;

&lt;p&gt;No understanding, therefore fear.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trust Is Broken
&lt;/h2&gt;

&lt;p&gt;Thinking back to the early promotion days, some people would rather have me help them remotely than let the agent do it. Even though the operation was more troublesome, they felt more at ease. Having a person there, someone to communicate with if something went wrong, gave them peace of mind. Knowing that Guan Jiawei was the one helping them do this—they trusted this person.&lt;/p&gt;

&lt;p&gt;When switched to an agent, that layer of trust disappeared.&lt;/p&gt;

&lt;p&gt;On one side is extremely powerful intelligence, making autonomous decisions and executing on your device. On the other side is completely incomprehensible output. The device is my asset; having something indescribable messing around on it makes everyone uncomfortable.&lt;/p&gt;

&lt;p&gt;The stronger the intelligence, the more incomprehensible the exposed behavior becomes, and the more afraid users get. These two things combined are dangerous.&lt;/p&gt;

&lt;p&gt;We were missing a pillar.&lt;/p&gt;

&lt;h2&gt;
  
  
  Redesigned Overnight
&lt;/h2&gt;

&lt;p&gt;After figuring this out, we redesigned the interaction overnight.&lt;/p&gt;

&lt;p&gt;Still a terminal, but what you see after opening it is completely different. The agent gives an opening line upon connecting. When researching, it says "I'm looking up relevant information"; when it finds reusable information, it tells you. Every step explains intentions in natural language: what it's preparing to do, how it decided to do it, and what it's currently executing. If it fails, it explains why and why it's changing direction.&lt;/p&gt;
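&lt;p&gt;A minimal sketch of this "explain before you act" pattern (the names, plan, and commands here are illustrative, not our actual implementation):&lt;/p&gt;

```python
# Illustrative sketch: announce intent in plain language before each command
# runs, and explain failures instead of dumping raw stderr. All names here
# are hypothetical.
import subprocess

PLAN = [
    ("I'm checking what's in the current directory.", ["ls"]),
    ("I'm confirming the shell can produce output.", ["echo", "ready"]),
]

def run_step(intent: str, cmd: list[str]) -> str:
    print(intent)  # natural-language intent, shown before anything executes
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # on failure, say why and signal a change of direction
        print(f"That didn't work ({result.stderr.strip()}); trying another way.")
    return result.stdout

for intent, cmd in PLAN:
    run_step(intent, cmd)
```

&lt;p&gt;The agent's underlying actions are unchanged; only the narration layer is new—which is exactly why the redesign could happen overnight.&lt;/p&gt;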

&lt;p&gt;It's no longer lines of incomprehensible commands—it's a collaborator that can talk.&lt;/p&gt;

&lt;p&gt;The agent's capabilities haven't changed, but user feedback is completely different.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Code Walked the Same Path
&lt;/h2&gt;

&lt;p&gt;After we finished, I thought of Claude Code.&lt;/p&gt;

&lt;p&gt;Initially, engineers would examine every line of code it wrote and every command it executed. Some, still uneasy, would expand all the collapsed content and check each item. Later, they discovered that 95% of the time it wouldn't mess up, and people started leaving the output collapsed. Executing a bash command displays just one line—you just wait. After that, less and less information was displayed, and no one thought it was a problem.&lt;/p&gt;

&lt;p&gt;Someone on our team told me something. He suddenly realized one day that he had never said no to Claude Code. Every time a permission request popped up, he clicked approve. A 100% yes rate makes the step meaningless, so he directly enabled bypass permissions and let it do its thing.&lt;/p&gt;

&lt;p&gt;This isn't something you can do from day one. Handing over all permissions on the first day would make anyone panic. But after interacting for a while and confirming it won't mess up, trust naturally develops.&lt;/p&gt;

&lt;h2&gt;
  
  
  No Skipping Steps
&lt;/h2&gt;

&lt;p&gt;Building trust with the unknown is a slow process.&lt;/p&gt;

&lt;p&gt;If a product launched on day one with no explanations, automatically executing a bunch of operations on the user's device—even if the results were good—people would freak out. "What's this doing? Will it mess up my stuff?"&lt;/p&gt;

&lt;p&gt;There must be a gradual process. First let people clearly see what the agent is doing and why, confirm it won't cause problems, then slowly let go. You can't skip steps.&lt;/p&gt;

&lt;p&gt;So our product's three pillars are set: &lt;strong&gt;Zero-friction onboarding&lt;/strong&gt;, &lt;strong&gt;extremely powerful AI intelligence&lt;/strong&gt;, and &lt;strong&gt;progressive trust&lt;/strong&gt;. Translated into experience: simple, powerful, friendly, safe, and controllable.&lt;/p&gt;

&lt;p&gt;Only when all three are in place is the product ready for others to use.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/agent-trust-model" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/agent-trust-model&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>product</category>
      <category>thoughts</category>
    </item>
    <item>
      <title>Your Prompts Might Be Undermining Your Agent</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Tue, 17 Mar 2026 03:08:18 +0000</pubDate>
      <link>https://dev.to/skyguan92/your-prompts-might-be-undermining-your-agent-15k</link>
      <guid>https://dev.to/skyguan92/your-prompts-might-be-undermining-your-agent-15k</guid>
      <description>&lt;p&gt;Recently, while working on agent products, I made an interesting observation.&lt;/p&gt;

&lt;p&gt;Traditional software engineering tasks—writing APIs, architecting systems, writing tests—are becoming increasingly straightforward. Most problems follow established patterns, and both past experience and validation methods are fairly mature. Hand a task to a coding agent: it hits a problem, solves it, tests it, evaluates it. The chain is clear.&lt;/p&gt;

&lt;p&gt;For this kind of mechanical work, agents already perform quite well.&lt;/p&gt;

&lt;p&gt;But there's one area that's extremely difficult: the agent's behavior itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  80% of the Value Comes from the Agent
&lt;/h2&gt;

&lt;p&gt;Currently, about 80% of the value in products and applications comes from the agent itself. Frontend, backend, database, deployment—they're all just scaffolding, when you get down to it.&lt;/p&gt;

&lt;p&gt;The paradox is this: the value of traditional software engineering is shrinking, but the new value that has emerged—agent behavior—is precisely what we have the least control over and the least idea how to study.&lt;/p&gt;

&lt;p&gt;How exactly do you build a good agent? I think this question will haunt us for quite some time to come.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three Years, Five Paradigm Shifts, None Stuck
&lt;/h2&gt;

&lt;p&gt;In the three years since ChatGPT emerged, paradigms have shifted constantly. Approaches that seemed settled turn out to be lacking after a while, and then everyone switches.&lt;/p&gt;

&lt;p&gt;First came prompt engineering. Everyone's instinct was to research how to write good prompts, pondering daily how to make AI more obedient and how to embed it into business workflows.&lt;/p&gt;

&lt;p&gt;Then came RAG. This was mainly to solve the problem of insufficient context. Short windows, expensive context—8K or 32K was considered good. To make AI more useful, people fed it sliced-up knowledge. This direction had its moment of popularity, then suddenly no one talked about it anymore.&lt;/p&gt;

&lt;p&gt;The ceiling was too low. No matter what you did, you couldn't reach ideal results. Agent accuracy would hit 80%, 85%, then couldn't be pushed higher.&lt;/p&gt;

&lt;p&gt;Dissatisfied with RAG, everyone turned to fine-tuning—fine-tuning plus RAG, trying to make agent behavior more controllable and predictable. To this day, there aren't many cases that look convincing from an ROI perspective.&lt;/p&gt;

&lt;p&gt;Then came knowledge graphs. People felt plain text vector search was too simple and the relationships between pieces of information not rich enough, so Microsoft proposed a graph-based solution (GraphRAG). The framework didn't take off. It definitely helps, but the cost and speed are unacceptable. I once watched a demo where running a task took 5 to 10 minutes, burning massive amounts of tokens, just to answer a trivial question. Everyone wants both accuracy and efficiency; pursuing just one isn't an option.&lt;/p&gt;

&lt;p&gt;Then came the recent wave: context engineering. Models' reasoning capabilities grew stronger, and context windows expanded from 32K to 128K, 256K, and now they're pushing 1 million tokens. Suddenly no one mentions RAG anymore. With context this long, isn't designing the context well enough? Pull documents on demand, disclose information on demand—what's the point of searching over pre-sliced chunks?&lt;/p&gt;

&lt;p&gt;Models can reason now, context is long enough, agent capabilities are indeed getting stronger.&lt;/p&gt;

&lt;h2&gt;
  
  
  After Getting Good Enough, A New Dilemma
&lt;/h2&gt;

&lt;p&gt;How strong? Now you can let an agent make autonomous decisions, choose tools, execute tasks, working continuously for two or three hours, and most of the time it won't mess things up—it'll give you decent results.&lt;/p&gt;

&lt;p&gt;Last year, no one would have dared to imagine this.&lt;/p&gt;

&lt;p&gt;But precisely because it has reached this level, expectations have risen. Take embedding an agent into a system, whether a personal OS or an enterprise business system: the capability is strong, and I believe it's strong. But how do you manage it? How do you make it continuously improve?&lt;/p&gt;

&lt;p&gt;It's increasingly like managing a digital employee. It works fast, stays enthusiastic, and puts in overtime without rest. You can't help but feel uneasy—it's this capable, so how should I collaborate with it?&lt;/p&gt;

&lt;p&gt;Think about it another way: if you can lead it to create greater value—making it perform better with you than it would elsewhere—you are amplifying it. And amplifying its value solidifies your own position.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your Prompts Might Be Hurting It
&lt;/h2&gt;

&lt;p&gt;There's a very counter-intuitive phenomenon.&lt;/p&gt;

&lt;p&gt;Many people write prompts for agents that are extremely detailed—step 1 do this, step 2 do that, 1-2-3-4-5-6. The result is that agent performance drops off a cliff. Not just a little, but drastically.&lt;/p&gt;

&lt;p&gt;The reason is simple: the level of thinking in your prompts may not match its own decision-making level. What you wrote becomes shackles. It was doing fine originally, but your instructions drag it down, and its adaptability weakens as well.&lt;/p&gt;

&lt;p&gt;What method should we use to equip agents with context? How do we optimize their behavior? Should we do RL-related research? Honestly, there's no standard answer. Everyone sits there saying, models are getting stronger, great. But how to observe agent behavior within a system and improve it toward goals—everyone is still scratching their heads.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Decisive Factor Is Shifting
&lt;/h2&gt;

&lt;p&gt;For example, metrics like task success rate—how do you push that up? And once it's up, how do you drive costs down?&lt;/p&gt;

&lt;p&gt;From a product perspective, the decisive factor has shifted from code to agent. Whoever has better agent performance wins. If you have good performance and low costs too, then there's no contest.&lt;/p&gt;

&lt;p&gt;Some companies in the industry are doing research in this area. Not necessarily training models from scratch—many improve agent behavior through post-training on data, or by adding a runtime layer around the model. Like MiroThinker: though the company isn't very famous, their research direction is interesting, trying to build differentiation through product capability at the level of agent behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Direction
&lt;/h2&gt;

&lt;p&gt;Starting from 2026, I believe agent behavior will become a genuine product and technology direction.&lt;/p&gt;

&lt;p&gt;The things traditional software engineers do will be drastically compressed within this year. But it's not that there's no direction left. The gap between products will ultimately manifest in agents—whose agent performs well and costs little.&lt;/p&gt;

&lt;p&gt;If you're worried your original skills are being replaced, my advice is to study agents.&lt;/p&gt;

&lt;p&gt;This is a genuinely hard problem. Coding agents can help you write code and build products, but they can't help themselves optimize their own behavior. The results differ from what you imagined, there's little certainty, and you don't know how to continuously improve. But it's precisely because it's hard that real differentiation is possible here.&lt;/p&gt;

&lt;p&gt;People who can really tune agents well are too few right now.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/agent-behavior-is-the-moat" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/agent-behavior-is-the-moat&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>product</category>
      <category>thoughts</category>
    </item>
    <item>
      <title>Productivity Surplus, Structural Scarcity of Focus</title>
      <dc:creator>guanjiawei</dc:creator>
      <pubDate>Mon, 16 Mar 2026 03:36:53 +0000</pubDate>
      <link>https://dev.to/skyguan92/productivity-surplus-structural-scarcity-of-focus-4oo5</link>
      <guid>https://dev.to/skyguan92/productivity-surplus-structural-scarcity-of-focus-4oo5</guid>
      <description>&lt;p&gt;Recently, I've been working intensively with coding agents, and I made a discovery that surprised me.&lt;/p&gt;

&lt;p&gt;AI excels at getting things to 80% or even 90%. Give it a task, and it quickly produces something that looks decent. But "looking decent" doesn't equal value. Software engineering has no objective standard of perfection—zero-bug code that nobody uses is worthless. Direction matters far more than execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two States
&lt;/h2&gt;

&lt;p&gt;During this period, I've been switching back and forth between two states.&lt;/p&gt;

&lt;p&gt;When my hypothesis is clear, I basically can't stop. I know what I need to validate, and the coding agent helps me rapidly build the relevant components, running end-to-end experiments with the entire pipeline compressed to the hourly level. I can complete two to three rounds of hypothesis correction per day, with ideas being confirmed or refuted within hours. This rhythm is addictive.&lt;/p&gt;

&lt;p&gt;But there are also days when I sit there not knowing what to do. Without something specific I want to validate, I dig up previous projects and have the agent polish this part or fix that part. It looks like I'm busy, but I know in my heart that this is just idling.&lt;/p&gt;

&lt;p&gt;Both states use the same set of tools. The difference lies entirely in whether there's a clear question in my mind that needs answering.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Funnel Collapses
&lt;/h2&gt;

&lt;p&gt;Traditional product management has a classic concept called the ideation funnel.&lt;/p&gt;

&lt;p&gt;There's a "fuzzy front end" phase at the beginning, where you generate lots of ideas and then filter them. Why filter? Because implementation downstream is too expensive. Taking a product from concept to launch typically requires 3 to 6 months of development. Since implementation is expensive, you need strict gatekeeping upfront.&lt;/p&gt;

&lt;p&gt;This logic no longer holds now.&lt;/p&gt;

&lt;p&gt;An MVP can be built in a day plus a few hundred dollars in API costs. From the birth of an idea to having someone actually use it takes a week or even just a few days. Implementation isn't expensive anymore, so the premise for screening disappears. Previously, ideas were cheap and implementation was expensive; now it's the opposite.&lt;/p&gt;

&lt;h2&gt;
  
  
  Suppressed for Too Long
&lt;/h2&gt;

&lt;p&gt;The work environments of the past have actually been suppressing this impulse to "propose ideas and validate them."&lt;/p&gt;

&lt;p&gt;Most people were required to execute. If you had too many ideas, you'd likely hear: "What's the point of thinking so much? Just do your current job well." In an era of limited resources, everyone was competing for execution resources, not ideas. So many people's ability to generate hypotheses has atrophied.&lt;/p&gt;

&lt;p&gt;Now the agent era has arrived, and execution capacity is suddenly unlimited—a surplus of productivity. But this surplus productivity doesn't know where to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottleneck Shifts
&lt;/h2&gt;

&lt;p&gt;In the value chain, AI has amplified certain links by tens or hundreds of times. The links that weren't amplified become the new constraints.&lt;/p&gt;

&lt;p&gt;Previously, when building products, development took the bulk of the time, and everyone was waiting on development. Now development is compressed, and other places become bottlenecks. Do you have good enough ideas? How do you get the product into users' hands? Steps that previously seemed insignificant have become the slowest part of the entire chain. No matter how fast the other parts are, they have to wait here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Product Managers Have Changed
&lt;/h2&gt;

&lt;p&gt;"Everyone is a product manager" is no longer just a slogan now.&lt;/p&gt;

&lt;p&gt;Product managers used to be coordinators, managing processes, helping others realize their ideas. Now a different capability is needed: the ability to judge what has value, to break down a general direction into the smallest testable hypotheses, and then test them one by one—correct when wrong, build upon when right.&lt;/p&gt;

&lt;p&gt;What makes this scarce? It's not methodology. It's drive: the drive to get your hands dirty, direct agents to validate, adjust after hitting walls, and keep trying.&lt;/p&gt;

&lt;p&gt;When you have this drive, AI feels completely different. It's not about "efficiency gains"—it's that your ideas can become reality. Previously, you had to convince people, fight for resources, wait for scheduling. Now you don't need to convince anyone. Productivity is in surplus; what's scarce are people who know where to apply it.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Originally published at &lt;a href="https://guanjiawei.ai/en/blog/productivity-surplus" rel="noopener noreferrer"&gt;https://guanjiawei.ai/en/blog/productivity-surplus&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>thoughts</category>
      <category>product</category>
    </item>
  </channel>
</rss>
