
guanjiawei

Posted on • Originally published at guanjiawei.ai

AI Has Turned Ignorance into an Advantage

Today, I ran into two things—one that made this era feel a little unreal, and one that made it feel very real.

I. A 23-Year-Old Outsider Cracked a 60-Year-Old Conjecture

Liam Price is 23 and has no advanced mathematical training. On a Monday afternoon a week ago, he casually tossed an unsolved Erdős problem to GPT-5.4 Pro. Problem #1196, a conjecture about "primitive sets," had stumped mathematicians for 60 years. In a single session of about 80 minutes, the model produced a proof.

Cambridge undergraduate Kevin Barreto helped organize the effort. Later, Jared Lichtman and Terence Tao helped simplify the proof, distilling the key insight from the LLM's originally rough output. Terence Tao's comment was rather cutting:

"Humans had looked at this problem. Everyone who looked at it collectively went down the wrong path at step one."

It wasn't that no one had thought about it; it was that everyone who had thought about it made the same mistake. The path the LLM took was common knowledge in an adjacent mathematical field—no one had bothered to bring it over.

Recently, a term has started trending: vibe-math. It means you don't really understand the field, so you toss the problem into ChatGPT and watch it fumble its way toward an answer.

That same week, GPT-5.4 Pro also scored 150 on the Mensa Norway IQ test, surpassing 99.96% of humans. OpenAI's previous high score was o3 at 136. A 14-point jump in one year.

When AI consistently surpasses 95% of human experts, "I don't understand anything, so I'll just try" becomes a structural advantage.

II. Why "Knowing the Trade" Becomes a Constraint

Here's a real feeling from a recent project of mine.

You ask AI to evaluate a project. It lays out a plan: step one, one to two weeks; step two, two to three weeks; totaling three months. It looks reasonable because this is the timeline a human who had done this before would write.

But if you actually let it execute that plan, it finishes in a few days.

When evaluating, it imitates human "priors"; when executing, it's an entirely different creature.

Back to Liam Price. The mathematicians who had looked at Erdős #1196 before him all opened with the same set of moves, because that opening had been considered the "standard approach" to this type of problem for 60 years. That's the prior. Priors used to be good things—shortcuts that saved time. But once the underlying tool changes, the prior binds you instead.

Someone deeply embedded in a field has an intuitive sense of "cost" and "difficulty"—how many people, how much money, how much time. That intuition determines whether they're willing to even think about a problem.

Imagine someone from the Ming dynasty trying to conceive of a moon landing program.

They'd need to coordinate remotely among the top astronomers and rocket engineers (a profession they'd never heard of) across several nations, iterating through trial and error. Just thinking about getting a letter to those people and waiting for a reply makes the whole thing inconceivable. It's not that Ming dynasty scholars lacked ability; they gave up thinking about it from day zero.

Your "priors" today work the same way. The ROI is too low on one thing; another requires crossing too many organizational layers; another simply takes too long—so you don't even consider it.

III. Outsiders Don't Carry That Burden

Outsiders don't know what's difficult.

To them, having AI solve a 60-year-old math conjecture and having AI write an automated weekly report or put together a PowerPoint deck feel roughly the same in subjective difficulty. They can't do either anyway, so both seem hard. So they're willing to try both. Trying doesn't cost much: just some tokens burned. What if it works?

Land one hit, and they've advanced further than someone who spent a lifetime digging in that field.

I previously wrote AI Doesn't Amplify Skill, It Amplifies Passion. That piece argued that passion determines whether you're willing to keep investing. This time I want to press the point further: before you even get to "willing," the question of "do you think this is difficult?" filters out most people. Outsiders bypass that filter.

Recently, Luo Fuli and Zhang Xiaojun talked for 3.5 hours, mainly about the paradigm shift from pre-training to post-training and how organizations should change. A shared feeling emerged from the conversation: the people moving fastest right now are those with less baggage and fewer priors about "I know how hard this is." AI has stuffed resources that used to be mobilizable only by top institutions into the hands of individuals, almost for free. This sense of freedom is rare in history.

IV. Two Strangers in an Elevator

I need to record another thing from today.

Tonight, on my way downstairs for dinner, I overheard two strangers chatting in the elevator. One said, "I'm in a bad mood; I got laid off today." The other replied, "Oh, I got laid off too." From the 9th floor to the 1st, just a few dozen seconds. I was still doing the math on those odds as I stepped out.

This is the other side of this era. Some people used AI to cross thresholds they could never have entered before; others had jobs they were doing just fine at vanish overnight.

At the corporate level, two paths are already clear.

The Walmart CEO publicly stated that the company's total headcount will remain basically unchanged over the next three years. The cost is that all 1.6 million employees must undergo AI training, with $1 billion invested in upskilling in partnership with Google and OpenAI. Meaning: no layoffs, but everyone must change.

ByteDance is taking a different approach. Their self-developed AI IDE, TRAE, has long since passed one million monthly active users internally, with over 80% of engineers using it daily. In the Douyin local services line, AI-written code already accounts for 43%. Not 100% yet, but the direction is clear: let AI write what it can write, and free up humans for review and judgment.

The two paths are actually two sides of the same coin: either turn every employee into a "100x worker" who can use AI, or directly cut the links that are no longer needed. Which path you're on determines whether today's news looks like opportunity or threat.

V. The Truth About Polarization

The picture looking back is roughly this.

Niche fields are being cracked open. Mathematical research, biomedicine, materials science, low-level programming—these fields that used to require decades of know-how before anyone dared touch them have suddenly seen their barriers lowered. A 23-year-old cracked a 60-year-old conjecture. Next might be a high schooler doing something non-trivial in medicine or boosting the efficiency of some device. This used to exist only in science fiction.

Mass-market fields no longer need as many people. Weekly reports, PowerPoints, junior-level code, customer service, basic copywriting—AI can do all of it, decently well, and faster and faster. The number of people needed will keep dropping.

The ones who suffer most are those who are already very skilled in mass-market fields but lack the motivation to enter new ones. Their skills are fine; their direction is the problem—with AI in hand, they still only use it to deliver assignments more smoothly.

VI. Add Something New to the World, or Make Your Deliverables Prettier

The response you can make isn't actually complicated. Pick something you gave up on in the past because it "sounded too hard," bring AI in, and do it again.

But which thing you choose makes a difference.

Making the weekly report smoother, the PowerPoint flashier, the existing process slightly more efficient—AI can help with all of that. But these things are still essentially just deliverables. One more pretty weekly report or one fewer makes no difference to humanity. The marginal output of AI on such tasks, when it lands in the outside world, is basically zero.

The things that truly add something to the world are another category: solving a math problem stuck for decades, building a small tool that solves a real problem, poking at a direction you're merely curious about but not credentialed in. Liam Price at 23 cracked a 60-year-old conjecture; the next person to make some device's performance jump 10% or to give an overlooked population a usable tool might very well be an ordinary person.

You don't have to do something that shakes academia; there's a vast middle ground. But you must choose worthwhile things—things worthy of your scarcest resources: your time and attention.

My criteria for whether something is worth doing have changed. I used to ask first: "Can I pull it off?" "Can I deliver it?" "Is there a story to tell?" Now I ask more directly: If this gets done, does the world gain something real?

AI has given every curious person access to what used to be the most expensive resource, almost for free. The irony is that most people, on receiving this leverage, instinctively use it to deliver assignments more efficiently rather than to do something genuinely useful for humanity.

Using this leverage to add something to the world weighs far more than using it to make your deliverables prettier.

Seizing this freedom matters more than worrying about the probability of being laid off.

Originally published at https://guanjiawei.ai/en/blog/amateur-advantage
