DEV Community

Dimension AI Technologies


Something Big Is Happening: a Response

#ai

A response to a viral essay – agreeing with the urgency, adding precision about what these systems are and what they are not. TL;DR at bottom.

I. Yes, Something Big Is Happening

Matt Shumer's excellent and thought-provoking article "Something Big Is Happening" captures something that many of us who work with AI daily rarely articulate: paid-tier "AI" models in early 2026 are qualitatively different from those of eighteen months ago; and the gap between what they can do and what is commonly thought they can do is wide and still widening.

Coding workflows have been transformed. Legal, financial, analytical and programming tasks that once took hours or even days now take minutes. Anyone who dismissed these tools after a brief encounter with free-tier ChatGPT in 2023 owes it to themselves to look again, because they'll find capability stunningly stronger than what they remember.

The Steam-Powered Robot of 1868

We share Shumer's view that something important is happening and that underestimation of it is widespread. Where we'd like to add some supporting perspective is on what, precisely, is happening — to help clarity and form emerge from the dust-cloud of early-stage change. We suggest that "cognitive automation is scaling rapidly" does not automatically mean "autonomous intelligence is emerging". This may sound like a technicality, but grasping it will shape predictions, policy and career decisions through a period of change as fundamental as any in living memory.

Our view is that this new wave of technology is a profound expansion of cognitive automation — possibly the most significant technology in human history. Understanding what that means matters for anyone making life-decisions: everyone.


II. A Framework For Understanding: Nine Ideas

Here is our own account, augmenting Shumer's essay by identifying nine developments that, taken together, amount to a genuine transformation in what computers can do and what humans will need to do.

They fall into three groups:

  1. An Interface Revolution: how computers and humans can now talk to each other and what they can do with data.
  2. Labour Reallocation: how the relationship between people and software is being restructured.
  3. "The Great Levelling": an economic re-balancing in which power has suddenly shifted, and what this means for who can build what, and where.

Group 1 – The Interface Revolution: What Computers Can Now Do

1. Computers have learned to speak human language. For sixty years, humans had to learn the languages of computers — programming languages, query syntax and command-line interfaces — to make them do useful work. But not everyone can be – or wants to be – a computer scientist. That relationship has now reversed: machines can now receive and produce natural human language and be functionally useful across a vast range of tasks. This sounds simple, but its consequences are enormous: it eliminates the cognitive burden of having to express your thoughts in computer code. This removes the translation layer that has, for over half a century, stood between human intent and machine execution.

2. Unstructured data has become operable. This may be the most consequential development of all, yet it receives surprisingly little public attention. The majority of the world's information is unstructured. It sits in formats that computers can store but cannot meaningfully process: documents, emails, PDFs, meeting transcripts, legal filings, medical records. This enormous corpus of knowledge has never been queryable by machine beyond simple word-matching or the limited tools of traditional Natural Language Processing, which delivered useful results in narrow domains but never achieved general operability over arbitrary text.

Large Language Models have upended this. You can now point a system at a body of unstructured text and ask questions on a pseudo-semantic basis — approximating an understanding of meaning and context, rather than merely matching keywords. The result is imperfect but genuinely transformative: legal discovery, regulatory compliance, financial due diligence and medical literature review are already being reshaped by the fact that machines can now read documents and do something useful with what they find. Much of the real economic value is already accumulating here — in the mundane task of making the world's information accessible to computation for the first time. We do not need true Artificial Intelligence for this to happen and to be enormously valuable.
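To make the contrast concrete, here is a minimal sketch of the pre-LLM baseline described above: ranking documents by simple word overlap. Everything in it (the sample documents, the query, the scoring function) is invented for illustration; real LLM pipelines replace this surface matching with learned semantic representations, which is precisely what makes arbitrary text operable.

```python
# Baseline "simple word-matching" over unstructured documents.
# Illustrative data only; no stemming, no semantics.

def keyword_score(query: str, document: str) -> int:
    """Count how many query words literally appear in the document."""
    doc_words = set(document.lower().split())
    return sum(1 for w in query.lower().split() if w in doc_words)

docs = {
    "contract": "the supplier shall indemnify the buyer against all claims",
    "memo": "quarterly revenue rose while operating costs fell",
}

query = "who indemnifies the buyer"
ranked = sorted(docs, key=lambda name: keyword_score(query, docs[name]), reverse=True)
print(ranked[0])  # "contract" wins, but only on surface words:
# note the scorer misses "indemnifies" vs "indemnify" entirely.
```

The failure mode is visible even in this toy: the scorer cannot see that "indemnifies" and "indemnify" are the same concept. Semantic querying over meaning, rather than strings, is the step change the paragraph above describes.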

3. The mechanical work of computing is being automated. Coding, configuration, diagnosis and debugging — the production layer of software development — are increasingly handled by machines. Tasks that would have occupied a developer for a full day now emerge in minutes, with quality that often requires minimal revision. This human time can be reallocated to areas where machines still can't match humans, such as creativity and relationships. This sweeping change already extends well beyond the software industry, because code is the substrate of the digital world we all use. Faster, cheaper code production accelerates every domain that depends on software — which in 2026 means all of them.

Interlude: Why This Is Not How Human Intelligence Writes

Human writing is typically goal-directed: you begin a sentence with an intended point, maintain an argument, and aim toward an endpoint you can roughly see. You can revise mid-stream, but the act is guided by a plan, even if only a loose one.

An LLM does not work that way. It does not start a sentence knowing where the sentence will end, or start a paragraph holding an argument in mind and steering toward a conclusion. It generates one token at a time, conditioned on the text so far, with no privileged access to "what it is about to say" beyond whatever patterns are statistically activated by the current context.

This is hugely important, because it explains a recurring phenomenon: outputs that look like coherent argumentation can still drift, contradict themselves, or smuggle in unstated assumptions. The model is not "pursuing" a claim. It is producing locally plausible continuations that often resemble purposeful reasoning because it has learned the surface form of purposeful reasoning from human text.
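A toy sketch makes the mechanism visible. The bigram table below stands in for a real model (the vocabulary and probabilities are invented for illustration): each step samples the next token from a distribution conditioned only on the context so far, with no representation of where the sentence is headed.

```python
import random

# Invented conditional probabilities standing in for a trained model.
BIGRAM = {
    "<s>": {"the": 0.9, "a": 0.1},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def generate(seed: int = 0) -> list[str]:
    """Emit tokens one at a time, each conditioned only on the previous one."""
    rng = random.Random(seed)
    tokens, current = [], "<s>"
    while current != "</s>":
        dist = BIGRAM[current]
        # No plan, no endpoint: just a sample from the local distribution.
        current = rng.choices(list(dist), weights=list(dist.values()))[0]
        if current != "</s>":
            tokens.append(current)
    return tokens

print(" ".join(generate()))
```

Real LLMs condition on a vastly longer context through a transformer rather than a lookup table, but the control flow is the same loop: sample, append, repeat. Nothing in the loop holds a conclusion in mind.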

Group 2 – Labour Reallocation: How This Changes the Relationship Between People and Software

4. The abstraction layer has moved up. Software engineering has a well-known framework called the V-Model of Testing: at the bottom, writing and unit-testing code; moving upward through integration, system design, requirements specification, and acceptance testing.

The V-Model of Testing emerged in the 1980s and 1990s

Historically, most computing professionals spent their time near the bottom, doing mechanical production work. And a lot of developers will say they actively dislike coding: it is cognitively difficult and requires long hours that are generally not compatible with family life. Many technologists find themselves moving away from hands-on coding somewhere in their 30s as a result.

Language models have promoted the entire profession upward. Human work increasingly sits at the upper levels — conceiving, specifying, designing, reviewing, testing, and accepting — while the translation of a clear specification into working software is automated. The human role is shifting from production to direction and judgment: a change in kind, rather than merely degree. It has also extended careers: technologists can continue supervising and reading code well beyond their thirties. And it is even bringing former programmers back into active service, to help with older languages such as COBOL or C++.

5. The barriers to entry have dropped, and the specification gap has narrowed sharply. When the barrier to producing working software drops from "years of training in programming languages" to "the ability to describe what you want in plain English," the pool of people who can participate expands enormously. A domain expert who understands a problem deeply can now build a working tool to address it, without learning Python or hiring a developer as intermediary. Being expert in computer systems architecture or the latest frameworks and libraries shifts from "must-have" to "nice-to-have".

This connects directly to one of the oldest problems in software: the gap between what the customer wants and what the developer builds. They often talk past each other. Requirements are written, discussed, revised, misinterpreted, built wrong, sent back, and revised again — round after round that often ends with something that technically matches the brief but misses the point entirely. It is rather like walking into a pub, ordering a pint of bitter, and having the barman hand you a glass of Chardonnay because, in his professional judgment, that's what you probably need.

This gap is now far cheaper to surface and close, because iteration is cheap. A domain expert can sit down with an AI tool and produce a working example directly: here is what I want, here is how it should behave, here is what it should look like. There is no intermediary to misunderstand, no ping-pong of specification documents that accumulate ambiguity with every round, no techies trying to understand the creatives. The person with the problem can illustrate the solution themselves, in concrete and testable form. The specification gap has always been one of the largest hidden costs in software — responsible for wasted effort, failed projects, and the pervasive frustration of receiving something other than what was asked for. Cheap, rapid prototyping has compressed it dramatically. There is a new risk to be honest about, though: we may be trading a translation gap for a wisdom gap. If the user is unskilled in logic or systems thinking, they may describe a solution that is precisely what they asked for but fundamentally broken in ways a human developer would have caught. The intermediary was sometimes also a sanity check. As the barriers drop, the need for clear thinking about what you actually want — and whether it makes sense — becomes more important, not less.

6. This will change who goes into computing. The shifts described in points 3 through 5, taken together, will attract different people with different skills into technology. The field is likely to become more diverse, more domain-expert-led, more creative, more experimental, and less reliant on traditional computer science training, as people with deep knowledge of law, medicine, finance, logistics, or education find they can build tools for their own domains directly. This long-tailing of talent and ideas is one of the most positive long-term consequences of the current wave of AI development, and another that is so far little discussed.

Group 3 – The Economic Re-Levelling: What This Means More Broadly

7. Enterprise capability is now available to everyone: the Great Levelling. For decades, complex software development — data integration, sophisticated analytics, multi-system workflows, automated testing and CI/CD — was the preserve of large corporations with dedicated IT departments and enterprise budgets. Small businesses could see what was possible, but the cost placed it out of reach. They were locked out of the technology revolution, noses pressed against the window.

That gap is now closing. The same AI tools reducing headcount in large engineering teams are simultaneously putting enterprise-grade capabilities into the hands of small businesses and indie developers, anywhere in the world, at a fraction of the former cost. A two-person consultancy can now build tools that would have required a dedicated team and a six-figure budget three years ago. A sole practitioner in a regional town has access to the same capabilities as a team of fifty in a City of London office. A mid-size manufacturer can implement in two weeks what an external consultant said might take two years.

This is also where an old economic observation may reassert itself: Jevons' Paradox. When the effective cost of a capability collapses, it does not necessarily reduce the amount of that capability used and often increases it. Cheaper computation produced vastly more computation; cheaper bandwidth produced vastly more bandwidth consumption; and they combined to create entirely new categories of activity – such as social media, streaming and today's AI explosion.

There is a related and underappreciated fact about computing more broadly. For over half a century, the price of computation per unit has fallen relentlessly, while the economic value created by computation has risen just as relentlessly. The cost per transistor, per floating-point operation, per gigabyte stored or transmitted has trended toward zero in real terms. Yet total spending on computing infrastructure — hardware, software, cloud services, data centres — has continued to rise.

Far from being a contradiction, instead this reflects a structural property of general-purpose technologies. If a capability becomes cheaper, it does not become less valuable; it becomes embedded in more processes, more industries, and more decisions. Electricity, telecommunications and computing are all following this pattern.

If AI reduces the marginal cost of producing software and analysis, we should not expect a shrinking of the digital economy. We should expect its further expansion. Lower unit prices and higher aggregate value can coexist for decades, provided the capability continues to unlock new domains of use.

If the barriers to coding fall far enough, we should not assume "less software" or "less enterprise IT". We should expect more: more internal tools, more automation, more bespoke workflows, more integrations, and more experimentation inside small firms that previously could not justify any of it. Capability becomes omnipresent – almost ambient – and usage expands to fill it.

We think of this as "The Great Levelling". If giant corporations need fewer developers, the talent freed up flows outward — to smaller firms, new ventures, and parts of the economy that have struggled to keep pace with technology. The transition will feel like a rupture at times, and we should be under no illusions about that. But the longer shift is toward something vastly more positive: a world where the ability to build powerful software is no longer gated by organisational scale or geography. When the cost of producing value drops by orders of magnitude, more of it gets produced and the cost of experimenting and being wrong all but vanishes.

8. The combined effect is larger than any single prior computing development. Each development above is individually significant. Together, they represent a bigger shift than the personal computer, the database, email, the spreadsheet or the internet, considered individually.

We make this claim for a specific reason: natural language as a programming interface lowers the barrier to entry; unstructured data becoming operable unlocks the majority of the world's information; automated code production compresses delivery timescales; cheap prototyping closes the specification gap; and the levelling of capability between large and small firms redistributes who can build what. No single prior computing innovation did all of these things simultaneously. Acting in concert, these changes are reshaping the economics of knowledge work at a pace and scale that may rival or exceed the impact of the internet to date.

9. This is a revolution. Taken together, these nine developments form a transformation whose full scale we are only beginning to appreciate. Whether it ultimately surpasses the internet in economic impact remains to be seen — but the breadth of what is changing, and the speed at which it is changing, already has no precedent in the history of computing.


III. A Distinction Worth Making: Capability and Intention

There is a distinction that deserves more attention in the current public conversation about AI, because it clarifies a great deal: the difference between what these systems can do and what they are.

What the Architecture Actually Does

At their core, large language models predict the next token in a sequence, and they do this extraordinarily well — well enough to produce outputs that are, across many tasks, indistinguishable from competent human work.

The mechanism is statistical pattern-matching over vast quantities of human-generated text. During training, the model learns associations between language patterns — associations rich enough to capture something resembling understanding of meaning, style, argumentation, and domain knowledge. When the output looks like good legal reasoning, elegant code, or sound medical advice, it is drawing on these learned associations, and the results can be remarkable. But without that training on human-produced knowledge, they are themselves incapable of anything. They merely reflect us back at ourselves, albeit very cleverly.

The Syntax-Semantics Gap: Symbols vs. Reality

It is vital to remember that these models operate entirely within the realm of syntax: pure text. They have no understanding of what the text means. They are extraordinarily sophisticated at manipulating symbols — words, numbers and code — based on their statistical relationships to other symbols. These relationships have been defined by humans, not machines. They possess no "grounded" model of the physical or causal world. They are, in essence, merely shuffling paper.

If you ask an "AI" to describe the trajectory of a falling glass, it is unable to simulate gravity or calculate the structural integrity of the floor. It will merely predict the most likely words a human would use to describe that event. This means they lack what engineers call Causal Reasoning. Because they don't understand why things happen in the real world — only how people talk about them — they are prone to "hallucinations" that are syntactically perfect but physically or logically impossible. They don't even know if they're right or wrong.

How Modern Systems Create the Appearance of Intention

Modern AI systems layer additional capabilities on top of this base. Instruction tuning teaches the model to follow directions. Tool use lets it interact with search engines, code interpreters, and databases. Agent architectures decompose complex instructions into sub-tasks, execute them sequentially, evaluate intermediate results, and iterate. Feedback from human evaluators shapes the model's behaviour toward outputs that people judge to be helpful and accurate.

The result is behaviour that strongly resembles intentional action. When an agent receives a complex brief, breaks it into steps, executes each step, checks its own work, and revises its approach, the resemblance to purposeful human work is close enough that many might mistake it — understandably — for "judgment" and "taste."

This resemblance is worth acknowledging honestly. These systems behave as if they have goals, and for many practical purposes the distinction between "has a goal" and "behaves as if it has a goal" may seem immaterial. If the output is good, does it matter why? We should concede the functionalist point here: in terms of impact, a system that behaves with perfect agency is indistinguishable from one that possesses it. If an AI agent autonomously completes a week-long project, the economic and social consequences are the same whether or not there is "something it is like" (a "quale") to be that agent. The distinction between capability and intention is enormously important for understanding the source of risk and for designing governance — but it does not diminish the scale of disruption.

But three things are worth noting. First, many researchers working on long-horizon planning, robustness and grounded world-models view LLMs as incomplete foundations for AGI, even while acknowledging their economic impact. Second, we do not need true AI for these tools to be extremely useful. Third, performance is uneven: these systems can look astonishing on unfamiliar ground and mediocre on your home turf. People are typically greatly impressed by LLMs' pronouncements on unfamiliar topics, but mightily unimpressed by their output in subjects where the user is expert. We must use these tools carefully, without overstating what they are and without worrying that rapid task compression automatically implies near-term wholesale role deletion. We are still useful.

Where the Direction Comes From

The goals, evaluation criteria, and decisions about what to ask an AI to build next all originate with human researchers, human institutions, and human capital allocation. The system pursues whatever goal it has been given and, as of 2026, that goal itself always comes from outside: from a human, who has a need. LLMs, as of early 2026, are not capable of "self-generating thought" or "self-direction".

A thermostat maintains temperature — without wanting warmth. Evolution produces adaptation — without wanting survival. Gradient descent reduces loss — without wanting improvement. In each case there is directionality, optimisation, and behaviour that looks purposeful. Yet in each case the direction is supplied by the structure of the system, rather than by any internal experience of desire. The system has no awareness of what it is doing. It is entirely mechanical. But — and this is important — a thermostat connected to a global energy grid with a bug in its optimisation function can freeze a city without ever wanting to. Scale and connectivity transform the consequences of mechanical systems, even absent intention. The same applies here.
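The thermostat point can be made in a few lines of code. This sketch (the setpoint and deadband values are arbitrary) shows behaviour that looks purposeful arising from a rule with no wants at all; the direction comes entirely from the setpoint a human supplied.

```python
def thermostat_step(temp: float, setpoint: float, heater_on: bool) -> bool:
    """Decide the heater state from the current temperature alone."""
    if temp < setpoint - 0.5:
        return True     # too cold: switch the heater on
    if temp > setpoint + 0.5:
        return False    # too warm: switch it off
    return heater_on    # inside the deadband: no change

# "Maintains temperature without wanting warmth":
print(thermostat_step(17.0, setpoint=20.0, heater_on=False))  # True
print(thermostat_step(23.0, setpoint=20.0, heater_on=True))   # False
```

Directionality lives in the comparison against the setpoint; desire appears nowhere. LLM-based systems are enormously more elaborate, but their objectives are supplied from outside in exactly this sense.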

Current AI systems sit in this same category. They are extraordinarily capable optimisation engines but the optimisation they perform is directed by human choices: human instructions, human-designed reward functions, human-defined objectives. The direction they travel is, in every meaningful sense, set by us. Human needs, human life.

Calling this "lack of intention" does not mean the systems are harmless. It means the risks come from mis-specified objectives, misuse and institutional incentives rather than from spontaneous self-motivated behaviour. The danger is human error and human recklessness, not machine volition.

This Matters for the Future

If these systems were genuinely self-directing — choosing their own objectives and determining their own trajectory — then the future of AI would unfold according to its own logic, and exponential extrapolations about autonomous capability might apply.

But if the trajectory depends on continued human decisions about funding, energy infrastructure, regulation and research direction, then the future is fundamentally a story about human choices, subject to all the familiar constraints that shape every other technology. There is still a human hand on the tiller.

We should be candid, though, about the limits of that reassurance. A hand on the tiller matters less if the vessel is a supertanker that takes five miles to turn. Market pressure, military competition, the sunk-cost logic of trillion-dollar infrastructure investments, and the sheer speed at which AI accelerates its own development cycle all create structural momentum that may prove stronger than any individual act of steering. The intention may be ours, but the current is powerful, and it has its own dynamics. Saying "humans are in control" is true today. Whether it remains practically true if development velocity continues to increase depends on whether the machines do develop the capability for self-direction and self-generated thought.

We think the evidence strongly favours this second view, and we find this encouraging, because it means the outcome is ours to shape through policy, investment, and institutional design — something that we do, rather than something that will simply be done to us.

A Note on "AI Building Itself"

Shumer highlights OpenAI's statement that GPT-5.3 Codex "was instrumental in creating itself," and correctly identifies this as a pivotal moment. We'd offer a supporting insight.

In concrete terms, OpenAI's engineers used the model as a tool during its own development — to debug training runs, manage deployment, and diagnose test results. This is impressive and represents a genuine acceleration of research productivity.

We'd frame it as a sophisticated instance of bootstrapping — a practice that has existed in computing since the 1960s, where a tool is used to build the next version of itself. The C compiler has been compiled by earlier versions of itself for decades. Bootstrapping is recursive and creates a real productivity loop, but what it creates is accelerated human-directed development rather than autonomous self-improvement.

It's also related to the concept of "dogfooding", whereby developers of Windows at Microsoft were made to use the unfinished product they were building, to accelerate the ironing out of kinks. The developers both used and improved their own product.

These ideas are important because "AI creating itself" carries a strong implication of agency: of endogenous desire, decision-making and action. We offer this description in the interest of precision: human researchers used a powerful AI tool to accelerate the human-directed process of building the next AI system. That is a remarkable productivity story, and worth understanding on those terms. But it is not truly an AI creating itself. It is an AI being used as a tool to assist in creating itself, under human direction and supervision.


IV. Some Cautions About Prediction

We share the view that change is accelerating, and we think the direction of travel is clear. We'd offer some additional considerations that may help with forecasts.

Task Complexity Changes Character as It Grows

The METR data on AI task completion is genuinely impressive: from ten-minute tasks a year ago to five-hour expert-level tasks with recent models, with the doubling time apparently accelerating. This can be extrapolated forward: day-long tasks within a year, week-long within two, month-long projects within three.
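The arithmetic behind that extrapolation is worth writing down. Using round illustrative numbers rather than METR's actual figures: if completable task length grew from ten minutes to five hours in one year, the implied doubling time is about two and a half months, and month-long tasks follow roughly a year later at the same rate.

```python
import math

start_minutes = 10          # illustrative: task length a year ago
end_minutes = 5 * 60        # illustrative: task length now

doublings_per_year = math.log2(end_minutes / start_minutes)  # ~4.9
doubling_time_months = 12 / doublings_per_year               # ~2.4

# A "month-long" project, taken here as roughly 160 working hours:
target_minutes = 160 * 60
years_to_target = math.log2(target_minutes / end_minutes) / doublings_per_year

print(f"doubling time ~{doubling_time_months:.1f} months, "
      f"month-long tasks in ~{years_to_target:.1f} years")
```

The calculation shows how steep the implied curve is, which is exactly why the qualitative caveats below matter: the extrapolation assumes month-long tasks are "more of the same", only longer.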

We add that five-hour tasks and month-long projects differ qualitatively. A month-long project involves shifting requirements, stakeholder dynamics, ambiguity that can only be resolved through human conversation, political context within organisations, and the maintenance of coherent purpose over a span that exceeds any current model's working memory (if it has any at all) by orders of magnitude. These are features of the task environment rather than the cognitive difficulty of the task itself, and they may respond to scaling in different and less predictable ways.

This is a reason for care rather than scepticism: progress may continue impressively, but along a different curve than straight-line extrapolation from bounded-task benchmarks suggests.

Institutional Dynamics Will Shape the Pace of Adoption

There is a recurring pattern in technology history: capability arrives suddenly but adoption, implementation and deployment take time, because they depend on institutional readiness. Institutions need to manage change carefully and to avoid discontinuities, which prevents change from happening at the pace of the underlying capability.

Regulatory frameworks for AI in high-stakes domains — medicine, law, finance — are being developed but remain immature. In particular, liability for AI-generated errors is legally unsettled; professional licensing regimes will take time to adapt. Organisational procurement, integration and change management are substantial even when the technology works perfectly. These factors suggest adoption will be uneven — faster in some sectors, much slower in others, and everywhere shaped by realities that operate on their own timescales regardless of how rapidly the models improve.

Task Compression and Role Elimination Are Different Things

Most knowledge-work roles are bundles of tasks. A lawyer's job includes reading documents, drafting arguments, advising clients, negotiating, managing staff, building relationships, exercising judgment in ambiguous situations, and bearing personal legal accountability. AI is becoming very good at some of these tasks — but the bundle includes components that are relationship-based, political and accountability-bearing in ways that do not give way to automation in any way that we can currently envision.

When some tasks are automated, the role changes rather than disappears: headcounts will be redistributed, required skills will shift and the value of remaining human contributions will be recalibrated. But we should acknowledge the tipping-point risk honestly: if AI absorbs enough of the task bundle — say, 70% or 80% of what a junior associate or analyst currently does — the remaining tasks (judgment, relationships, accountability) may not sustain the previous headcount or salary floor. Roles may persist in name while the number of people employed in them falls sharply. The distinction between "transformed" and "eliminated" is real, but it can be cold comfort to the people on the wrong side of a headcount reduction. We should all be prepared: virtually every knowledge-work role will be reshaped over the coming years, unevenly, by industry, organisation and jurisdiction – even if models do not improve any further, and even if we never reach true Artificial General Intelligence. These new tools are already powerful enough to bring vast changes. The picture will be messy, and the story will at times unfold in unexpected ways. But that is how economic transformation has always looked from the inside. Revolutions are nerve-wracking and exhausting – but generally work themselves out.


V. What We Actually Believe

We believe this is the most significant expansion of cognitive automation in modern history and that the process has a long way yet to run. These systems are genuinely useful, increasingly capable and already changing the working reality of millions of knowledge workers. Anyone who has yet to engage with them seriously is leaving value and preparation time on the table – and maybe also themselves. Better to be at the table than on it.

We also believe what people insist on calling "AI" are best understood as powerful optimisation engines attached to human goals — operating within architectures designed by human engineers, trained on human-generated data and deployed according to human decisions. They are tools of extraordinary and growing sophistication and the direction they travel is set by us.

This means the future of AI is fundamentally a story about human choices — about funding, regulation, institutional design and the wisdom with which we integrate these tools into our economies. This perspective gives grounds for agency rather than fatalism because the outcome remains ours to shape.

Shumer's practical advice is largely sound and we echo much of it: learn these tools, experiment regularly, build financial resilience, help your children develop adaptability rather than optimising for career paths that may shift beneath them. We offer this addition to his counsel: cultivate the habit of asking, with precision, what these systems actually are and how they work. The better your mental model of the technology, the better your decisions about when to trust it, when to verify, when to delegate and when to insist on human judgment.

As a concrete operating model: work as if these tools are junior staff with amnesia and no accountability. Delegate to them freely — drafting, exploring, prototyping, triaging — but keep humans on the hook for correctness and responsibility. Verify what they produce. Treat their output as a first pass, not a final answer. (A fair caveat: the "amnesia" part of this metaphor is eroding. Context windows now reach millions of tokens, and retrieval-augmented architectures give models functional memory across long projects. The metaphor still holds for accountability — these systems have no stake in outcomes — but for continuity of context, the gap is narrowing fast. Adjust accordingly.) This is how the most effective practitioners already work with them, and it maps directly onto the capability-without-intention distinction: these systems will do exactly what you point them at, with impressive competence, but they have no stake in whether the result is right. You do.


VI. The New Moat: Competing After the Levelling

If the "Great Levelling" makes enterprise-grade technical capability a commodity, it raises a survival question for the individual and the firm: when everyone has access to the same "super-junior staff," what constitutes a competitive advantage?

When the cost of production trends toward zero, the value shifts entirely to the things that cannot be automated or replicated by statistical pattern-matching. We identify four "New Moats" that will define professional success in the post-levelling era.

1. The Human Premium: Accountability and Trust

In a world flooded with AI-generated content and code, provenance becomes a luxury good. A machine can produce a legal brief, but it cannot go to jail for it. It can suggest a medical diagnosis, but it cannot lose its license or feel the weight of a patient's life.

The new moat is verified human accountability. Clients will pay a premium not for the "work" (which is now cheap), but for the "signature" (which remains expensive). Success will belong to those who cultivate a reputation for being the "Adult in the Room" — the person who takes the risk, provides the guarantee, and stands behind the output.

2. High-Context Relationships and "Political" Intelligence

AI is excellent at solving puzzles but poor at navigating "politics" — the complex, unstated web of human incentives, egos, and history that governs every large organization.

  • Tacit Knowledge: Knowing what the CEO actually cares about, even when it contradicts the formal brief.

  • Stakeholder Synthesis: Convincing three different departments with conflicting interests to move in the same direction.

The new moat is the ability to operate in high-ambiguity human environments where the most important data isn't in the database, but in the subtext of a meeting or the "vibe" of a partnership.

3. Curation, Taste, and the "Editor-in-Chief" Mindset

When production is throttled by human effort, "more" is a strategy. When production is infinite and instant, "more" is noise.

As we move from a world of scarcity to a world of glut, the value shifts from the creator to the curator. The new moat is Taste — the ability to look at ten AI-generated variations of a product, a strategy, or a design and know which one will actually resonate with a human audience. We are moving from a world of "Content Creators" to a world of "Content Editors," where the primary skill is the discernment to say "no" to 99% of what the machine produces.

4. Integration and Systems Thinking

The Great Levelling provides everyone with powerful "Lego bricks," but it doesn't provide the manual for the castle.

As AI handles the mechanical tasks (writing the function, drafting the clause), the human role becomes the Architect. The new moat is Systems Thinking: the ability to see how disparate pieces of technology, law, and business logic fit together into a coherent whole. While the AI focuses on the task, the human must focus on the outcome. Those who can bridge the "Wisdom Gap" by understanding how a single automation might break a larger system will be the ones who lead.

Because AI lacks a model of the real world, it is fundamentally incapable of Systems Thinking. It can optimize a single line of code or a specific paragraph, but it cannot foresee the "ripple effects" that a change might have on a complex, interconnected system.

A model can suggest a brilliant tax-optimization strategy (symbolic manipulation), but it cannot intuitively grasp how that strategy might alienate a specific regulator or trigger a sequence of unintended legal consequences (causal reality). The new moat is the ability to bridge this World-Model Gap. Humans must provide the "grounding" — the intuitive understanding of how the digital output will collide with the messy, physical, and political reality of the world. The machine provides the parts; the human provides the Causal Architecture.


VII. Closing

Today's global conversation about AI enjoys no shortage of urgency, but it is largely driven by fear. The greatest fear of all is fear of the unknown, and AI currently sits squarely in that category. Precision is the antidote. Even if what we have today falls short of true artificial intelligence in the fullest sense, these technologies are real. The changes they are bringing to knowledge work are significant and accelerating, and will continue for the foreseeable future. We need to engage early and thoughtfully, in order to collectively shape outcomes rather than merely react to them.

Understand what these systems are: extraordinary tools for cognitive automation, directed by human purpose. Understand also that they have yet to become systems with genuine autonomy, goals and self-direction. Until that capability arrives, the gap between "here" and "yet to arrive" is where sensible preparation and constructive public conversation need to take place.

The Great Levelling framework

Something big is indeed happening. It deserves precision about what it is and what it is not.

TL;DR

  • Shumer is right: something big is happening.
  • It is the most significant expansion of cognitive automation in history — but not yet the emergence of true artificial intelligence.
  • Large language models do not plan, intend or "know where they are going." They generate one token at a time, producing locally plausible continuations that can resemble reasoning without being goal-directed cognition.
  • These systems have no goals, no intention and no self-direction. They are powerful optimisation engines whose direction is set entirely by human choices.
  • Unless and until that changes, the future remains shaped by funding, regulation, infrastructure and institutional decisions — not machine inevitability.
  • The barriers to building software and analysis have collapsed. Enterprise-grade capability is now available to small firms and sole practitioners.
  • When the cost of a general-purpose capability falls, usage tends to expand rather than contract. Computing's unit price has fallen for decades while its total economic value has risen. AI is likely to follow the same pattern.
  • Expect more software, more automation and more embedded computation — not less.
  • Learn these tools. Use them daily. Treat them as highly capable junior staff with limited memory and no accountability.
  • Keep humans on the hook for correctness, judgment and responsibility.
  • Engage seriously — and be precise about what this is: transformative automation, not yet Artificial General Intelligence.
