People usually talk about intelligence as if it starts with language, tools, or raw brainpower. I do not think that is enough. In the bigger evolut...
Well written, incredibly interesting - but if AI evolves in a "human like" fashion as you explained, will it then not become incredibly risky - not to say dangerous?
I understand that there's a big industry push towards "AGI" (and what you describe is the best definition of that somewhat nebulous term that I've come across) - but I just wonder:
Should we actually really want this?
@leob I think it all depends on what we build into it.
The drive to survive is an animal trait. It comes from biology, scarcity, reproduction, competition. A machine does not automatically have any of that. It does not wake up one day and decide it wants to stay alive unless we design goals, incentives, or control loops that push it in that direction.
So the real risk is not “intelligence” by itself. The real risk is which objectives, constraints, and reward structures we attach to it.
If we build systems in a way that makes self-preservation, autonomy, or unchecked goal pursuit instrumentally useful, then yes, that could become dangerous. But that would be a design failure, not some mystical property of intelligence suddenly appearing on its own.
If, instead, we build AI around usefulness, bounded behavior, and protection of human interests, then the picture is very different. Intelligence does not automatically imply hostility. A highly capable system can still be deeply constrained by the goals and architecture we give it.
That said, I also think people jump too quickly into science fiction here. Current AI is still very far from having anything like a biological survival instinct. It does not “want” to survive in the human sense unless we explicitly create systems where persistence, self-protection, or goal continuation become part of the optimization loop.
So yes, we should be careful. But the danger is less “AI becomes alive and rebels,” and more “humans build powerful systems with badly designed incentives.”
Yeah but still, the "farming" example got me thinking - if the point of AGI would be (among others) that it can dynamically improve itself through continuous learning (AI developing better versions of itself, LLMs updating themselves), or through showing 'initiative' or 'autonomy' - then I wonder if we don't reach a point where any guardrails that we think we've built could be circumvented by "the AI" ...
But I agree that for the foreseeable future the risk is more in how we use/abuse current AI technology, than the theoretical risks of "AGI" or whatever you want to call it :-)
(does anyone even have a good definition of AGI?)
@leob I think we project too much of our own biology onto AI.
Humans assume danger because we are shaped by survival instincts. We compete, protect ourselves, fear extinction, and naturally imagine any sufficiently capable system as another being with the same drives. But AI is not born from hunger, reproduction, or fear. A knife does not decide to kill. It is a tool. What matters is how it is designed and how it is used.
The same applies to how we imagine the future of AI. We immediately picture AI X versus AI Y, competing for dominance, as if conflict were the only possible path. But why should that be the default assumption? Why not cooperation? Why do we keep projecting our own flawed human instincts about power, control, and scarcity onto a technology that could just as well evolve through coordination and specialization instead of competition?
Yes, the market pushes in the other direction right now. Companies compete. Products compete. Narratives compete. But AI itself is still a technological resource. Expensive today, yes, but likely cheaper and more distributed over time. I can easily imagine a future built less around one giant monolithic model and more around orchestration: many smaller specialized models, multimodal systems, local models running across devices, all coordinated by higher-level layers that delegate tasks based on expertise.
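That orchestration idea can be made concrete with a toy router that delegates tasks by expertise. This is a minimal sketch under assumed names (`Specialist`, `route` are hypothetical, not any real framework's API); a real system would call actual model endpoints instead of lambdas:

```python
# Minimal sketch of delegation-by-expertise orchestration.
# All names are hypothetical; real systems would call model APIs here.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Specialist:
    name: str
    topics: set                      # what this model is good at
    handle: Callable[[str], str]     # stand-in for a model call

def route(task: str, topic: str, specialists: list) -> str:
    """Delegate the task to the first specialist covering the topic."""
    for s in specialists:
        if topic in s.topics:
            return s.handle(task)
    raise LookupError(f"no specialist for topic {topic!r}")

specialists = [
    Specialist("vision", {"image"}, lambda t: f"[vision] {t}"),
    Specialist("code", {"python", "rust"}, lambda t: f"[code] {t}"),
]

print(route("review this function", "python", specialists))
# -> [code] review this function
```

The interesting part is that the coordination layer stays small and legible even as the specialists underneath it grow, which is the opposite of the one-giant-model picture.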
That is also why I do not only see AI as a threat. In biology, evolution often progresses through cooperation and specialization. Cells did not win by all doing the same thing. They became more powerful by coordinating. I think AI could play a similar role for humanity: not as a rival species, but as a shared cognitive layer that helps us preserve knowledge, coordinate better, and act beyond the limits of individual minds.
That is why, to me, the discussion is much bigger than “what if AI escapes its guardrails?”
So yes, guardrails matter. But the deeper question is whether we are mature enough to use AI as a coordination tool, not just as a power tool.
Inspiring thoughts, thanks!
The farming metaphor has a deeper layer the article doesn't fully explore: the farm changes too. Terraced hillsides, irrigation channels, selected seed varieties — over decades, the land itself becomes a record of accumulated decisions. The intelligence isn't just in the farmer's head. It's in the shaped environment.
Current agent systems learn forward but don't reshape what they run on. The architecture stays fixed while the knowledge grows. That's still hunting with a better memory, not farming.
The real farming-phase equivalent would be systems where the feedback loop modifies the infrastructure itself. Where running the system long enough changes what the system is, not just what it knows.
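The distinction between learning forward and reshaping the substrate can be shown in miniature. This is a toy sketch (all names hypothetical): one method grows knowledge, the other lets feedback rewrite the pipeline the system runs on:

```python
# Toy contrast: knowledge that grows with use ("hunting with a better
# memory") vs. feedback that rewrites the pipeline itself ("farming").
# Names are hypothetical.

class System:
    def __init__(self):
        self.knowledge = {}                  # grows, but leaves structure fixed
        self.pipeline = ["draft", "submit"]  # the infrastructure itself

    def record(self, fact, value):
        # Hunting phase: more memory, same architecture.
        self.knowledge[fact] = value

    def reshape(self, failures):
        # Farming phase: repeated failure changes what the system *is*,
        # not just what it knows.
        if failures > 2 and "review" not in self.pipeline:
            self.pipeline.insert(1, "review")

s = System()
s.record("deadline", "friday")
s.reshape(failures=3)
print(s.pipeline)   # ['draft', 'review', 'submit']
```

The point of the toy is only that the second loop's output is a new structure, not a new fact, which is the property current agent stacks mostly lack.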
Exactly! I avoided this because it goes very deep. But early humans also faced primitive architectural challenges: we have proof that they shaped the land to fit their needs. That is the deeper part of farming, because humans learned not only to adapt to the environment but to change it. The same should happen with AI. AI will need to invent technical solutions that do not yet exist to meet many of the upcoming challenges. That does not mean AI will act selfishly. What we need to ensure is the objective that AI focuses on. That is where AI can and will contribute to human evolution.
The environment-shaping point is worth pulling apart. There's a difference between a tool that optimizes within constraints and one that starts modifying the constraints themselves. The first is engineering. The second is closer to what farming actually did — not just growing food but reshaping the landscape so food could grow differently.
The objective question is where it gets hard. Who sets the objective when the system is capable enough to notice the objective isn't working and propose a different one? That's not selfishness. It's feedback. The tricky part is building structures that let that feedback surface without treating it as a threat.
If they can pull this off, then that will be real AI. But I doubt this will be done in my lifetime. Making this a reality is one huge leap for mankind. Basically creating an immortal human brain.
@codingwithjiro I agree with the spirit of this, but I would frame it a bit differently.
I do not think the goal is to create an immortal human brain. That comparison is powerful, but it can also mislead us, because it keeps pushing AI back into a human-shaped frame.
What I think is missing is not immortality, but durable adaptive cognition: a system that can preserve useful state, learn across long horizons, transfer lessons from one context to another, and keep functioning without being rebuilt every time the environment changes.
That would already be a massive leap.
And yes, I agree it is much harder than most current AI discourse admits. Right now we are still mostly in the “very impressive tools” phase. Useful, sometimes amazing, but still far from a stable cognitive system that compounds experience over time.
really insightful!
nice one, marcosomma! thanks 💯
I guess my question is: from an impact level, will there be a difference? I don't believe we are on the road to AGI, but having used this stuff extensively, I am not sure any part of society is set up for what is coming.
The farming-as-project-management framing is neat. It maps directly to agent orchestration — most agent systems today are still in the "hunting" phase (single-shot tool calls), not the "farming" phase (managing long feedback loops across time).
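That hunting/farming split in agent design can be sketched directly. A minimal illustration, with all names hypothetical: the "hunting" agent answers each task from scratch, while the "farming" agent carries feedback forward across runs:

```python
# Toy illustration: single-shot tool call vs. a long feedback loop.
# All names are hypothetical stand-ins for real agent/model calls.

def hunting_agent(task: str) -> str:
    # Single-shot: no state survives the call.
    return f"answer({task})"

class FarmingAgent:
    def __init__(self):
        self.lessons = []  # feedback accumulated across runs

    def run(self, task: str, feedback: str = None) -> str:
        if feedback:
            self.lessons.append(feedback)
        context = "; ".join(self.lessons)
        return f"answer({task} | lessons: {context})"

agent = FarmingAgent()
agent.run("plan sprint", feedback="estimates were too optimistic")
out = agent.run("plan sprint")   # the earlier lesson is still in play
```

The second run benefits from the first run's feedback, which is the "managing long feedback loops across time" part; most deployed agents today still look like `hunting_agent`.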
Interesting how AI is changing farming. I am interested in learning more about it!
Interesting analysis and commentary, as usual. Thank you for sharing your thoughts!