Jack Clark predicts AI that can self-improve by 2028. No lab has demonstrated this; the gap between current AutoML and that vision is vast.
Anthropic co-founder Jack Clark predicts by end of 2028, an AI system could autonomously 'make a better version of yourself' when asked. The claim, shared on X, targets a milestone in recursive self-improvement that no lab has yet demonstrated.
Key facts
- Jack Clark is Anthropic co-founder and former head of policy.
- Prediction: 'more likely than not' by end of 2028.
- System would autonomously 'make a better version of yourself'.
- No lab has demonstrated recursive self-improvement in AI.
- Current AutoML is narrow; Clark's vision requires general agency.
Anthropic co-founder Jack Clark posted a prediction on X: 'by the end of 2028, it is more likely than not that we will have an AI system where you could say to it: "Make a better version of yourself." And it would simply go off and do that completely autonomously.' [According to @kimmonismus] The post links to a broader discussion about AI timelines but provides no technical detail on how such a system would work.
The unique take: Clark's prediction is notable not for its novelty—the idea of recursive self-improvement dates back to I.J. Good's 1965 'intelligence explosion'—but for the specificity of the timeline and the fact that it comes from a co-founder of a leading safety-focused lab. Anthropic has positioned itself as the cautious counterweight to OpenAI's rapid deployment; hearing a 2028 timeline from a senior Anthropic figure suggests internal confidence that safety research will keep pace with capability advances, or that the company believes the risk is manageable enough to permit such a system.
Clark did not specify the compute requirements, dataset sizes, or architectural changes needed to reach this milestone. No leading AI lab has publicly demonstrated an agent that can improve its own weights without human intervention. Current state-of-the-art in automated ML (AutoML) can search architectures or tune hyperparameters but does not produce a generally capable agent that then improves itself recursively. [According to prior arXiv work on AutoML] The gap between today's narrow self-tuning and Clark's vision of a fully autonomous self-improving system is vast and unbridged by any published research.
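To make the distinction concrete, here is a minimal sketch of what today's narrow AutoML amounts to: a fixed, human-written search loop that tunes hyperparameters against a stand-in objective. Everything here is illustrative (the function names, the toy objective, and its optimum are invented for this example, not drawn from any lab's system). Note what the loop cannot do: it optimizes parameters handed to it, but it never rewrites its own search strategy or code, which is the step Clark's prediction assumes.

```python
import random

def train_and_score(lr: float, width: int) -> float:
    """Stand-in for a real training run: a toy objective whose optimum
    (lr near 0.1, width near 64) is chosen arbitrarily for illustration."""
    return -((lr - 0.1) ** 2) - ((width - 64) / 64) ** 2

def random_search(trials: int = 50, seed: int = 0):
    """Narrow 'self-tuning': the loop proposes hyperparameters, but the
    search strategy and the objective are fixed by a human in advance."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {"lr": rng.uniform(1e-4, 1.0), "width": rng.randint(8, 256)}
        score = train_and_score(**cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search()
```

A recursively self-improving system in Clark's sense would have to operate a level above this loop: proposing changes to the search procedure, the objective, or its own weights, and evaluating those changes autonomously. No published system does that.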
The claim sits at the intersection of two unresolved debates: whether recursive self-improvement is even feasible, and if so, at what capability threshold it becomes possible. Clark's 2028 date implies a belief that the necessary capabilities—general reasoning, long-horizon planning, robust self-evaluation—will emerge within three years. That is a significantly shorter timeline than many AI safety researchers have publicly estimated. [According to prior statements by AI alignment researchers]
Key Takeaways
- Jack Clark predicts AI that self-improves by 2028.
- No lab has demonstrated this; gap between current AutoML and vision is vast.
What to watch
Watch for any lab to publish an agent that goes beyond today's hyperparameter tuning and architecture search to modify its own training code or weights autonomously: that would be the first concrete step toward Clark's vision. Also track Anthropic's safety research output: if the company accelerates deployment timelines, it signals confidence in alignment solutions. Clark's next public talk or interview may clarify the reasoning behind the 2028 date.
Originally published on gentic.news
