<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 1p</title>
    <description>The latest articles on DEV Community by 1p (@onepizzateam).</description>
    <link>https://dev.to/onepizzateam</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3887873%2F074b7157-d311-4778-aed4-c5ff870b2ab3.jpg</url>
      <title>DEV Community: 1p</title>
      <link>https://dev.to/onepizzateam</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/onepizzateam"/>
    <language>en</language>
    <item>
      <title>Yann LeCun thinks the whole industry is building the wrong thing, and now he has $1B to prove it</title>
      <dc:creator>1p</dc:creator>
      <pubDate>Wed, 29 Apr 2026 18:28:10 +0000</pubDate>
      <link>https://dev.to/onepizzateam/yann-lecun-thinks-the-whole-industry-is-building-the-wrong-thing-and-now-he-has-1b-to-prove-it-2f9c</link>
      <guid>https://dev.to/onepizzateam/yann-lecun-thinks-the-whole-industry-is-building-the-wrong-thing-and-now-he-has-1b-to-prove-it-2f9c</guid>
      <description>&lt;p&gt;LeCun left Meta, started AMI Labs, and is betting world models beat LLMs for real AI. Here's what that actually means, what the research shows, and why it matters for where AI tooling goes next.&lt;/p&gt;




&lt;p&gt;Quick context if you haven't been following: Yann LeCun is one of the three "godfathers of deep learning" (the Turing Award crew alongside Hinton and Bengio), spent 12 years running Meta's AI research lab FAIR, and has been publicly, loudly skeptical of LLMs for basically the entire time they've been the dominant paradigm. Think of him as the guy in your Discord who keeps saying "yeah but have you actually read the architecture paper" -- except he's usually right, and now he's raised a billion dollars.&lt;/p&gt;

&lt;p&gt;In November 2025 he left Meta. By March 2026, his new lab &lt;strong&gt;AMI Labs&lt;/strong&gt; (pronounced &lt;em&gt;ah-mee&lt;/em&gt;, French for friend, cute) closed a &lt;strong&gt;$1.03B seed round at a $3.5B valuation&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Largest seed round in European startup history. &lt;/p&gt;

&lt;p&gt;Backers include Bezos Expeditions, NVIDIA, Samsung, Toyota Ventures, and Tim Berners-Lee personally. That's not a hype round. That's serious people making a serious bet.&lt;/p&gt;

&lt;p&gt;The bet being: &lt;strong&gt;world models&lt;/strong&gt; are the actual path to useful AI, and LLMs are a dead end for anything involving the physical world.&lt;/p&gt;

&lt;p&gt;Let me break that down for you.&lt;/p&gt;




&lt;h2&gt;The problem with autocomplete at scale&lt;/h2&gt;

&lt;p&gt;LLMs do one thing: predict what token comes next, over and over, trained on enough text that the predictions become eerily good. That's genuinely impressive engineering. But there's a structural ceiling.&lt;/p&gt;
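&lt;p&gt;To make "autocomplete at scale" concrete, here's a toy sketch (mine, plain Python, with a made-up scoring function standing in for a real model's forward pass): generation is just "score every possible next token, append the best one, ask again."&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Toy sketch of autoregressive generation. `score_next_token` is a stand-in
# for a real LLM forward pass -- the only question the model ever answers is
# "what comes next?", and everything else is that question asked in a loop.
import random

def score_next_token(tokens):
    # Placeholder: a real model returns a score for every token in its
    # vocabulary, computed from everything generated so far.
    vocab = ["the", "ball", "falls", "rises", "."]
    return {tok: random.random() for tok in vocab}

def generate(prompt_tokens, steps=5):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        scores = score_next_token(tokens)
        # Greedy decoding: take the single highest-scoring token and repeat.
        tokens.append(max(scores, key=scores.get))
    return tokens

print(generate(["if", "you", "drop", "a", "ball"]))
&lt;/code&gt;&lt;/pre&gt;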

&lt;p&gt;Here's a concrete way to see it. If you ask GPT-anything to help you write a Rust CLI tool, it does pretty well. Ask it to debug a memory layout issue where the problem only shows up under a specific CPU cache behavior and it starts hallucinating plausible-sounding nonsense. Not because it's dumb, but because it never &lt;em&gt;learned&lt;/em&gt; the underlying model of how memory and CPUs actually interact. It learned the language people use to &lt;em&gt;talk about&lt;/em&gt; those things. Different thing.&lt;/p&gt;

&lt;p&gt;LeCun's framing: LLMs are trained on "the dried crust of human knowledge", meaning text written after the thinking was done. They don't have access to the reasoning process, the failed experiments, the physical intuition that produced that text. They get the output, not the computation.&lt;/p&gt;

&lt;p&gt;The symptoms we all know: hallucinations, no real planning, zero common sense about physical cause and effect. A model that can write a paper about gravity can't predict that a ball will fall if you drop it, not from first principles anyway.&lt;/p&gt;




&lt;h2&gt;What's a world model, actually&lt;/h2&gt;

&lt;p&gt;The term gets thrown around loosely so let me be precise about what LeCun means.&lt;/p&gt;

&lt;p&gt;A world model is an internal simulation an agent builds of how its environment behaves, not just pattern-matching on surface features, but learning the &lt;em&gt;rules&lt;/em&gt; that generate those patterns. Babies do this before they can talk. You've got a world model running right now: you know a coffee cup will fall if it's too close to the table edge, you know roughly how far you can lean a chair before it tips, you know that if someone looks over your shoulder they can read your screen. None of that came from reading text.&lt;/p&gt;

&lt;p&gt;The goal is AI that builds that same kind of model from observation — watching video, interacting with environments — and can then reason forward from it. "If I do X, Y will probably happen, and that means Z becomes possible."&lt;/p&gt;
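&lt;p&gt;A toy way to picture it (my sketch, not from any paper): a world model is a function from state plus action to a predicted next state. The rule below is hand-written; the whole research bet is that this mapping can be learned from raw video instead of being coded up by a human.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Toy world model: rules that generate observations, not text patterns.
# The "physics" here is one hand-written rule about the coffee cup example.

def world_model(state, action):
    """Predict the next state after nudging the cup `action` centimetres."""
    cup_x = state["cup_x"] + action
    # Past the table edge, gravity takes over and the cup ends up on the floor.
    past_edge = max(0.0, cup_x - state["table_edge"])
    on_floor = past_edge != 0.0
    return {"cup_x": cup_x, "table_edge": state["table_edge"], "on_floor": on_floor}

state = {"cup_x": 40.0, "table_edge": 50.0, "on_floor": False}

# Reason forward: "if I do X, Y will probably happen."
print(world_model(state, 5.0))    # still on the table
print(world_model(state, 15.0))   # predicted to end up on the floor
&lt;/code&gt;&lt;/pre&gt;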

&lt;p&gt;This is a pretty different problem from next-token prediction.&lt;/p&gt;




&lt;h2&gt;The architecture: JEPA&lt;/h2&gt;

&lt;p&gt;LeCun's technical answer is called &lt;strong&gt;JEPA&lt;/strong&gt; (Joint Embedding Predictive Architecture), which he first proposed in a 2022 paper while still at Meta.&lt;/p&gt;

&lt;p&gt;The core idea is this: instead of predicting the raw pixels of what a future video frame will look like (which is nearly impossible, too much irrelevant detail), JEPA learns an &lt;em&gt;abstract representation&lt;/em&gt; of what's happening and makes predictions in that space.&lt;/p&gt;

&lt;p&gt;Imagine watching someone reach toward a coffee mug. You don't mentally render every photon bounce in 4K. You just know: "they're picking that up." JEPA learns that level of abstraction, ignoring unpredictable low-level noise, keeping the meaningful structure.&lt;/p&gt;

&lt;p&gt;In more technical terms, it operates in &lt;em&gt;latent space&lt;/em&gt; rather than &lt;em&gt;pixel space&lt;/em&gt; or &lt;em&gt;token space&lt;/em&gt;. You're predicting compressed representations of reality, not reality itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And it's not generative.&lt;/strong&gt; It's not trying to generate the next frame. It's learning the underlying dynamics, more like a physics engine than a video renderer.&lt;/p&gt;
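&lt;p&gt;Here's roughly what that looks like in code. This is a minimal PyTorch sketch of the JEPA idea with made-up toy encoders; it's not Meta's implementation, and it skips the masking and anti-collapse machinery the real papers spend most of their effort on. The part to notice is where the loss lives: in embedding space, with no decoder anywhere.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal JEPA-style sketch (toy encoders, assumes PyTorch is installed).
import torch
import torch.nn as nn

latent_dim = 256

# Encode the visible context and the hidden target separately.
context_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))
predictor = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                          nn.Linear(latent_dim, latent_dim))

context_frames = torch.randn(8, 3, 32, 32)   # what the model gets to see
target_frames = torch.randn(8, 3, 32, 32)    # what it has to anticipate

with torch.no_grad():
    # The target encoder is typically a slow-moving copy and gets no gradients.
    target_z = target_encoder(target_frames)

# Predict the target's representation from the context's representation.
predicted_z = predictor(context_encoder(context_frames))

# The loss lives entirely in latent space: match embeddings, not pixels.
loss = nn.functional.mse_loss(predicted_z, target_z)
loss.backward()
print(loss.item())
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Compare that to a generative setup, where the loss would be computed against raw pixels or tokens. Dropping the reconstruction step is the whole trick: the model is free to ignore detail it could never predict anyway.&lt;/p&gt;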




&lt;h2&gt;Results that are already out&lt;/h2&gt;

&lt;p&gt;This isn't pure theory waiting on a 10-year timeline. There's published research.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;V-JEPA&lt;/strong&gt; (the video version, released by Meta's FAIR team) was trained on internet video and showed solid performance on motion understanding tasks. Then came &lt;strong&gt;V-JEPA 2&lt;/strong&gt; in June 2025, a 1.2B parameter model trained on over a million hours of video. The wild part: it was fine-tuned on just ~62 hours of real robot interaction data and could do zero-shot robotic planning, outperforming Nvidia's Cosmos by up to 30x in speed.&lt;/p&gt;

&lt;p&gt;Zero-shot. Meaning the robot had never seen those specific objects or environments during training. It generalized from its world model.&lt;/p&gt;

&lt;p&gt;On the Something-Something v2 benchmark for motion understanding it hit 77.3% top-1 accuracy, and on Epic-Kitchens-100 for human action anticipation it reached 39.7 recall-at-5, beating previous task-specific models. These are hard benchmarks. Models purpose-built for them still got beaten by a general world model.&lt;/p&gt;

&lt;p&gt;Then &lt;strong&gt;VL-JEPA&lt;/strong&gt; (vision-language, late 2025), with just 1.6B parameters, matched or exceeded larger generative VLMs like InstructBLIP and QwenVL on benchmarks like GQA and POPE, using 50% fewer trainable parameters.&lt;/p&gt;

&lt;p&gt;Half the parameters. Same or better results. That's not an incremental improvement. That's a signal the architecture is doing something smarter, not just brute-forcing scale.&lt;/p&gt;




&lt;h2&gt;What happened when LeCun left Meta&lt;/h2&gt;

&lt;p&gt;The split from Meta is interesting because it wasn't dramatic: no blowup, no public drama. LeCun told MIT Tech Review he "kind of hated being a director" and that he disagreed with some of Zuckerberg's calls (letting the robotics group go at FAIR was his specific example). Meta doubled down on LLMs and scaling Llama. LeCun thought that was the wrong mountain.&lt;/p&gt;

&lt;p&gt;So he left and started AMI in Paris. The name is also the abbreviation for Advanced Machine Intelligence, the exact research program he was running at FAIR. He's just continuing it without the corporate overhead. :)&lt;/p&gt;

&lt;p&gt;The funding round brought in some interesting names beyond the usual VC suspects: co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, with individuals including Tim and Rosemary Berners-Lee, Jim Breyer, Mark Cuban, and Eric Schmidt. Also NVIDIA and Samsung on the strategic side. These are people who understand what a long-bet fundamental research play looks like.&lt;/p&gt;

&lt;p&gt;The first disclosed partner is Nabla, a healthcare AI company, and the choice is deliberate: hallucinations in medical AI are a genuine patient safety problem, and world models are being explored as the fix.&lt;/p&gt;




&lt;h2&gt;How the community actually reacted&lt;/h2&gt;

&lt;p&gt;Honestly, split. And that's kind of healthy.&lt;/p&gt;

&lt;p&gt;On the skeptical side: Elon Musk posted that LeCun "thinks if he can't do it, no one can." Figure's Brett Adcock told him to "get his hands dirty" (Figure makes humanoid robots using end-to-end learned approaches LeCun thinks are fundamentally limited). Some Hacker News comments were blunt, one called the whole wave "science experiments rewarded with VC money."&lt;/p&gt;

&lt;p&gt;LeCun's reply to Musk was basically: "I know I can do it and I know how to do it. Just not with the techniques everyone is currently betting on."&lt;/p&gt;

&lt;p&gt;On the believer side: Goldman Sachs published a report calling the world model "the missing link" in AI, arguing that solving it represents the next decisive leap in artificial intelligence. Fei-Fei Li launched World Labs around spatial intelligence (closely related). DeepMind's Demis Hassabis has said he thinks language is limited for robotics and is working on world models through Genie and SIMA. Even if the labs won't say "LeCun was right," they're quietly working on the same problem.&lt;/p&gt;




&lt;h2&gt;Why this is relevant if you build devtools or systems software&lt;/h2&gt;

&lt;p&gt;OK so here's where I'd normally make the leap to "this changes everything for developers" — but let me be more specific than that.&lt;/p&gt;

&lt;p&gt;If you're building CLI tools, systems software, Rust stuff, anything in the devtools space: the near-term impact of world models is mostly in what &lt;em&gt;AI assistants for developers&lt;/em&gt; can eventually become.&lt;/p&gt;

&lt;p&gt;Right now, an AI coding assistant is essentially autocomplete plus a very large lookup table. It works surprisingly well because a lot of coding is pattern-matching. But the failure modes are specific: it doesn't model your &lt;em&gt;system&lt;/em&gt; — your runtime, your memory layout, your dependency graph behavior under load. It models the syntax of talking about those things.&lt;/p&gt;

&lt;p&gt;A world model-based assistant could potentially build an actual simulation of your codebase — understand that this function causes that behavior, that this allocation pattern leads to this cache behavior, that this interface contract breaks under these conditions. Not by having read a million Stack Overflow answers about it. By actually modeling the system.&lt;/p&gt;

&lt;p&gt;That's still a few years out from AMI. But it's the direction.&lt;/p&gt;

&lt;p&gt;More concretely right now: the robotics and industrial automation track is moving fast. AMI Labs is targeting healthcare, robotics, wearables, and industrial automation as its first commercial applications. World models for physical systems — factory automation, autonomous vehicles, drones — are where the early deployment is happening. V-JEPA 2 doing zero-shot robot planning is the proof of concept.&lt;/p&gt;
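&lt;p&gt;If "zero-shot robot planning with a world model" sounds abstract, here's a simplified sketch (mine, with a stand-in linear dynamics function instead of a trained network) of the general recipe: sample candidate action sequences, roll each one forward through the model in latent space, and execute the sequence whose predicted end state lands closest to the goal. V-JEPA 2's robot control works in this spirit, just with a learned encoder and a smarter optimizer than random shooting.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Random-shooting planner on top of a (stand-in) latent world model.
# Assumes PyTorch; the dynamics below are a toy linear map, not a trained model.
import torch

state_dim, action_dim, horizon, n_candidates = 8, 2, 5, 256

A = torch.randn(state_dim, state_dim) * 0.1    # stand-in latent dynamics
B = torch.randn(action_dim, state_dim) * 0.1   # stand-in action effect

def dynamics(z, a):
    # Predicted next latent state, given the current latent state and an action.
    return z + z @ A + a @ B

current_z = torch.randn(state_dim)   # encoding of the current observation
goal_z = torch.randn(state_dim)      # encoding of a goal image

# Simulate many candidate action sequences entirely inside the world model.
candidates = torch.randn(n_candidates, horizon, action_dim)
z = current_z.expand(n_candidates, state_dim)
for t in range(horizon):
    z = dynamics(z, candidates[:, t])

# Keep the sequence whose imagined end state is closest to the goal.
costs = (z - goal_z).pow(2).sum(dim=-1)
best = candidates[costs.argmin()]
print("first action to execute:", best[0])
&lt;/code&gt;&lt;/pre&gt;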




&lt;h2&gt;One honest caveat&lt;/h2&gt;

&lt;p&gt;AMI's CEO was direct about this: it's "not your typical applied AI startup that can release a product in three months." This is long-horizon fundamental research. Think years, not quarters.&lt;/p&gt;

&lt;p&gt;LeCun himself said it plainly: we're going to get AI systems with human-level intelligence, but not built on LLMs, and "not next year or two years from now." There are conceptual breakthroughs still needed.&lt;/p&gt;

&lt;p&gt;So this isn't "LLMs are dead, pivot your stack." LLMs are the best general-purpose AI tool available today. But the research direction is clearly shifting, and the architecture questions being asked now will shape what AI-powered devtools look like in 5 years.&lt;/p&gt;

&lt;p&gt;It's worth understanding what JEPA is before it's everywhere. Kind of like understanding attention mechanisms before transformers became unavoidable.&lt;/p&gt;




&lt;h2&gt;Where to go deeper&lt;/h2&gt;

&lt;p&gt;If you want to actually read the work rather than just follow the funding drama:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LeCun's 2022 position paper: &lt;em&gt;"A Path Towards Autonomous Machine Intelligence"&lt;/em&gt; (arXiv) — this is the foundational thing&lt;/li&gt;
&lt;li&gt;V-JEPA 2 paper (arXiv:2506.09985) — concrete results on physical reasoning&lt;/li&gt;
&lt;li&gt;VL-JEPA paper (arXiv:2512.10942) — the vision-language results&lt;/li&gt;
&lt;li&gt;AMI Labs site: amilabs.xyz — pretty sparse still but confirms the research direction&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Are you watching the world model space, or does it feel too far out to care about right now? Curious what people building real systems software think about where AI tooling is headed — drop a comment, always looking to explore new perspectives.&lt;/p&gt;

&lt;p&gt;And if you're already playing with JEPA or any of the open-source research outputs, I'd genuinely love to know what you've built with it.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
