
Om Shree

NeoCognition Just Raised $40M to Fix the One Thing Every AI Agent Gets Wrong

Every AI agent demo looks impressive until you actually depend on one. That 50% task completion rate you've quietly accepted as "normal"? NeoCognition just called it out directly, and raised $40 million to do something about it.

The Problem It's Solving

The foundational critique that NeoCognition is building on is blunt: current agents, whether Claude Code, OpenClaw, or Perplexity's computer tools, successfully complete tasks as intended only about 50% of the time. That is not a UX problem or a prompt-engineering problem; it's a structural one. Today's agents are stateless generalists. They bring no accumulated knowledge of your environment, your workflows, or your domain's specific constraints to each task. Every time you invoke one, it starts from scratch.

The standard industry response to this has been fine-tuning — custom-engineering an agent for a specific vertical and hoping it holds. That works until the domain shifts, the tooling changes, or you need to deploy the same agent somewhere new. Then you're back to zero.

How NeoCognition Actually Works

NeoCognition was started by Yu Su, Xiang Deng, and Yu Gu, who all worked together in Su's AI agent lab at Ohio State University. Su's team began developing LLM-based agents before the ChatGPT moment, and their research — including Mind2Web and MMMU — is now used by OpenAI, Anthropic, and Google. This is not a product team that pivoted into agents. It's the research behind the agents you're already using, now building something opinionated about what those agents got wrong.

The core thesis is drawn from how humans actually acquire expertise. NeoCognition's agents continuously learn the structure, workflows, and constraints of the environments they operate in, and specialize into domain experts by learning a world model of work. The phrase "world model" is doing significant work here. Rather than applying general reasoning to every task, these agents are designed to build an internal map of a specific micro-environment — its rules, its dependencies, its edge cases — and continuously refine that map through experience.

The Palo Alto startup argues that its agents learn on the job as specialists rather than relying on fixed general training, which is the architectural distinction that matters. Fixed training is a snapshot. A world model grows.
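To make the "snapshot vs. growing map" distinction concrete, here is a deliberately minimal sketch, not NeoCognition's actual system (the company has published no implementation details), of what an environment-specific world model could look like: a structure that accumulates known-good workflows from successes and preconditions from failures, so each new plan reflects everything learned so far. All names here (`WorldModel`, `record_success`, etc.) are hypothetical.

```python
from collections import defaultdict

class WorldModel:
    """Illustrative sketch only: a per-environment map of workflows and
    constraints that grows with experience, unlike a fixed training snapshot."""

    def __init__(self):
        self.workflows = {}                  # workflow name -> known-good ordered steps
        self.constraints = defaultdict(set)  # step -> preconditions learned from failures

    def record_success(self, workflow, steps):
        # A successful run refines the map: keep the latest known-good recipe.
        self.workflows[workflow] = list(steps)

    def record_failure(self, step, violated_precondition):
        # Failures teach edge cases: remember what must hold before this step.
        self.constraints[step].add(violated_precondition)

    def plan(self, workflow):
        # The plan reflects every success and failure observed so far.
        steps = self.workflows.get(workflow, [])
        return [(s, sorted(self.constraints[s])) for s in steps]

wm = WorldModel()
wm.record_success("deploy", ["run_tests", "build", "release"])
wm.record_failure("release", "changelog_updated")
print(wm.plan("deploy"))
# -> [('run_tests', []), ('build', []), ('release', ['changelog_updated'])]
```

The point of the sketch is the asymmetry: a fine-tuned agent's "knowledge" is frozen at training time, while a structure like this changes after every task it runs.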

What Enterprises Are Actually Using It For

NeoCognition's primary target is the enterprise market, and specifically the SaaS layer. The company intends to sell its agent systems primarily to enterprises, including established SaaS companies, which can use them to build agent workers or to enhance existing product offerings. The framing here is interesting: they're not just selling agents to enterprises; they're selling the infrastructure for SaaS companies to make their own products agentic.

The Vista Equity Partners participation is strategic, not just financial. As one of the largest private equity firms in the software space, Vista can provide NeoCognition with direct access to a vast portfolio of companies looking to modernize their products with AI. That's a go-to-market lever, not just a check. You don't close Vista for the cap table optics — you close them because they own the distribution you need.

The deeper implication for enterprises is the safety argument. A deeper understanding of their environments, the company argues, makes its agents more responsible and safer actors in high-stakes settings. An agent that understands why a workflow exists, not just what the workflow is, is less likely to take a technically correct action that's contextually wrong. That's the difference between a tool and a trusted system.
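One way to picture "technically correct but contextually wrong" is a guard that checks an action against the learned purpose of its workflow before executing it. This is an illustrative toy, with a hypothetical `ContextualGuard` class and a made-up `delete_branch` example; nothing here is drawn from NeoCognition's product.

```python
class ContextualGuard:
    """Toy example: block actions that are valid in isolation but violate
    the learned intent of the workflow (the 'why', not just the 'what')."""

    def __init__(self):
        # action -> intent the environment has taught us it serves
        self.intents = {"delete_branch": "cleanup after merge"}

    def allow(self, action, context):
        intent = self.intents.get(action)
        if intent == "cleanup after merge":
            # Deleting a branch is only safe once its work has landed.
            return context.get("merged", False)
        return True  # no learned intent: fall back to a permissive default

guard = ContextualGuard()
print(guard.allow("delete_branch", {"merged": False}))  # False: valid command, wrong context
print(guard.allow("delete_branch", {"merged": True}))   # True
```

A stateless agent would happily run the first call; an agent that has learned why the action exists refuses it.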

Why This Is a Bigger Deal Than It Looks

The investor list deserves more attention than most coverage is giving it. Angel investors and founding advisors include Lip-Bu Tan, CEO of Intel; Ion Stoica, co-founder and executive chairman of Databricks; and leading AI researchers Dawn Song, Ruslan Salakhutdinov, and Luke Zettlemoyer. Song, Salakhutdinov, and Zettlemoyer are foundational researchers in modern deep learning and NLP. When researchers of that caliber put their names on a company, they're endorsing the technical thesis, not just the team.

The timing reflects a broader pattern in AI investment in 2026: capital is increasingly flowing not towards frontier model development — dominated by a small number of well-capitalized labs — but towards the infrastructure and agent layer above it. The model wars are effectively over for now. The next real competition is in what those models can reliably do, and that's an infrastructure and learning problem, not a parameter-count problem.

What NeoCognition is proposing — agents that build structured world models of their operating environments — is also the missing architectural primitive for MCP-based agent pipelines. Right now, most agentic systems using MCP are still stateless: each tool call happens in context, but the agent isn't learning the tool ecosystem it operates in. An agent layer that builds persistent, structured knowledge of its environment and the tools available to it would meaningfully change what's achievable in production agentic workflows.
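The gap described above can be sketched in a few lines: today's MCP loops forget everything between sessions, whereas even a trivial persistence layer changes what the agent knows on its next invocation. This is an assumption-laden toy, not an MCP client and not anything NeoCognition has described; `ToolMemory`, `observe`, and the JSON file layout are all invented for illustration.

```python
import json
from pathlib import Path

class ToolMemory:
    """Hypothetical sketch: persistent, structured knowledge about a tool
    ecosystem that survives across agent sessions, unlike a stateless loop."""

    def __init__(self, path="tool_memory.json"):
        self.path = Path(path)
        self.stats = json.loads(self.path.read_text()) if self.path.exists() else {}

    def observe(self, tool, ok, latency_s):
        # Update the running success rate and mean latency for this tool,
        # then persist so the *next* session starts with this knowledge.
        s = self.stats.setdefault(tool, {"calls": 0, "ok": 0, "latency_s": 0.0})
        s["calls"] += 1
        s["ok"] += int(ok)
        s["latency_s"] += (latency_s - s["latency_s"]) / s["calls"]  # running mean
        self.path.write_text(json.dumps(self.stats))

    def reliability(self, tool):
        s = self.stats.get(tool)
        return s["ok"] / s["calls"] if s and s["calls"] else None

mem = ToolMemory("tool_memory.json")
mem.observe("search_docs", ok=True, latency_s=0.4)
mem.observe("search_docs", ok=False, latency_s=2.1)
print(mem.reliability("search_docs"))  # -> 0.5
```

Even this crude version lets a planner prefer reliable tools or budget retries for flaky ones; a real world model would presumably capture far richer structure (argument schemas, failure modes, dependencies), but the persistence is the architectural point.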

Availability and Access

NeoCognition has just emerged from stealth, so there's no public product available yet. The company currently has about 15 employees, the majority of whom hold PhDs. This is explicitly still a research-to-product transition — the $40M is funding that transition. Enterprise access will likely come through direct partnership channels, given the Vista relationship and the SaaS-first go-to-market. Developers wanting to follow the research can track Su's prior work through his Ohio State lab page.


The 50% reliability ceiling on current agents isn't a model problem — it's a memory and specialization problem. NeoCognition is making a structural bet that the next unlock in agent reliability isn't more parameters; it's agents that actually learn where they're deployed. If they're right, the companies building on today's stateless agent architectures are building on borrowed time.

Follow for more coverage on MCP, agentic AI, and AI infrastructure.
