At some point in 2023 or 2024, in a quiet corner of DeepMind’s London office, a group of researchers watched their model’s line edge above the benchmark on a trading backtest chart. In that moment, the DeepMind hedge fund myth basically wrote itself.
Here’s where the story gets strange: those same models, according to Reuters’ reporting, sometimes beat the market in tests… and then the whole thing was shut down and folded back into more respectable AI work.
TL;DR
- DeepMind ran a secretive internal trading project nicknamed “DeepTick” that, per Reuters, sometimes beat the market in tests — but never became a real hedge fund.
- The project died not because the tech failed, but because running a stealth Renaissance Technologies inside Google is a governance, legal, and reputational nightmare.
- The DeepMind hedge fund rumour went viral because it solves a narrative itch: we want neat stories about genius quants and villainous labs, not boring stories about risk committees.
DeepMind hedge fund claim: what Reuters actually found
Reuters did not uncover a hidden Cayman vehicle where Demis Hassabis personally tried to out‑trade Jim Simons.
What they describe instead is an internal DeepMind trading effort — often referred to as “DeepTick” — that:
- Involved dozens of staff over years
- Explored applying DeepMind‑style AI to market data
- Talked to at least one major asset manager (BlackRock)
- Produced models that sometimes beat the market in tests
- Was then quietly wound down
No regulatory filings, no fund launch, no “Hassabis LP”.
So where does “Demis Hassabis secretly built a hedge fund inside DeepMind” come from?
From us. From the internet’s collective habit of turning “internal project” into “shadow institution”, because the former is boring and the latter sounds like a Netflix pitch.
The real question is more interesting than “did the hedge fund exist?”:
Why didn’t it?
Why DeepTick’s results matter: the tech could beat the market (in tests)
The Reuters‑sourced accounts talk about “AI that at times beat the market,” and teams that were surprised when the project was disbanded. That’s already remarkable.
Most corporate AI prototypes are lucky if they beat Excel.
Beating the market in backtests isn’t the same as printing money in production, but it’s far beyond a toy demo. To get there, DeepMind had to solve a bunch of hard, very adult problems:
- Clean and align multiple noisy data sources
- Pick targets and horizons that aren’t just curve‑fitting
- Build models that can run on realistic latencies and costs
- Survive internal skepticism from colleagues who didn’t join DeepMind to build ad‑tech with Bloomberg feeds
In other words, DeepTick got to the threshold where an independent fund would seriously consider raising capital.
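To make the backtest‑versus‑production gap concrete, here is a toy, entirely invented sketch (nothing to do with DeepMind’s actual models): a naive momentum strategy on a synthetic price path that looks market‑beating with frictionless fills, then loses its edge once a per‑trade cost is charged. The signal, the price path, and the 10‑basis‑point cost are all assumptions for illustration.

```python
# Hypothetical sketch, NOT DeepMind's system: how trading costs can erase
# a backtest edge. All numbers and the toy signal below are invented.

def backtest(prices, cost_per_trade=0.0):
    """Toy long/flat strategy: hold the asset only when the last move was up.
    Returns cumulative return; cost_per_trade is charged on each position change."""
    position = 0      # 0 = flat, 1 = long
    equity = 1.0
    for i in range(1, len(prices) - 1):
        new_position = 1 if prices[i] > prices[i - 1] else 0
        if new_position != position:
            equity *= (1 - cost_per_trade)       # pay costs when we trade
            position = new_position
        if position:
            equity *= prices[i + 1] / prices[i]  # earn the next period's move
    return equity - 1.0

# Deterministic synthetic price path with mild momentum:
# two small up moves, then one small down move, repeated.
prices = [100.0]
for k in range(250):
    drift = 0.002 if k % 3 else -0.001
    prices.append(prices[-1] * (1 + drift))

gross = backtest(prices)                       # frictionless backtest: positive
net = backtest(prices, cost_per_trade=0.001)   # 10 bps per trade: edge gone
print(f"gross: {gross:+.2%}, net: {net:+.2%}")
```

The strategy trades roughly twice per three‑step cycle, so even a tiny per‑trade cost compounds into a drag larger than the signal itself. Real desks face the same arithmetic with slippage, latency, and market impact on top, which is why a clean backtest is the start of the argument, not the end of it.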
Here’s the key distinction: in finance, that threshold is enough to flip a project from “research” to “regulated activity.”
The moment you can plausibly market “we have an AI signal that historically beats the benchmark,” you’re dancing on the edge of securities law, fiduciary obligations, and a lot of angry counterparties if anything goes wrong.
And inside a public company the size of Alphabet, that’s not a fun edge.
Why Google shut the project down: governance, legal and reputational limits
Imagine being Alphabet’s Chief Risk Officer and someone drops this on your desk:
“We have a semi‑secret team in London, led by one of the world’s most famous AI labs. They’ve built models that can sometimes beat the market. They’re talking to BlackRock. They’d like to keep going.”
Even if all you care about is shareholder value, this is a headache.
You’re not deciding whether the tech works. You’re deciding whether it’s governable.
A non‑exhaustive list of ways an internal DeepMind hedge fund could blow back on Google:
- Regulatory reclassification. If you start looking like a bank or hedge fund, regulators may treat you like one. That means capital requirements, disclosure obligations, and scrutiny over conflicts of interest.
- Conflicts with existing businesses. The same company that sells cloud services, ad targeting, and search could quietly be trading on patterns derived from user behavior. Even if you firewall it, the optics are brutal.
- Insider information mud puddles. Even if DeepTick only used public data, any perception that Google is trading with an information edge from its broader data firehose would be toxic — and litigated.
- Reputation risk in an AI‑safety era. Hassabis has spent years cultivating the image of the thoughtful, safety‑minded AI builder. “We used AGI‑adjacent tech to front‑run pension funds” does not play well in Congressional hearings.
So you get the decision DeepMind reportedly took: fold the people and ideas back into “more profound” AI work, and let the trading models quietly die.
This is the pattern that matters. Not “AI can’t beat the market,” but “AI that might beat the market is politically and legally radioactive inside a public tech giant.”
That same logic shows up in other places too — from OpenAI’s cautious dance around AI commercialisation risks to Google’s endless committees on ads fairness.
Inside a big lab, commercial potential above a certain voltage trips the breaker.
How the ‘secret hedge fund’ story spread (and why social posts overreach)
The Reddit thread that kicked all this off has a title like a logline:
“Demis Hassabis secretly built a hedge fund inside DeepMind trying to beat Jim Simons. Google shut it down.”
It’s tidy, villainous, and narratively satisfying.
It also blurs three different layers:
- Verified: DeepMind ran a secretive internal trading project (“DeepTick”) that sometimes beat benchmarks in tests and was shut down.
- Plausible but unproven: People inside may have fantasized about becoming “the next Renaissance.” (Who in quant finance doesn’t?)
- Invented: There was an actual hedge fund run inside DeepMind, with capital and external investors, purposely gunning for Jim Simons, until Google killed it.
Social media flattens these layers into one story because it treats every corporate project as a startup in disguise. If there’s a model and it works, surely there’s a stealth company and a master plan.
But that’s not how big labs operate.
The route from “we have models that backtest well” to “we are now a hedge fund” runs through a maze of committees, sign‑offs, and external approvals that don’t exist at a two‑person crypto shop.
That maze is invisible from Reddit.
From the outside, all you see is:
- Quant talent flowing in (Bridgewater’s former chief scientist joining DeepMind as chief strategy officer, for example).
- A Reuters line about “beating the market.”
- A project quietly shuttered.
The gap gets filled with the cleanest possible story: the geniuses built a hedge fund, it worked, and The Man shut it down.
It’s not that the myth is malicious. It’s just allergic to paperwork.
What this episode reveals about how big AI labs commercialize high‑value work
The DeepTick story is a tiny but sharp lens on something larger: the commercial ceiling on certain kinds of AI inside big labs.
It’s tempting to think the constraint is always technical:
- “If only the models were better, they’d run their own hedge fund.”
- “They must not have really beaten the market.”
But in this case, the more interesting constraint is institutional.
Big labs can comfortably commercialize:
- Productivity tools
- Ads optimizations
- Cloud APIs
- Consumer assistants
They struggle — or outright refuse — to commercialize AI systems that:
- Invite securities or banking regulation
- Create obvious enemies (rival traders, nation‑state agencies, major funds)
- Blur into surveillance or market manipulation
- Could be painted as “AI as weapon” rather than “AI as tool”
That’s why you get a world full of chatbots and code copilots, and not (yet) a world where Google quietly runs one of the world’s most profitable funds.
It also explains why rumours of a DeepMind hedge fund are so persistent. They’re a projection of what seems like the rational endpoint for frontier AI: plug it into the biggest casino there is, and let it print.
The reality is messier. The most dangerous and lucrative AI prototypes get hemmed in by governance and reputational concerns long before they hit their technical limits.
Which, paradoxically, is exactly why the secret hedge fund will always make a better story than the real one.
Because the real story is just a room of researchers, a line on a chart that finally curves up… and then a decision, somewhere much higher up the chain, that says:
No. Not this line. Not here.
Key Takeaways
- There is no verified DeepMind hedge fund — Reuters documents an internal trading project (“DeepTick”) with market‑beating tests, not a launched fund.
- DeepTick’s shutdown is best explained by governance, legal, and reputational risk inside Alphabet, not by purely technical failure.
- Big AI labs have a commercial ceiling: they’re comfortable selling copilots, not comfortable becoming quasi‑banks or hedge funds.
- Social media collapsed “internal trading research” into “secret hedge fund” because paperwork and risk committees don’t fit into viral narratives.
- Future high‑value AI prototypes inside major labs will keep running into the same limit: the more they look like infrastructure for power, the less likely they are to be productized in‑house.
Further Reading
- Google’s top AI executive seeks the profound — over profits and the prosaic (Reuters) — Original investigation detailing DeepMind’s internal trading work and its quiet end.
- Techmeme summary of DeepMind trading work — Curated excerpts from the Reuters piece with key quotes about “DeepTick.”
- Google's Demis Hassabis tasked with turning AI research into profits (CNBC) — Context on the pressure to commercialize DeepMind’s research.
- Bridgewater’s chief scientist Sekhon to join Google’s DeepMind (Reuters via Investing.com) — Example of talent flow from hedge funds into AI labs.
- Reddit thread spreading the hedge fund myth — How the “secret hedge fund” framing propagated online.
In a few years, when someone launches an actual AI‑first fund built by ex‑DeepMind staff, this episode will read differently. Not as proof that the idea failed — but as an early sign that some kinds of AI value simply don’t fit inside public companies at all.
Originally published on novaknown.com