If you tried to model the AI industry feud between OpenAI and Anthropic using just “safety culture” and “technical capability,” you’d get weird residuals. The Greg Brockman donation — $25M to a pro‑Trump super PAC — is the missing variable that makes the data fit.
TL;DR
- The Greg Brockman donation is documented and huge; it turns “AI safety” from a lab culture debate into a paid political project.
- Dario Amodei’s memo is less a moral outburst than a positioning move that says: we’re losing not on safety, but on not buying influence.
- In modern AI, whitepapers are branding; executive donations are how you actually buy access, contracts, and narrative framing.
Greg Brockman donation: what we can verify
Here’s the compressed setup.
FEC‑tracked filings, reported by Bloomberg and others, show the Greg Brockman donation: Brockman and his wife put about $25 million into MAGA Inc., a leading pro‑Trump super PAC. Around the same time, OpenAI announced a Pentagon deal; Anthropic, which has fought a Pentagon “supply‑chain risk” designation, did not. In an internal memo later leaked and quoted by The Information, TechCrunch, and The Guardian, Anthropic CEO Dario Amodei blasted OpenAI’s Pentagon messaging as “straight up lies” and “safety theater,” and argued that Trump’s team favored OpenAI because “we haven’t donated to Trump (while OpenAI/Greg have donated a lot).”
What we don’t have, from any major outlet: a verified quote of Amodei likening Altman or Musk specifically to Hitler or Stalin. That dictator comparison lives in social posts and Reddit threads, not in the reported memo excerpts.
So: the $25M check is real and documented. The “Hitler/Stalin” meme is not.
That alone should tell you something about which parts of the story we’re incentivized to take seriously.
Amodei’s memo: not just drama, but a positioning play
If you only read the headlines, the Amodei memo looks like tech beef: “safety guy calls rival evil and mendacious.”
Read the leaked lines the way an engineer reads a competitive analysis, and they look different.
What he actually does in the memo (per The Information/TechCrunch/Guardian summaries):
- Calls OpenAI’s Pentagon messaging “mendacious” and “straight up lies.”
- Calls the deal “safety theater” — i.e., the safeguards are for show.
- Claims the “real reasons” the Trump administration and DoD snub Anthropic are:
  - Anthropic didn’t donate to Trump, while “OpenAI/Greg have donated a lot.”
  - Anthropic didn’t give “dictator‑style praise” to Trump, while Sam Altman did.
Strip the adjectives and you get a clean structural claim:
We lost because we didn’t pay and flatter, not because our safety work is worse.
That’s not just a moral complaint. That’s a market analysis.
It reframes the whole fight: AI safety doesn’t just compete in research space; it competes in lobbying space. If you’re building a “safety‑first” shop and you ignore that second axis, you get out‑maneuvered by someone willing to run both.
Amodei accidentally writes the spec for how to beat Anthropic with government buyers:
- Match or exceed their technical safety story.
- Outspend them on donations.
- Out‑praise on optics.
He’s not just mad. He’s describing the game board.
Why donations matter more than whitepapers in AI power struggles
If you were modelling “who wins Pentagon AI contracts” like a ranking algorithm, you’d probably start with these features:
- Model capability (benchmarks, evals).
- Safety artifacts (red‑teaming, reports, oversight boards).
- Compliance (clearances, export controls, secure infra).
The Greg Brockman donation forces you to add a fourth feature: political capital spent directly on the decision‑maker.
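To make that concrete, here’s a toy scoring sketch. Every weight and feature value below is invented for illustration (none of it comes from any filing or benchmark); the point is only the mechanism: with the three whitepaper features, the safety‑first lab wins, and adding political capital flips the ranking.

```python
# Toy contract-scoring model. Every weight and feature value here is
# invented for illustration -- none of it comes from any filing or benchmark.

def score(vendor: dict, weights: dict) -> float:
    return sum(w * vendor.get(feature, 0.0) for feature, w in weights.items())

safety_lab = {"capability": 0.85, "safety": 0.95, "compliance": 0.80, "political_capital": 0.05}
access_lab = {"capability": 0.90, "safety": 0.70, "compliance": 0.80, "political_capital": 0.95}

# The three features the whitepapers talk about:
paper_model = {"capability": 0.4, "safety": 0.4, "compliance": 0.2}
# The same model with the fourth feature the donation forces you to add:
real_model = {"capability": 0.3, "safety": 0.2, "compliance": 0.2, "political_capital": 0.3}

for label, weights in [("whitepaper model", paper_model), ("with political capital", real_model)]:
    ranked = sorted(
        [("safety_lab", score(safety_lab, weights)), ("access_lab", score(access_lab, weights))],
        key=lambda pair: pair[1], reverse=True,
    )
    print(f"{label}: {ranked}")
```

Same vendors, same safety scores; the ranking flips on the one feature nobody puts in the whitepaper.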
Once you add that, a bunch of behavior starts to make sense:
- Why OpenAI can pitch itself as both “non‑profitish safety lab” and “preferred provider to a conservative administration” without exploding.
- Why Anthropic produces dense safety research and alignment docs, sues DoD over risk designations, and still finds doors closed.
- Why the “AI safety” conversation sounds like moral philosophy on podcasts and like procurement criteria in DC.
Think of it like ranking cloud vendors in 2012.
You could read every whitepaper on availability zones and S3 durability. Or you could just look at who bought steak dinners for CIOs and funded the right think‑tank reports about “public‑private cloud innovation.”
The whitepapers shape what bureaucrats say they want. The donations shape who they’re allowed to buy from.
In that world, an executive dropping $25M into the ruling party’s super PAC isn’t “personal politics.” It’s a capex decision on a go‑to‑market channel: access to regulation drafts, early looks at procurement pipelines, and the ability to define what “responsible” means in RFP language.
Whitepapers are broadcast. Donations are write access.
How money rewires AI safety, contracts, and trust
Here’s the tradeoff almost nobody in the AI safety discourse likes to say out loud:
- If you don’t buy influence, you keep moral high ground and lose many high‑leverage levers (procurement, export control detail, liability carve‑outs).
- If you do buy influence, you get a seat at the table — and your “safety” commitments become bargaining chips instead of constraints.
OpenAI is explicitly choosing door #2.
You can see the architecture:
- Narrative layer
  - Talk about existential risk, responsible AI, and careful deployment.
  - Publish safety‑branded blog posts & technical reports.
- Influence layer
  - Massive executive donations.
  - Cultivate personal ties to the administration (“dictator‑style praise” in Amodei’s phrasing).
- Contract layer
  - Pentagon / federal deals framed as “safety‑conscious” because they use the vendor with the loudest safety narrative and strongest political ties.
Anthropic is trying to operate mostly on the narrative layer (papers, its “constitutional AI” positioning), resorting to legal action when it gets punished at the contract layer. The memo is Amodei finally yelling about the influence layer: we lost the mid‑layer game.
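If it helps to see the shape of it, here’s that layer model as plain data. The coverage flags paraphrase the memo’s framing as summarized above; the structure and labels are my sketch, not a quote from anyone.

```python
# The three-layer stack as plain data. Coverage flags paraphrase the memo's
# framing as summarized above; the structure is my sketch, not a quote.
STACK = [
    ("narrative", ["safety papers", "alignment docs", "branded blog posts"]),
    ("influence", ["super PAC donations", "personal ties to the administration"]),
    ("contract",  ["Pentagon and federal deals"]),
]

COVERS = {
    "openai":    {"narrative", "influence", "contract"},
    "anthropic": {"narrative"},  # plus lawsuits when punished at the contract layer
}

for layer, artifacts in STACK:
    for lab, layers in COVERS.items():
        mark = "x" if layer in layers else " "
        print(f"[{mark}] {lab:10} {layer:10} {', '.join(artifacts)}")
```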
From the public’s point of view, the danger isn’t just “OpenAI has more power.” It’s that the definition of AI safety that wins is the one most compatible with a donor’s political goals.
If your biggest check goes to a pro‑Trump PAC, “safety” will subtly evolve to:
- Accept the surveillance and enforcement tools that administration prefers.
- Worry much less about the downstream harms those tools create for anyone outside the administration’s favored base.
- Emphasize threats (e.g., foreign adversaries, domestic extremism) that justify more AI procurement.
It’s not censorship. It’s feature selection.
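That metaphor is literal in ML terms: hold the selection routine fixed, swap the objective, and you get a different “safety” agenda. A toy illustration, with every relevance score invented:

```python
# "Feature selection," literally: the same top-k routine, two objectives.
# All relevance scores are invented to illustrate the mechanism.

def select_safety_agenda(relevance: dict, k: int = 2) -> list:
    return sorted(relevance, key=relevance.get, reverse=True)[:k]

# How much each candidate "safety priority" is worth under each objective:
public_interest = {"surveillance harms": 0.9, "civil liberties": 0.8,
                   "foreign adversaries": 0.5, "more AI procurement": 0.1}
donor_aligned   = {"surveillance harms": 0.2, "civil liberties": 0.1,
                   "foreign adversaries": 0.9, "more AI procurement": 0.8}

print("public objective ->", select_safety_agenda(public_interest))
print("donor objective  ->", select_safety_agenda(donor_aligned))
# Same code path, different objective, different definition of "safety".
```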
What this feud really says about the AI game
So did Greg Brockman really cut a $25M check? Yes, and it’s in FEC‑backed reporting.
Did Dario Amodei literally compare Altman and Musk to Hitler and Stalin? There’s no verified evidence of that in major outlets; treat it as internet fan‑fic until someone surfaces the primary text.
The part that actually matters is more boring and more corrosive:
- An AI lab founder looked at a path to AGI and thought “we can sell this to nuclear powers.”
- The same lab’s president paid eight‑figure money into a presidential super PAC.
- A rival lab’s CEO concluded, in writing, that they lost Pentagon trust because they didn’t do either of those things.
That is your industry’s safety model.
For builders, the practical lesson isn’t “politics bad, research good.” It’s: if your deployment assumptions ignore the influence layer, you’re building a fantasy product. The real spec for “AI alignment” in 2026 includes a line item for FEC filings.
And if you ever wonder which AI narrative will win — open, cautious, militarized, whatever — don’t just read the next safety memo.
Follow the next eight‑figure donation.
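That part you can automate. Here’s a minimal sketch against the public OpenFEC API; the endpoint and field names follow the public docs as best I recall, and DEMO_KEY is rate‑limited, so verify against api.open.fec.gov before relying on any of it.

```python
# Minimal sketch: pull itemized contributions by contributor name from the
# public OpenFEC API. Endpoint and field names follow the public docs as of
# this writing -- verify at https://api.open.fec.gov/developers/ before use.
import requests

def big_checks(contributor: str, min_amount: float = 1_000_000):
    resp = requests.get(
        "https://api.open.fec.gov/v1/schedules/schedule_a/",
        params={
            "api_key": "DEMO_KEY",            # rate-limited demo key; get your own
            "contributor_name": contributor,  # FEC stores names as "LAST, FIRST"
            "sort": "-contribution_receipt_amount",
            "per_page": 20,
        },
        timeout=30,
    )
    resp.raise_for_status()
    for r in resp.json().get("results", []):
        amount = r.get("contribution_receipt_amount") or 0
        if amount >= min_amount:
            yield amount, (r.get("committee") or {}).get("name"), r.get("contribution_receipt_date")

for amount, committee, date in big_checks("brockman, greg"):
    print(f"${amount:,.0f} -> {committee} on {date}")
```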
Key Takeaways
- The Greg Brockman donation is verified and enormous; it’s better modeled as strategic capex than “personal politics.”
- Amodei’s memo is a complaint about losing the influence game, not just about OpenAI’s morals.
- Executive political donations quietly determine which “AI safety” story gets embedded in law and contracts.
- AI firms making no big donations are choosing weaker levers by design, even if their safety research is stronger.
- To understand future AI power, track FEC data and contract awards at least as closely as benchmark charts.
Further Reading
- Schwarzman, OpenAI’s Brockman Boost $1.02 Billion Trump War Chest (Bloomberg) — Documents Brockman’s $25M gift to a pro‑Trump super PAC and situates it among other large tech donations.
- Read: Anthropic CEO’s Memo Attacking OpenAI’s ‘Mendacious’ Pentagon Announcement (The Information) — Primary reporting and excerpts from Amodei’s internal memo on OpenAI, Trump, and Pentagon “safety theater.”
- Anthropic CEO Dario Amodei Calls OpenAI’s Messaging ‘Straight Up Lies’ (TechCrunch) — Accessible overview of the memo and industry fallout.
- Sam Altman, OpenAI and the Pentagon deal: coverage & context (The Guardian) — Adds context on donations, Trump ties, and Amodei’s “real reasons” argument.
- The Decade‑Long Feud Shaping the Future of AI (Wall Street Journal) — Deep background on the OpenAI–Anthropic split, including Brockman’s AGI‑to‑governments pitch.
Also see our pieces on the Greg Brockman donation in the context of Pentagon posture, and on how AI executives use DC access and lobbying trips as a core strategy.
Originally published on novaknown.com