The last article on this site went out February 25th. Twelve days later, it feels like a different industry. OpenAI raised $110 billion. The US government started kicking Anthropic out of federal contracts. GPT-5.4 shipped with a million-token context window. Block fired 40% of its workforce. Karpathy open-sourced an autonomous research agent. A lot happened. Here is all of it.
The $110 Billion Question
On February 27th, OpenAI closed what is now the largest private funding round in history: $110 billion, led by Amazon and Nvidia, with SoftBank also participating. The pre-money valuation came in at $730 billion.
To put that number in context: OpenAI is now valued higher than most Fortune 50 companies. The round expands its AWS distribution capacity and deepens its Nvidia infrastructure pipeline, which matters a great deal given the compute constraints every frontier lab is bumping up against.
The same week, self-driving startup Wayve raised $1.2 billion in Series D funding at an $8.6 billion valuation. Nvidia, Uber, and a group of automakers backed the round, with Uber retaining an option to invest up to $300 million more. Wayve is targeting public robotaxi trials in London in 2026. Autonomous vehicles are not dead — they just needed more capital and more patience than anyone expected.
The Pentagon Deal and the Anthropic Fallout
This is the story of the fortnight. On February 28th, OpenAI signed a deal with the US Pentagon to provide AI models for classified, cloud-only deployment. The deal came with three explicitly stated red lines — uses OpenAI said it would not support — and the company signed.
Anthropic did not. After refusing to allow Claude to be used for fully autonomous weapons targeting and mass domestic surveillance, the Trump administration designated Anthropic a "supply-chain risk" and ordered a federal phase-out of its products across all agencies over six months.
The fallout was swift. The State Department shut down its Anthropic contract and migrated its internal chatbot (StateChat) to OpenAI's GPT-4.1. Treasury Secretary Scott Bessent announced that Treasury was ending all Anthropic usage. The Department of Health and Human Services directed employees to switch to ChatGPT or Gemini. Around 100 engineers who had been using Claude for coding assistance moved to OpenAI Codex and Google Gemini. Fannie Mae, Freddie Mac, and the Federal Housing Finance Agency followed suit.
OpenAI moved quickly to fill the gap. Not everyone inside the company was comfortable with that. On March 7th, Caitlin Kalinowski, OpenAI's robotics and hardware lead, resigned specifically in response to the Pentagon partnership. She was not the only one with concerns, but she was the most senior departure to go public with a reason.
Anthropic's position is that it will not remove safety constraints to win government contracts. That is an expensive stance in the short term. Whether it proves to be a long-term advantage depends on how the broader political environment evolves and whether enterprise buyers outside government care about the distinction.
GPT-5.4: The One With a Million-Token Context
On March 5th and 6th, OpenAI released GPT-5.4 and GPT-5.4 Pro across ChatGPT, the API, and Codex. It is the most capable model OpenAI has shipped to date.
The headline number is the context window. GPT-5.4 supports up to one million tokens in the API. That is a genuine category change, not a marketing increment. The previous practical ceiling was 128K to 200K tokens depending on the version. At one million tokens you can feed in entire codebases, large document sets, or months of conversation history in a single request.
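At that scale, prompt assembly becomes a budgeting problem rather than a retrieval problem. Here is a minimal sketch of packing a repository into a single request, assuming a rough four-characters-per-token heuristic (real tokenizers vary); the one-million-token figure comes from the article, and no actual API call is made:

```python
from pathlib import Path

TOKEN_BUDGET = 1_000_000   # context window cited in the article
CHARS_PER_TOKEN = 4        # rough heuristic; real tokenizers vary by language

def pack_repo(root: str, budget: int = TOKEN_BUDGET) -> str:
    """Concatenate source files into one prompt, stopping at the token budget."""
    parts, used = [], 0
    for path in sorted(Path(root).rglob("*.py")):
        text = path.read_text(errors="ignore")
        cost = len(text) // CHARS_PER_TOKEN + 1
        if used + cost > budget:
            break              # budget exhausted; stop packing
        parts.append(f"# --- {path} ---\n{text}")
        used += cost
    return "\n".join(parts)
```

The point of the sketch is what disappears: at 128K tokens you need chunking and retrieval machinery to decide what to include; at a million you can often just pack everything and let the model sort it out.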
Beyond context, the model natively operates computers and software to complete tasks across applications. Computer use is no longer a bolt-on tool call — it is a core capability. You describe a goal, the model determines which applications to open and what to do in them.
The accuracy numbers are also meaningful. GPT-5.4 makes 33% fewer factual errors than GPT-5.2 and uses fewer tokens to solve problems, which translates directly to faster and cheaper inference. On benchmarks, it scored 83% on GDPval (knowledge-work tasks) and set new records on OSWorld-Verified and WebArena Verified, which measure real software operation.
Two variants: GPT-5.4 Thinking for deep reasoning tasks, GPT-5.4 Pro for high-performance production workloads. Neither is on the free tier. Enterprise and API access was live at launch; consumer tiers are rolling out this week.
Cursor Goes Event-Driven
Cursor launched Automations during the same week. The core shift: instead of AI agents being triggered by prompts, they are now triggered by events. A file changes, a test fails, a PR gets opened — the agent responds automatically without waiting for a human to say "go."
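The pattern itself is older than Cursor's implementation. A minimal sketch of event-driven agent dispatch, not Cursor's actual API (the event names and handler shape here are illustrative assumptions):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal event-driven dispatcher: agents subscribe to events, not prompts."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def on(self, event: str, handler: Callable) -> None:
        """Register an agent handler for an event type."""
        self._handlers[event].append(handler)

    def emit(self, event: str, payload: dict) -> list:
        # Every subscribed agent runs automatically; no human "go" required.
        return [h(payload) for h in self._handlers[event]]

bus = EventBus()
bus.on("test_failed", lambda p: f"agent: investigating {p['test']}")
results = bus.emit("test_failed", {"test": "test_login"})
```

The inversion is the whole story: the human stops being the event source, and the agent's trigger surface becomes the development environment itself.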
This is a meaningful step toward ambient coding assistants that operate in the background and surface results rather than waiting for instructions. Combined with a million-token context window on the model side, the direction of travel is clear: AI is becoming infrastructure, not a tool you pick up and put down.
The Layoffs Are Here
February 26th: Block announced it is cutting more than 4,000 employees, roughly 40% of its total workforce. CEO Jack Dorsey was direct about the reason — AI tools now enable smaller teams to do more. Block is not unique in making this calculation. It is just one of the first large companies to restructure this aggressively and say the quiet part out loud.
March 2nd to 8th: Oracle revealed plans to cut up to 30,000 jobs to fund its AI data center buildout. Oracle is betting that the infrastructure layer of AI is where the durable value will be captured, and it is restructuring its workforce accordingly.
IBM's 2026 X-Force Threat Intelligence Index, released February 25th, added a different data point: application exploits rose 44% year over year, with North America accounting for 29% of all cases. The number of active extortion groups hit 109 in 2025. AI is not just changing how companies operate — it is changing how they get attacked.
Open Source Had a Strong Week Too
Alibaba's Qwen team released Qwen3.5-4B and Qwen3.5-9B, both under Apache 2.0 licenses on Hugging Face and ModelScope. Qwen3.5-4B is positioned as a strong multimodal base for lightweight agents with a 262K token context window. Qwen3.5-9B is a compact reasoning model that Alibaba claims outperforms OpenAI's open-source gpt-oss-120B on key third-party benchmarks. Open-weight models continue to close the gap with frontier closed models.
On March 9th, Andrej Karpathy open-sourced AutoResearch, a project for running AI-driven research loops on a small single-GPU setup. The agent modifies code and guidance files, runs short training experiments, evaluates results, and iteratively keeps improvements overnight — autonomously. It is a glimpse of what research pipelines look like when you hand them to agents.
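The loop described — modify, run a short experiment, evaluate, keep only improvements — is structurally a greedy hill climb. A toy sketch of that skeleton, with a stand-in objective instead of a real training run (this is not AutoResearch's actual code):

```python
import random

def research_loop(evaluate, mutate, config, steps=200, seed=0):
    """Greedy improvement loop: propose a change, keep it only if the score improves."""
    random.seed(seed)
    best, best_score = config, evaluate(config)
    for _ in range(steps):
        candidate = mutate(best)      # "modify code and guidance files"
        score = evaluate(candidate)   # "run a short training experiment"
        if score > best_score:        # keep improvements, discard the rest
            best, best_score = candidate, score
    return best, best_score

# Toy stand-in for a training experiment: maximize -(x - 3)^2.
evaluate = lambda cfg: -(cfg["x"] - 3.0) ** 2
mutate = lambda cfg: {"x": cfg["x"] + random.uniform(-0.5, 0.5)}
best, score = research_loop(evaluate, mutate, {"x": 0.0})
```

Left running overnight, a loop like this accumulates whatever improvements it stumbles into, which is exactly the "iteratively keeps improvements" behavior the project describes.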
Anthropic's Compute Moat
A detailed analysis published March 6th made the case that Anthropic has quietly built the most diversified and cost-efficient compute architecture among frontier labs. OpenAI remains almost entirely dependent on Nvidia. Microsoft's internal chip program is years behind schedule. Anthropic's approach — combining its Trainium partnership with AWS, its own silicon research, and a multi-vendor strategy — is reportedly enabling it to deliver equivalent model quality at 30% to 60% lower cost per token than competitors.
That kind of margin advantage compounds. Lower cost per token means more training runs for the same budget, which means faster iteration. The compute layer is increasingly where AI races are won or lost.
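A back-of-envelope calculation makes the compounding concrete. Only the 30% to 60% discount range comes from the analysis; the budget, baseline price, and tokens-per-run figures below are hypothetical:

```python
def runs_per_budget(budget: float, cost_per_token: float, tokens_per_run: float) -> float:
    """How many training runs a fixed budget buys at a given cost per token."""
    return budget / (cost_per_token * tokens_per_run)

budget = 1e9      # $1B compute budget (hypothetical)
baseline = 1e-5   # competitor's cost per token in dollars (hypothetical)
tokens = 1e13     # tokens consumed per training run (hypothetical)

base_runs = runs_per_budget(budget, baseline, tokens)          # competitor's run count
low = runs_per_budget(budget, baseline * 0.70, tokens)         # 30% cheaper per token
high = runs_per_budget(budget, baseline * 0.40, tokens)        # 60% cheaper per token
```

Under these assumptions a competitor gets 10 runs while the cheaper lab gets roughly 14 to 25 from the same budget, and every extra run is another shot at a better model.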
Also from Anthropic this week: a new research paper on the labor market impacts of AI, presenting a measurement framework for tracking employment disruption. The honest framing from the paper is that the impacts may eventually become unmistakable but are currently difficult to isolate from other economic factors. Worth reading if you are thinking about where this goes over the next three to five years.
The Bigger Picture
Twelve days. Two major funding rounds totaling over $111 billion. A government-level standoff over AI ethics and weapons use. The most capable language model ever shipped. A 40% workforce reduction at a major fintech. Open-source models closing the gap with frontier closed models. A hardware executive resigning over a Pentagon deal.
The pace is not slowing. If anything, the pressure is increasing — on labs to pick sides, on companies to restructure around AI, and on developers to figure out which tools are worth trusting for the long haul.
We are building at BuildrLab with the assumption that the tooling layer matters as much as the models. That is still where we are placing our bets. More soon.