Originally published at allenarch.dev
On November 7, 2025, Nvidia CEO Jensen Huang made it official: Blackwell chips are a no‑sell to China.
"Currently, we are not planning to ship anything to China," Huang told Reuters. No active discussions. No workarounds. No China‑spec'd variants coming.
Meanwhile, Beijing fired back with its own directive: state‑funded data centers must switch to domestic AI chips. Projects less than 30% complete? Pull out the foreign silicon or lose funding.
The AI compute map just split in two.
Key developments this week:
- Nvidia says it has "zero share" of China's AI datacenter compute market (Reuters, Nov 7)
- U.S. poised to block even scaled‑down Blackwell variants, ending the China‑only SKU playbook (Reuters, Nov 7)
- China mandates domestic chips in state‑funded DCs; projects <30% complete must remove foreign hardware (Reuters, Nov 7)
- Senators urge White House to maintain the ban on advanced AI chips to China (The Verge, Nov 6)
- Huawei's Ascend 910C readies mass shipments as China's domestic alternative (Reuters, Apr 21)
Here's what this actually means
Nvidia's China revenue just evaporated. The money's going somewhere else.
Nvidia's China revenue: gone.
The company already excluded China H20 shipments from its guidance back in August. CFO Colette Kress told analysts the "bottleneck is diplomacy," not demand. Now it's official: zero datacenter compute share in the world's second‑largest AI market.
To put that in perspective: China accounted for 21.4% of Nvidia's total revenue in FY2023 before export controls tightened. By FY2025, that share dropped to 13.1% ($17.1 billion), and datacenter-specific sales are now effectively zero. That's an estimated $15+ billion in annual datacenter revenue evaporated.
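A quick sanity check on those figures, using Nvidia's publicly reported totals (roughly $27B revenue in FY2023 and $130.5B in FY2025; treat both as approximate):

```python
# Rough check of the revenue figures above (totals from Nvidia's public
# filings; approximate).
fy2023_total = 26.97e9   # Nvidia FY2023 total revenue, USD
fy2025_total = 130.5e9   # Nvidia FY2025 total revenue, USD

china_fy2023 = 0.214 * fy2023_total   # ~ $5.8B
china_fy2025 = 0.131 * fy2025_total   # ~ $17.1B, matching the figure cited

print(f"FY2023 China revenue: ~${china_fy2023 / 1e9:.1f}B")
print(f"FY2025 China revenue: ~${china_fy2025 / 1e9:.1f}B")
# Note: the *share* fell even though the dollar figure grew, because total
# revenue nearly 5x'd over the same period. It's the datacenter slice of
# that China revenue that has now gone to zero.
```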
Source: Reuters, Aug 27 | Nvidia FY2025 financials
Demand doesn't disappear; it relocates.
Blackwell capacity that would've gone to China is being rerouted to:
- U.S. hyperscalers (Microsoft, Google, Meta) scaling trillion‑parameter models
- Sovereign AI programs in the Middle East (UAE, Saudi Arabia buying at premium)
- European model labs (Mistral, Aleph Alpha) competing for slots
The result? Lead times stretched, prices firm, and buyers with early allocations holding leverage.
China accelerates substitution.
Huawei's Ascend 910C is positioning as the domestic answer. Reuters reported in April that Huawei is prepping mass shipments, with plans to double output. The chip targets workloads similar to Nvidia's H100: not Blackwell-level, but "good enough" for many Chinese AI labs.
The catch: software ecosystem. Nvidia's CUDA has 15+ years of tooling, libraries, and developer mindshare. Huawei's CANN framework is catching up, but the gap is real. Training a frontier model on Ascend means rewriting code, debugging new performance profiles, and accepting slower iteration cycles for now.
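To make that migration cost concrete, here's a minimal sketch of what the device swap looks like in PyTorch, assuming Huawei's torch_npu adapter (the Ascend PyTorch plugin). The one-line change is the easy part; the real cost sits below it:

```python
import torch
# On Ascend, Huawei ships a PyTorch plugin; importing it registers the
# "npu" device type. (Assumes the torch_npu package is installed.)
import torch_npu  # noqa: F401

# device = torch.device("cuda:0")   # the Nvidia/CUDA path
device = torch.device("npu:0")      # the Ascend/CANN path

model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)
y = model(x)  # dispatched through CANN kernels instead of cuBLAS/cuDNN

# The device swap is one line. What doesn't swap for free: custom CUDA
# kernels, Triton code, CUDA-graph tricks, and performance assumptions
# tuned to Nvidia hardware. That's the rewriting and re-debugging the
# paragraph above describes.
```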
Source: Reuters, Apr 21
How we got here
The current export controls trace back to October 2022, when the Bureau of Industry and Security (BIS) first restricted advanced computing chips and semiconductor manufacturing equipment to China.
Oct 2022: Initial rules target GPUs for AI and supercomputing. Nvidia scrambles to create China‑spec variants (A800, H800) that technically comply but deliver similar performance.
Oct 2023: BIS tightens the rules. New thresholds close loopholes. Performance metrics now account for interconnect bandwidth, not just raw FLOPS. The A/H‑series workaround gets nuked.
Source: BIS, Oct 7, 2022 · BIS, Oct 17, 2023
Nov 2025: Blackwell enters the picture. This time, no workarounds. The Information reports the U.S. will block even scaled‑down variants before they ship. Huang confirms: not planning to ship anything.
The playbook that worked for A/H‑series is dead.
Who's winning (and who's scrambling)
The clear winners: anyone who locked Blackwell capacity early.
Microsoft, Google, and Meta secured multi-billion-dollar commitments months ago. Now that China demand is offline, they're not competing for scarce supply. Lead times for new buyers? Stretching into 2026.
Sovereign AI programs in the Middle East are also winning. UAE and Saudi buyers are paying a premium for early slots, betting that AI infrastructure is a strategic national priority. When compute is the new oil, you pay whatever it takes.
China: forced sprint to self-sufficiency.
Huawei's Ascend 910C becomes the default. But here's the reality check: it's not Blackwell-class. Reported performance is around 60% of an H100 on inference workloads, and training large-scale models remains more challenging.
The bigger issue is software. CUDA has 15 years of ecosystem. CANN (Huawei's framework) works, but you're rewriting code, debugging new bottlenecks, and accepting slower iteration. For Chinese AI labs racing to keep up with frontier models, that's a meaningful handicap.
Still, China has options the U.S. doesn't: state funding, captive market, and willingness to play the long game. Ascend might be 60% of H100 performance today, but in 3-5 years? The gap could narrow.
Nvidia: short-term pain, long-term TBD.
Zero China datacenter revenue hurts. But demand ex-China is strong enough that Blackwell is supply-constrained anyway. The real question is 2027-2028: if China builds a competitive domestic stack, does Nvidia lose strategic leverage permanently?
For now, Nvidia is fine. The company guided for record revenue without China. But this is the first time in modern tech history that the world's second-largest market is completely off-limits for cutting-edge silicon.
Meanwhile, Google is building something different
While everyone's watching the Nvidia-China standoff, Google is quietly advancing its own compute stack, and it might be the smartest hedge in AI infrastructure.
The setup: Axion CPUs + next-gen TPU pods.
Google's been pushing its custom silicon for years (TPUs have been around since 2016), but 2025 marks a shift in strategy. The company is positioning its "AI Hypercomputer" platform not as an Nvidia replacement, but as a workload-optimized alternative.
The pitch: why rent scarce Nvidia capacity at a premium when you can run certain workloads faster and cheaper on purpose-built hardware?
What TPUs are actually good at.
TPUs excel at specific tasks:
- Inference at scale (serving models to millions of users, Google's bread and butter)
- Training Transformer-based models (the architecture Google invented)
- Batch processing where you can saturate the chip's matrix multiply units
What they're not great at: general-purpose GPU workloads, highly dynamic graphs, or code that's deeply optimized for CUDA.
The trade-off: if your workload fits TPU's sweet spot, you get better price-performance. If it doesn't, you're rewriting code and debugging performance regressions.
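To make the "sweet spot" concrete, here's a minimal JAX sketch of the kind of batched matrix-multiply workload that saturates a TPU's matrix units. The same code runs unchanged on CPU, GPU, or TPU because XLA compiles it for whatever backend is present (an illustrative sketch, not Google's production serving code):

```python
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for the available backend: CPU, GPU, or TPU
def attention_scores(q, k):
    # Batched matmul + softmax: the dense linear algebra TPU matrix
    # multiply units are built to saturate.
    scale = jnp.sqrt(q.shape[-1]).astype(q.dtype)
    return jax.nn.softmax(q @ k.transpose(0, 2, 1) / scale, axis=-1)

q = jnp.ones((8, 128, 64), dtype=jnp.bfloat16)  # bfloat16 is TPU-native
k = jnp.ones((8, 128, 64), dtype=jnp.bfloat16)

print(attention_scores(q, k).shape)  # (8, 128, 128)
print(jax.devices())  # [TpuDevice(...)] on a TPU VM, [CpuDevice(...)] locally
```

The portability cuts both ways: code like this moves between backends easily, but code built around hand-written CUDA kernels does not, which is exactly the trade-off described above.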
Why this matters right now.
Nvidia Blackwell supply is constrained. Lead times are 12-18 months. Prices are firm.
Google TPU? Available through Google Cloud with no waitlist. Pricing is competitive (and Google controls it).
For companies building on Transformer models (basically everyone doing LLMs, vision transformers, or diffusion models), TPU is a viable alternative, especially if you're already on GCP.
The strategic angle.
Google isn't trying to beat Nvidia in the open market. It's creating an alternative ecosystem:
- Hyperscalers (Microsoft, Meta) lock in Nvidia capacity through multi-billion-dollar deals
- Google builds proprietary stack, keeps capacity for itself and select cloud customers
- Enterprise buyers caught in the middle get optionality: pay Nvidia premium or retool for TPU
This is the same playbook AWS ran with Graviton (Arm CPUs) and Trainium (AI training chips). Build your own silicon, control your destiny, offer customers a lower-cost path that happens to lock them into your cloud.
The difference: Google's TPU has a 9-year head start and real production workloads (Search, YouTube, and Translate all run on TPU).
The catch: ecosystem lock-in.
Switching from Nvidia to TPU isn't trivial. You're trading:
- CUDA (15+ years, massive library ecosystem) for JAX/XLA (Google's framework)
- Portable code that runs anywhere for Google Cloud vendor lock-in
- Broad hardware support for optimized-but-narrow use cases
For startups, that's a risky bet. For Google Cloud customers already committed? It's a natural hedge against Nvidia supply risk.
The bottom line on TPU.
Google's compute push isn't about displacing Nvidia. It's about carving out workloads where purpose-built silicon wins on economics and availability.
As Nvidia prioritizes hyperscaler deals and China gets locked out, Google's alternative stack becomes more attractive, not because it's better across the board, but because it's available and predictable.
That's a compelling pitch when the alternative is waiting 18 months for Blackwell allocation.
What this means if you're...
If you're building AI products:
Blackwell supply is tight. If you don't have allocation locked, you're either paying a premium on the secondary market or waiting until mid-2026. Budget accordingly.
Alternatives exist (Google TPU pods, AMD MI300X, even cloud providers reselling capacity), but switching costs are real. CUDA code doesn't magically run on TPU.
If you're in China:
Your compute options just narrowed. Ascend 910C is the domestic play, but expect tooling friction. If you're training frontier models, prepare for longer iteration cycles and higher engineering overhead.
The upside: China's government is pouring money into domestic silicon. HBM production, packaging, software toolchains: all getting funded. The ecosystem will mature; the question is how fast.
If you're watching geopolitics:
This is tech decoupling in real-time. Two separate AI compute ecosystems are forming:
- West: Nvidia/AMD/Intel + Google TPU, optimized for performance, backed by hyperscaler capital
- China: Ascend/Kunlun/Cambricon, optimized for sovereignty, backed by state funding
They'll diverge on hardware, software, and standards. Interoperability will fade. Think of it like 5G equipment: Huawei and Ericsson serve different markets now.
Four things to watch
1. BIS rule updates (Q1 2026)
Watch for new performance thresholds or licensing carve-outs. If BIS tightens further, even older-gen GPUs (A100, H100) could get restricted. If they ease, limited Blackwell variants might ship under strict monitoring.
2. China procurement mandates
Beijing's directive to state-funded DCs is just the start. Expect expanded mandates: provincial governments, SOEs, universities. The domestic chip push will accelerate.
3. Nvidia's product segmentation
Will Nvidia create Blackwell-lite SKUs for markets outside China but below full-spec? Or is the company done with workaround products? Product roadmap will signal strategy.
4. Huawei shipment volumes
If Ascend 910C scales to tens of thousands of units per quarter, China's substitution is working. If volumes stay low, the domestic stack isn't ready for prime time.
Why Blackwell actually matters
Blackwell isn't just "faster H100." It's a generational leap designed for trillion-parameter models and real-time inference.
Key specs:
- 208 billion transistors across two reticle-limited dies, connected via 10 TB/s chip-to-chip link
- 2nd-gen Transformer Engine with FP4 precision support (faster inference, lower memory; see the sketch after this list)
- 5th-gen NVLink enabling rack-scale systems (72 GPUs in a single coherent domain)
- Confidential computing for secure multi-tenant AI workloads
- Native decompression engine for database analytics (non-AI workload expansion)
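On the FP4 bullet: Blackwell's Transformer Engine handles low-precision formats in hardware, but the basic memory/precision trade is easy to see with a toy 4-bit quantizer. This simulates plain integer quantization in NumPy; Nvidia's actual FP4 (e2m1 with fine-grained scaling) is more sophisticated:

```python
import numpy as np

def quantize_4bit(w):
    # Toy symmetric 4-bit quantization: one scale for the whole tensor,
    # 16 representable levels. Only illustrates the trade-off, not
    # Blackwell's FP4 format.
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_4bit(w)
err = np.abs(w - q.astype(np.float32) * scale).mean()

print(f"mean abs reconstruction error: {err:.4f}")
print(f"weights: {w.nbytes // 2**20} MiB in fp32, "
      f"~{w.nbytes // 8 // 2**20} MiB at 4 bits (packed two per byte)")
```

An 8x smaller weight footprint means more of the model fits in HBM and far less bandwidth per token, which is where the "faster inference, lower memory" claim comes from.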
Real-world impact:
The GB200 NVL72 system (72 Blackwell GPUs + 36 Grace CPUs in a rack) can serve trillion-parameter models in real-time. That's the scale needed for GPT-5-class systems or multi-modal models processing video + text simultaneously.
For context: training GPT-4 took thousands of A100 GPUs over months. Blackwell systems can do similar runs in weeks, at lower power draw per FLOP.
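A back-of-envelope calculation shows why "months versus weeks" is plausible. Every number below is an illustrative assumption: the GPT-4 compute figure is a widely circulated external estimate, the A100 figure is its datasheet BF16 peak, and the Blackwell speedup is a stand-in, not a measured benchmark:

```python
# All figures are rough assumptions for illustration only.
total_flops = 2e25   # external estimates put a GPT-4-scale run around here
a100_bf16 = 312e12   # A100 dense BF16 peak, per Nvidia's datasheet
mfu = 0.4            # assumed achieved utilization (model FLOPs utilization)
gpus = 10_000

a100_days = total_flops / (gpus * a100_bf16 * mfu) / 86_400
print(f"A100 cluster: ~{a100_days:.0f} days")  # on the order of months

speedup = 10         # assumed per-GPU gain at low precision on Blackwell
print(f"Blackwell-class cluster: ~{a100_days / speedup:.0f} days")  # weeks
```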
This is why the export ban matters. China's AI labs just lost access to the fastest iteration cycles in the industry.
Source: Nvidia Blackwell Architecture · DGX B200 · GB200 NVL72
The real question nobody's asking
The AI compute map just redrew itself along geopolitical lines.
Nvidia's Blackwell stays in the West. China builds its own stack with Huawei Ascend. Two ecosystems, increasingly incompatible.
For Nvidia, it's short-term pain (zero China revenue) but manageable; ex-China demand is strong enough to absorb supply. For China, it's forced acceleration of domestic alternatives that deliver roughly 60% of H100 performance today but could close the gap in 3-5 years.
The real wildcard: what happens when China's AI labs, locked out of cutting-edge hardware, start optimizing models for lower compute budgets? Efficiency gains could flip the script. Compute abundance breeds inefficiency; scarcity breeds innovation.
We'll know by 2027 whether this export control strategy worked or backfired.
Resources:
- Nvidia CEO: no plans to ship to China
- U.S. to block scaled-down AI chip variants to China
- White House confirms Blackwell won't be sold to China
- Nvidia still growing in China, but uncertainty clouds outlook
- Senators urge Trump to continue banning Nvidia chips in China
- Nvidia Blackwell a no-sell in China as trade deal fails
- Huawei readies new AI chip for mass shipment as China seeks Nvidia alternatives
- BIS: Commerce Implements New Export Controls (Oct 2022)
- BIS: Commerce Strengthens Export Controls (Oct 2023)
- Nvidia Blackwell Architecture
- Nvidia DGX B200
- Nvidia GB200 NVL72
Further Reading:
- Meta's $75B AI Infrastructure Bet
- After Meta's $75B Bet: A $30B Bond Deal and Wall Street's Harsh Reality Check
- DeepSeek-OCR: When a Picture Is Worth 10× Fewer Tokens
Connect
- GitHub: @0xReLogic
- LinkedIn: Allen Elzayn