A guide to Cloud TPU generations, what changed between them, and how to choose the right one for your workload
Image source: Google Cloud
If you've looked at Google Cloud TPU pricing or documentation recently, you've probably noticed there are a lot of versions to choose from: TPU v5e, v5p, v6e, Ironwood, and now TPU 8t and 8i. Each one has different specs, different use cases, and different tradeoffs. This post walks through every major TPU generation, what changed at each step, and what those changes mean for the people running workloads on them.
The building blocks: what's inside a TPU chip
Before going generation by generation, it helps to know what the key components are, because the names come up repeatedly across every version.
Image source: Google Cloud
Matrix Multiply Unit (MXU). This is the core compute engine inside every TPU TensorCore. It performs the multiply-and-accumulate operations that power neural network math. On most TPU generations up through v5p, the MXU is a 128x128 systolic array - 16,384 multiply-accumulators working simultaneously. Starting with Trillium (v6e), the MXU expanded to 256x256, quadrupling the operations per cycle. (A back-of-envelope throughput sketch follows this list of components.)
TensorCore. A TensorCore contains one or more MXUs, a vector processing unit (VPU), and a scalar unit. Depending on the generation, a single TPU chip may have one or two TensorCores.
High Bandwidth Memory (HBM). This is the on-chip memory that stores model weights and activations. HBM capacity and bandwidth are often the real bottleneck for large models, not compute. Each generation has brought more HBM and faster access speeds.
Inter-Chip Interconnect (ICI). The network that connects chips inside a pod. ICI bandwidth determines how fast chips can synchronize gradients during training. Higher bandwidth means less time waiting for communication and more time computing.
SparseCore. Introduced in TPU v4, SparseCores are specialized processors for embedding operations - the kind that power recommendation systems and large vocabulary models. v5p and Ironwood have four SparseCores per chip. v6e has two.
Topology. How chips are wired together in a pod. Earlier generations use a 2D torus (chips connect to four neighbors). Starting with v4, Google moved to a 3D torus for larger-scale pods, which reduces the maximum number of hops between any two chips and cuts communication latency.
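To make the MXU numbers concrete, here is a minimal sketch of how systolic array size translates into peak throughput. The clock speed and MXU count below are round illustrative assumptions, not published specs for any particular chip:

```python
# Back-of-envelope peak throughput from MXU dimensions.
# Clock speed and MXU count are illustrative assumptions, not real specs.
def peak_tflops(mxu_dim: int, mxus_per_chip: int, clock_ghz: float) -> float:
    macs_per_cycle = mxu_dim * mxu_dim * mxus_per_chip
    # Each multiply-accumulate counts as 2 floating-point operations
    return macs_per_cycle * 2 * clock_ghz * 1e9 / 1e12

# Doubling each MXU dimension quadruples the work done per cycle
print(peak_tflops(128, 1, 1.0))  # ~32.8 TFLOPS for one 128x128 MXU at 1 GHz
print(peak_tflops(256, 1, 1.0))  # ~131.1 TFLOPS for one 256x256 MXU at 1 GHz
```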
Generation by generation
TPU v1 (2015, internal only)
The first TPU was built for one purpose: inference. It was not publicly available and could not train models. The chip contained a 256x256 systolic array of 8-bit multiply-accumulators, yielding 92 TOPS of INT8 compute. It consumed about 40 watts, which was remarkably efficient for its time.
Google kept it secret for over a year. When Sundar Pichai announced it at Google I/O 2016, he said it had already been running in Google's data centers for more than a year, powering services like Search, Maps, and Street View. The entire motivation for building it was to avoid doubling Google's data center capacity to handle growing neural network inference demand.
Not available on Google Cloud. Historical context only.
TPU v2 (2017)
TPU v2 was the first generation capable of training neural networks. This required a fundamental change: switching from 8-bit integer math to bfloat16, a 16-bit floating-point format that Google invented specifically for this purpose. BF16 retains the same 8-bit exponent as FP32, which gives it the wide dynamic range training needs, while cutting memory use in half compared to FP32.
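You can see the dynamic range tradeoff directly in JAX. A minimal illustration, not TPU-specific; it works anywhere JAX runs:

```python
import jax.numpy as jnp

# BF16 keeps FP32's 8-bit exponent, so very large magnitudes survive the cast.
# FP16's 5-bit exponent overflows at roughly 65,504.
big = jnp.array(3e38, dtype=jnp.float32)
print(big.astype(jnp.bfloat16))  # ~3e38: still representable in bfloat16
print(big.astype(jnp.float16))   # inf: out of float16's range
```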
Each v2 chip delivered approximately 45 TFLOPS of BF16 compute. A single board held four chips. A full v2 Pod contained 256 chips (512 TensorCores) connected by a 2D torus ICI.
TPU v2 is no longer recommended for new workloads but represents an important milestone: it established bfloat16 as a training standard that the broader ML community eventually adopted.
TPU v3 (2018)
TPU v3 more than doubled the per-chip compute of v2, reaching approximately 123 TFLOPS of BF16 per chip (Google quoted 420 TFLOPS per four-chip board). To handle the increased power density, Google switched to liquid cooling, the first TPU generation to require it.
Each v3 chip contained two TensorCores, each with two 128x128 MXUs. A full v3 Pod scaled to 1,024 chips in a 2D torus. The v3 architecture is described in detail in Google's paper "A Domain Specific Supercomputer for Training Deep Neural Networks."
TPU v3 is documented on Google Cloud and remains available in some configurations, though newer generations offer substantially better performance per dollar.
TPU v4 (2021)
TPU v4 was a major architectural shift. Performance more than doubled over v3, and Google moved from a 2D torus to a 3D torus interconnect. In a 3D torus, each chip connects to six neighbors instead of four. For a 4,096-chip pod, this reduces the maximum number of hops between any two chips from roughly 128 to about 48, which meaningfully cuts all-reduce latency during distributed training.
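For intuition on why the 3D layout helps, here is a small sketch of worst-case hop counts for the same 4,096 chips arranged as a 64x64 grid versus a 16x16x16 grid. Whether wraparound (torus) links are counted changes the absolute numbers, so both variants are shown; the shapes are illustrative:

```python
# Diameter (worst-case hop count) of k-dimensional grid interconnects.
# Illustrative shapes: 4,096 chips as 64x64 (2D) vs 16x16x16 (3D).
def mesh_diameter(dims):   # no wraparound links
    return sum(d - 1 for d in dims)

def torus_diameter(dims):  # wraparound halves each dimension's worst case
    return sum(d // 2 for d in dims)

print(mesh_diameter([64, 64]), mesh_diameter([16, 16, 16]))    # 126 vs 45 hops
print(torus_diameter([64, 64]), torus_diameter([16, 16, 16]))  # 64 vs 24 hops
```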
A single v4 Pod contained 4,096 chips. Google also introduced SparseCore in v4, four dedicated processors per chip optimized for embedding-heavy workloads like recommendation models.
TPU v4 has been compared favorably to the NVIDIA A100 in benchmarks: Google's 2023 paper reported TPU v4 running 1.2x to 1.7x faster on ML workloads depending on the model, while using 1.3x to 1.9x less power. TPU v4 also has an inference-optimized variant, v4i, that does not require liquid cooling.
TPU v5e (2023)
TPU v5e is Google's cost-optimized fifth-generation chip. The "e" stands for efficient. Where v5p prioritized maximum performance, v5e was designed to minimize cost per inference query and cost per training FLOP for medium-scale jobs.
Each v5e chip contains one TensorCore with four MXUs, a vector unit, and a scalar unit. The chip uses a 2D torus topology and scales to 256 chips per pod. Google returned to air cooling for v5e. HBM capacity is 16 GB per chip with 819 GB/s bandwidth.
TPU v5e delivers 2.5x better price-performance than v4 for inference workloads. It is currently available on Google Cloud and is a practical starting point for teams new to TPUs, especially for serving workloads where cost matters more than peak throughput.
TPU v5p (2023)
TPU v5p is the performance-focused fifth-generation chip, released alongside v5e. The "p" stands for performance. Each chip has two TensorCores, each with four MXUs, giving it roughly double the compute of v5e per chip. HBM capacity is 95 GB per chip with 2,765 GB/s bandwidth - nearly 6x v5e's memory capacity.
v5p uses a 3D torus topology and scales to 8,960 chips per pod with 4,800 Gbps of ICI bandwidth per chip. Google reports v5p trains large language models 2.8x faster than v4 and includes second-generation SparseCores, which deliver 1.9x better performance than v4's SparseCores for embedding-dense models.
v5p is suited for large training runs where you need the largest possible pod size and maximum per-chip compute. Available on Google Cloud in North America (US East region).
TPU v6e, Trillium (2024)
Trillium was announced at Google I/O 2024 and became generally available in late 2024. It is the sixth-generation chip and represents the biggest architectural leap between consecutive generations since v2.
The most significant change: the MXU expanded from 128x128 to 256x256. This quadruples the number of multiply-accumulate operations per cycle. Combined with a higher clock speed, Trillium delivers 4.7x the peak compute of v5e per chip. HBM capacity doubled to 32 GB per chip, and ICI bandwidth doubled to 3,200 Gbps per chip. Trillium is 67% more energy efficient than v5e.
Each v6e chip has one TensorCore with two MXUs (the larger 256x256 array), plus two SparseCores. The topology is a 2D torus scaling to 256 chips per pod, the same footprint as v5e.
On the technical documentation side, Google refers to Trillium as v6e in all APIs and logs. The v6e-8 VM type (all 8 chips attached to a single VM) is optimized specifically for inference, making it easy to serve large models on a single host.
Trillium is available now on Google Cloud in North America (US East), Europe (West), and Asia (Northeast).
TPU v7, Ironwood (2025)
Ironwood is the seventh-generation TPU, announced at Google Cloud Next 2025 and generally available since late 2025. It is purpose-built for inference and large-scale training at the scale needed for frontier models.
Key specs per chip: 4,614 FP8 TFLOPS, 192 GB of HBM3E memory, 7.37 TB/s memory bandwidth, 9.6 Tb/s ICI bandwidth. A full superpod contains 9,216 chips delivering 42.5 FP8 ExaFLOPS. That is 4x better performance per chip over Trillium and 10x over TPU v5p.
Ironwood introduced FP8 as a native precision format, which is critical for inference throughput. It uses a dual-chiplet design: each Ironwood chip contains two TensorCores and four SparseCores connected by a high-speed die-to-die interface. The 3D torus topology returns for large pod configurations, with 3D connectivity for pods of 4x4x4 or larger.
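To get a feel for what FP8 gives up in exchange for throughput, here is a minimal JAX illustration of the e4m3 format's coarse precision (availability of the float8 dtypes depends on your JAX/ml_dtypes version):

```python
import jax.numpy as jnp

# FP8 e4m3: 4 exponent bits, 3 mantissa bits. Max finite value is ~448,
# and only about 2 significant decimal digits survive the round trip.
x = jnp.array([0.0123, 1.5, 300.0], dtype=jnp.float32)
print(x.astype(jnp.float8_e4m3fn).astype(jnp.float32))
# Values snap to the nearest representable FP8 number (e.g. 300 -> 288)
```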
Ironwood is also the first TPU generation where Google used AlphaChip, a reinforcement learning tool, to optimize the physical chip layout.
Anthropic's Claude models train and serve on TPUs. As part of a multi-billion dollar agreement, Anthropic committed to access up to one million Ironwood TPUs through Google Cloud.
Available now in North America (Central) and Europe (West).
TPU 8t and TPU 8i (eighth generation, coming 2026)
Announced at Google Cloud Next 2026, the eighth generation is the first time Google has split the TPU lineup into two chips with distinct architectures for training and inference.
TPU 8t is built for large-scale pre-training and embedding-heavy workloads. A single superpod holds 9,600 chips with 2 petabytes of shared HBM and 121 FP4 ExaFLOPS of compute, nearly tripling per-pod compute versus Ironwood. ICI bandwidth is 19.2 Tb/s per chip, double Ironwood. The new Virgo Network fabric can link over 134,000 chips in a single data center, and theoretically over 1 million chips across sites. TPUDirect RDMA and TPU Direct Storage bypass the host CPU for data movement, effectively doubling bandwidth for large transfers. Google targets 97% goodput on 8t, meaning 97% of compute cycles go toward actual learning. It delivers 2.7x better performance-per-dollar over Ironwood for large-scale training.
TPU 8i is built for post-training and inference. It scales to 1,152 chips per pod and delivers 11.6 FP8 ExaFLOPS. Each chip carries 288 GB of HBM, more than the 8t training chip, and 384 MB of on-chip SRAM - 3x Ironwood's on-chip SRAM. The reason for more memory on the inference chip: large Mixture-of-Experts models at inference time are memory-bandwidth-bound, not compute-bound. The chip serving tokens needs to stream weights and KV-cache faster than the chip training the model. The 8i uses a Boardfly interconnect that reduces maximum network hops from 16 to 7, which reduces all-to-all latency for MoE routing. The Collectives Acceleration Engine (CAE) replaces Ironwood's SparseCores and cuts collective operation latency by 5x. Google reports 80% better performance-per-dollar over previous generations for low-latency inference on large MoE models.
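A rough way to see the memory-bandwidth argument is a minimal roofline sketch in which every generated token streams the model's active weights from HBM once. The parameter counts here are illustrative assumptions, not any specific model:

```python
# Upper bound on decode throughput when weight streaming is the bottleneck.
# Assumes each token reads all active parameters from HBM exactly once and
# ignores KV-cache traffic, so real throughput is lower.
def tokens_per_s_ceiling(active_params_b: float, bytes_per_param: int,
                         hbm_bw_tb_s: float) -> float:
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return hbm_bw_tb_s * 1e12 / bytes_per_token

# Hypothetical MoE with 40B active parameters, served in FP8 (1 byte/param),
# on Ironwood-class HBM bandwidth of 7.37 TB/s
print(f"{tokens_per_s_ceiling(40, 1, 7.37):.0f} tokens/s per chip, at best")
```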
Both chips run on Google's Axion ARM-based CPU host and use fourth-generation liquid cooling. Both are coming later in 2026.
Key architectural trends across generations
A few patterns stand out looking across all eight generations:
MXU size stayed constant for a long time, then jumped. From v2 through v5p, the MXU was 128x128. Trillium (v6e) doubled each dimension to 256x256, quadrupling the multiply-accumulates per cycle. This gave a step-change in throughput rather than incremental gains.
Topology alternates by use case. Cost-efficient chips (v5e, v6e) use 2D torus topologies, which are simpler and scale well to 256 chips. Performance chips (v4, v5p, Ironwood) use 3D torus, which reduces communication latency at larger pod sizes (4,096 to 9,216 chips).
Memory capacity has grown dramatically. From 16 GB per chip on v5e to 192 GB on Ironwood to 288 GB on TPU 8i. Memory capacity is increasingly what determines which models you can run and at what batch size. A back-of-envelope fit check follows this list.
Generation 8 split training and inference. Every prior generation was a single chip asked to handle both workloads. The 8t and 8i split acknowledges that the two jobs have fundamentally different hardware requirements.
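Here is the fit check mentioned above: a minimal sketch estimating how many chips are needed just to hold a model's weights in HBM. It ignores activations, optimizer state, and KV-cache, so treat the results as lower bounds:

```python
import math

# Minimum chips needed just to hold model weights in HBM.
# Ignores activations, optimizer state, and KV-cache: real needs are higher.
def min_chips(params_billion: float, bytes_per_param: int, hbm_gb: int) -> int:
    weights_gb = params_billion * 1e9 * bytes_per_param / 1024**3
    return math.ceil(weights_gb / hbm_gb)

hbm_per_chip = {"v5e": 16, "v6e (Trillium)": 32, "v7 (Ironwood)": 192, "8i": 288}
for gen, gb in hbm_per_chip.items():
    # 70B parameters in bfloat16 (2 bytes each) is roughly 130 GB of weights
    print(f"{gen}: >= {min_chips(70, 2, gb)} chip(s) for weights alone")
```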
Quick reference: which generation for which use case
| Use case | Recommended generation |
| --- | --- |
| Getting started, cost-sensitive inference | TPU v5e |
| Medium training jobs, best perf/watt today | TPU v6e (Trillium) |
| Large training runs, >256 chips | TPU v7 (Ironwood) |
| Frontier model pre-training, 2026 | TPU 8t |
| Agentic AI inference, MoE serving, 2026 | TPU 8i |
Code sample: detecting your TPU version and topology in JAX
Once you have a Cloud TPU VM, this snippet uses JAX to confirm the TPU version, count available devices, and print the topology. It runs on any currently available generation (v5e, v6e, Ironwood).
```python
import jax
import jax.numpy as jnp

# Print the number of TPU devices visible to JAX
print(f"Number of TPU devices: {jax.device_count()}")
print(f"Number of local devices: {jax.local_device_count()}")

# Print device details, including the TPU version
devices = jax.devices()
for i, device in enumerate(devices):
    print(f"Device {i}: {device}")

# Run a simple matrix multiply to confirm the TPU is working.
# On Trillium (v6e) and Ironwood, this uses the 256x256 or larger MXU.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1024, 1024))
y = jax.random.normal(key, (1024, 1024))
result = jnp.dot(x, y)
print(f"\nMatrix multiply output shape: {result.shape}")
print(f"Result device(s): {result.devices()}")

# Check available memory per device -
# useful for estimating what model sizes will fit
for device in jax.devices():
    stats = device.memory_stats()
    if stats:
        total_bytes = stats.get('bytes_limit', 0)
        used_bytes = stats.get('bytes_in_use', 0)
        total_gb = total_bytes / (1024 ** 3)
        used_gb = used_bytes / (1024 ** 3)
        print(f"\nDevice: {device}")
        print(f"  Total HBM: {total_gb:.1f} GB")
        print(f"  In use: {used_gb:.2f} GB")
```
To run this on a Cloud TPU VM:
```bash
# 1. Create a TPU VM (v6e example - swap accelerator-type for other versions)
gcloud compute tpus tpu-vm create my-tpu \
  --zone=us-east1-d \
  --accelerator-type=v6e-8 \
  --version=tpu-ubuntu2204-base

# 2. SSH into the VM
gcloud compute tpus tpu-vm ssh my-tpu --zone=us-east1-d

# 3. Install JAX with TPU support
pip install "jax[tpu]" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html

# 4. Run the script
python tpu_check.py
```
On a v6e-8 (8 chips, single host), you will see 8 devices listed. On a multi-host slice like v6e-32, each VM sees 8 devices and JAX handles coordination across hosts via the ICI.
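If you want to go one step beyond detection, spreading an array across the chips takes only a few lines with JAX's sharding API. A minimal sketch, assuming a single-host slice; the array shape is arbitrary:

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 1D device mesh over every visible TPU chip (8 on a v6e-8 host)
mesh = Mesh(np.array(jax.devices()), axis_names=("chips",))

# Split the rows of a matrix across the mesh: each chip holds 1/Nth of the rows
spec = NamedSharding(mesh, P("chips", None))
x = jax.device_put(jnp.ones((8192, 8192)), spec)

print(x.sharding)                          # shows the mesh and partition spec
print(x.addressable_shards[0].data.shape)  # per-chip shard, e.g. (1024, 8192)
```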
Summary
Each TPU generation has made a specific set of tradeoffs: v5e optimizes cost, v5p maximizes compute per pod, Trillium expanded the MXU and sharply improved energy efficiency, Ironwood added native FP8 and massive HBM, and the eighth generation splits the chip entirely for training and inference. Knowing which generation fits your workload is the difference between overpaying for compute you don't need and hitting memory or bandwidth limits on a chip that isn't designed for what you're running.
If you're just getting started, TPU v5e is the easiest and cheapest entry point. For production inference today, choose Trillium or Ironwood. For the next generation of frontier model training, keep an eye on TPU 8t.
Resources
TPU architecture - Google Cloud Documentation
TPU versions overview - Google Cloud Documentation
TPU v5e documentation
TPU v5p documentation
TPU v6e (Trillium) documentation
TPU v7 (Ironwood) documentation
Introducing Trillium, sixth-generation TPUs - Google Cloud Blog
Introducing Cloud TPU v5p and AI Hypercomputer - Google Cloud Blog
Ironwood: The first Google TPU for the age of inference
Google's eighth-generation TPUs: two chips for the agentic era
Run JAX on Cloud TPU VM - Google Cloud Documentation