From atomic-scale transistors to chips made of light — here is what comes after the 2nm revolution, and why it matters for everything from your smartphone to artificial general intelligence.
In Q4 2025, TSMC confirmed volume production of its N2 node. At 2nm, transistor gates are approximately 10 silicon atoms wide. That is not a metaphor for "very small" — it is a regime where quantum tunnelling, variability at the atomic scale, and statistical dopant fluctuations are no longer edge cases. They are the design constraints.
The engineering community has spent decades treating Moore's Law as a roadmap. What comes next is not one road. It is six, running in parallel.
1. Gate-All-Around (GAA) Transistors
FinFETs gave the gate three sides of control over the channel. GAA wraps it around all four sides of horizontally stacked silicon nanosheets — typically 5–8 ribbons, each 5nm thick, separated by high-k dielectric.
The physics: improved electrostatic gate control means steeper subthreshold slope, lower off-state leakage current (I_off), and the ability to tune drive current (I_on) by adjusting nanosheet width at the mask level — something FinFETs could not do without a full process change.
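The textbook relationship makes the "steeper subthreshold slope" point concrete. A minimal sketch, using the standard formula SS = ln(10)·(kT/q)·(1 + C_dep/C_ox); the body-factor values for "planar-like" and "GAA-like" devices below are illustrative assumptions, not measured data:

```python
import math

def subthreshold_swing(c_dep_over_c_ox: float, temp_k: float = 300.0) -> float:
    """Subthreshold swing SS = ln(10) * (kT/q) * (1 + C_dep/C_ox), in mV/decade."""
    k_over_q = 8.617e-5  # Boltzmann constant over electron charge, V/K
    thermal_voltage_mv = k_over_q * temp_k * 1e3  # kT/q in mV (~25.9 mV at 300 K)
    return math.log(10) * thermal_voltage_mv * (1 + c_dep_over_c_ox)

# Assumed body factors: weaker gate coupling in a planar-like device vs
# near-ideal electrostatics when the gate wraps the channel (GAA).
planar_like = subthreshold_swing(0.4)   # roughly 83 mV/dec
gaa_like = subthreshold_swing(0.05)     # roughly 63 mV/dec, near the 60 mV/dec limit
print(f"planar-like: {planar_like:.1f} mV/dec, GAA-like: {gaa_like:.1f} mV/dec")
```

The closer the swing gets to the ~60 mV/dec room-temperature limit, the lower the threshold voltage can be set for the same off-state leakage, which is where the power savings come from.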
TSMC N2: 10–15% speed gain at iso-power, or 25–30% power reduction at iso-performance vs N3E. Gate pitch ~45nm, metal pitch ~24nm.
Intel 18A: Combines RibbonFET (GAA) with Backside Power Delivery Network (BSPDN) — PowerVia. Routing Vdd and Vss on the back of the wafer eliminates IR drop from power rails competing with signal routing on the front. Result: ~6% performance gain from BSPDN alone, plus freed routing tracks for signal density.
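Why backside power matters can be shown with a toy resistive-ladder model of a power rail fed from one end: every segment carries the summed current of all cells downstream, so droop accumulates quadratically with rail length. All numbers below are assumed for illustration; the "backside" case is modelled simply as a lower per-segment resistance thanks to thick backside metal:

```python
def rail_droop(n_cells: int, amps_per_cell: float, ohms_per_segment: float) -> float:
    """Worst-case IR drop at the far end of a power rail fed from a single pad.

    Segment k carries the summed current of every cell downstream of it,
    so total droop is the sum over segments of I_downstream * R_segment.
    """
    droop = 0.0
    for k in range(n_cells):
        downstream = n_cells - k  # cells still fed through this segment
        droop += downstream * amps_per_cell * ohms_per_segment
    return droop

# Assumed: 100 cells at 0.1 mA each; backside metal ~10x lower resistance.
front = rail_droop(100, 1e-4, 0.05)
back = rail_droop(100, 1e-4, 0.005)
print(f"frontside droop: {front*1e3:.1f} mV, backside droop: {back*1e3:.1f} mV")
```

At a sub-1V supply, tens of millivolts of droop is a meaningful slice of the noise margin, which is why even a "~6% performance gain from BSPDN alone" is significant.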
Samsung SF3: Implemented GAA at 3nm in 2022 — the earliest production GAA — but yield challenges limited the advantage. SF2 (2nm-class) aims to correct this in 2025.
Next milestones: TSMC A16 (backside power + GAA, 2027), Intel 14A (first High-NA EUV in full production, 2027), IMEC roadmap to "A2" — 2 angstroms — by 2036.
2. 3D Integration: Chiplets and Hybrid Bonding
Monolithic scaling hits yield walls fast. Under the standard Poisson model, yield falls exponentially with die area (Y = e^(−D·A) for defect density D), so doubling die area squares the yield fraction. Chiplets solve this by partitioning a design into smaller dies, each manufactured at the process node best suited to it, then integrated in-package.
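The yield arithmetic is worth running once. A sketch using the Poisson die-yield model; the defect density of 0.001 defects/mm² (0.1/cm²) is an assumed, order-of-magnitude figure:

```python
import math

def poisson_yield(defects_per_mm2: float, area_mm2: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_mm2 * area_mm2)

d0 = 0.001  # assumed defect density, defects per mm^2 (0.1 per cm^2)
monolithic = poisson_yield(d0, 800)  # one reticle-sized 800 mm^2 die
chiplet = poisson_yield(d0, 100)     # one 100 mm^2 chiplet

# With known-good-die testing, bad chiplets are discarded before assembly,
# so wafer-area efficiency tracks the per-chiplet yield, not chiplet**8.
print(f"800 mm2 monolithic yield: {monolithic:.2f}")  # ~0.45
print(f"100 mm2 chiplet yield:    {chiplet:.2f}")     # ~0.90
```

The known-good-die step is the whole trick: partitioning only pays off because each small die can be tested and binned before it is bonded into the package.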
The interconnect hierarchy matters:
| Interconnect Type | Bump Pitch | Bandwidth Density |
|---|---|---|
| Organic substrate | ~100µm | ~1 GB/s/mm² |
| Silicon interposer (CoWoS) | ~10µm | ~1 TB/s/mm² |
| Hybrid bonding (SoIC, Foveros Direct) | ~1µm | ~10+ TB/s/mm² |
At 1µm hybrid bond pitch, a 100mm² interface carries ~1 PB/s of theoretical bandwidth — orders of magnitude beyond anything a PCIe or HBM interface achieves off-package.
Nvidia's Blackwell B100 connects two reticle-limited dies via NV-HBI at 10 TB/s, alongside roughly 8 TB/s of HBM3e memory bandwidth. Future AI accelerators will likely stack a logic die (leading-edge node), HBM (DRAM-optimised node), and a photonics die (specialised process) — heterogeneous integration as the norm.
3. Silicon Photonics and Co-Packaged Optics
The bandwidth-per-watt of copper interconnects degrades sharply beyond ~1–2m. At rack scale in AI clusters, this is the bottleneck — not the GPU.
Silicon photonics builds optical components — ring modulators, Mach-Zehnder interferometers, germanium photodetectors, grating couplers — on standard 300mm CMOS wafers. Data modulates onto light at 50–100 Gbps per wavelength; WDM stacks 8–32 wavelengths per fibre, reaching multi-Tbps per physical link.
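The link arithmetic above is simple enough to sanity-check directly. A back-of-envelope sketch using figures from the ranges just quoted:

```python
def wdm_link_gbps(gbps_per_lambda: float, n_wavelengths: int, n_fibers: int = 1) -> float:
    """Aggregate WDM link bandwidth: per-wavelength rate x wavelength count x fibres."""
    return gbps_per_lambda * n_wavelengths * n_fibers

# Mid-range figures from the text: 100 Gbps per wavelength, 16 wavelengths.
single_fiber = wdm_link_gbps(100, 16)
print(f"{single_fiber / 1000:.1f} Tbps per fibre")  # 1.6 Tbps

# At the top of the quoted ranges (100 Gbps x 32 lambdas), one fibre
# carries 3.2 Tbps -- hence "multi-Tbps per physical link".
```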
Co-Packaged Optics (CPO) eliminates the pluggable transceiver entirely — the optical engine is wire-bonded or hybrid-bonded directly to the switch ASIC. Nvidia's Quantum-X800 and Spectrum-X800, launched in 2026, use CPO at 100–400 Tb/s aggregate, with 3.5x power efficiency improvement and 10x signal integrity improvement vs pluggable modules.
At rack scale, the bottleneck in AI computing is not the GPU — it is the copper wire. Light, unlike copper, pays almost no distance penalty at these scales: fibre attenuation is measured in decibels per kilometre, not per metre.
The research frontier: all-optical neural networks where matrix-vector multiplications — the core operation in transformer inference — are performed optically at the speed of light with near-zero dynamic power. MIT and University of Strathclyde groups are the ones to watch.
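The standard architecture for optical matrix-vector multiplication (in the style of the MIT work) factors the weight matrix by SVD: two MZI meshes implement the unitaries, and a column of attenuators implements the diagonal. A minimal numerical sketch of that decomposition, assuming nothing beyond standard linear algebra:

```python
import numpy as np

# A photonic MVM core factors a weight matrix as M = U @ Sigma @ Vh:
# two MZI meshes realise the unitaries U and Vh, and a row of
# attenuators/amplifiers realises the diagonal Sigma.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))    # weight matrix to "compile" into optics
U, s, Vh = np.linalg.svd(M)        # unitary, singular values, unitary

x = rng.standard_normal(4)               # input vector, encoded in optical amplitudes
y_optical = U @ (np.diag(s) @ (Vh @ x))  # the three physical stages, applied in order
y_direct = M @ x

print(np.allclose(y_optical, y_direct))  # True: the mesh computes M @ x
```

The point of the factorisation is physical realisability: arbitrary matrices cannot be built from lossless interferometers, but any unitary can, and the lossy diagonal in the middle absorbs the rest.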
4. Wide-Bandgap Semiconductors: GaN and SiC
Silicon has a bandgap of ~1.1 eV. That limits its breakdown voltage, thermal conductivity, and electron saturation velocity. Wide-bandgap materials change those limits entirely:
| Property | Si | GaN | SiC |
|---|---|---|---|
| Bandgap (eV) | 1.1 | 3.4 | 3.3 |
| Breakdown field (MV/cm) | 0.3 | 3.3 | 2.5 |
| Electron mobility (cm²/Vs) | 1400 | 2000 (2DEG) | 900 |
| Thermal conductivity (W/mK) | 150 | 230 | 490 |
GaN exploits a 2D electron gas (2DEG) at the AlGaN/GaN heterojunction — a high-density, high-mobility channel that enables HEMTs (high-electron-mobility transistors) switching at RF frequencies (mmWave 5G, radar) and power conversion at >90% efficiency.
SiC MOSFETs handle 650V–3.3kV switching for EV traction inverters, industrial motor drives, and grid infrastructure. SiC inverter switching losses are ~50% lower than equivalent silicon IGBTs. SiC market CAGR projected at >20% through 2030.
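The table's numbers can be folded into a single figure: the Baliga figure of merit (BFOM ∝ ε·μ·E_c³), the standard metric for how much better a material trades on-resistance against breakdown voltage. A sketch using the mobility and breakdown fields from the table; the relative permittivities (Si ~11.7, GaN ~9.0, SiC ~9.7) are assumed textbook values, and only the ratios are meaningful:

```python
def baliga_fom(eps_r: float, mobility_cm2: float, e_crit_mv_cm: float) -> float:
    """Baliga figure of merit ~ eps * mu * Ec^3 (arbitrary units; compare ratios)."""
    return eps_r * mobility_cm2 * e_crit_mv_cm ** 3

# Mobility and breakdown field from the table; permittivities assumed.
si = baliga_fom(11.7, 1400, 0.3)
gan = baliga_fom(9.0, 2000, 3.3)
sic = baliga_fom(9.7, 900, 2.5)
print(f"GaN/Si: {gan / si:.0f}x, SiC/Si: {sic / si:.0f}x")
```

The cubic dependence on breakdown field is why a 10x field advantage translates into a three-orders-of-magnitude figure-of-merit advantage, and why both materials dominate power electronics despite silicon's mature ecosystem.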
5. 2D Materials: Graphene and TMDs
The IEEE roadmap identifies 2D materials as the primary candidate for sub-1nm channel materials — at monolayer thickness (~0.3nm for MoS₂), the channel is physically immune to short-channel effects that plague thin-body silicon at equivalent dimensions.
Graphene: Zero bandgap limits its use as a transistor channel, but electron mobility (~200,000 cm²/Vs suspended, ~10,000–50,000 cm²/Vs on substrate) makes it exceptional for interconnects. Copper resistivity increases sharply below ~10nm wire width due to surface and grain boundary scattering. Graphene interconnects show 100x higher current density than copper at equivalent dimensions.
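The copper scaling problem can be illustrated with a deliberately simplified Fuchs-Sondheimer surface-scattering model. This is a toy model, strictly valid only when the wire is wider than the mean free path, and it ignores grain-boundary scattering entirely, so treat the narrow-wire numbers as directional rather than quantitative:

```python
def wire_resistivity(rho_bulk_uohm_cm: float, mean_free_path_nm: float,
                     width_nm: float, specularity: float = 0.0) -> float:
    """Toy Fuchs-Sondheimer surface-scattering approximation:
    rho(w) ~ rho_bulk * (1 + 3/8 * (1 - p) * lambda / w).
    p = 0 means fully diffuse (worst-case) surface scattering."""
    return rho_bulk_uohm_cm * (
        1 + 0.375 * (1 - specularity) * mean_free_path_nm / width_nm
    )

# Copper: bulk resistivity ~1.7 uOhm*cm, electron mean free path ~39 nm.
for width in (100, 20, 10):
    rho = wire_resistivity(1.7, 39, width)
    print(f"{width:>3} nm Cu wire: {rho:.2f} uOhm*cm")
```

Once the wire width drops below copper's ~39 nm mean free path, surface and grain-boundary scattering dominate, which is exactly the regime where an atomically smooth graphene conductor becomes attractive.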
TMDs (MoS₂, WSe₂, WS₂): Semiconducting 2D materials with bandgaps of 1.0–2.0 eV at monolayer thickness. TSMC's research division has demonstrated stacked nanosheet GAA transistors with monolayer MoS₂ channels integrated into the same nanosheet architecture used at N2.
In 2025, a research team reported a bismuth-based 2D transistor with an atomically thin channel, claiming roughly 40% higher speed and 3x better energy efficiency than leading silicon nodes in benchmarks.
> Before graphene powers entire systems, it will make its impact in interconnects — the first real silicon-graphene hybrid applications are closer than most engineers think.
>
> — Semiconductor Engineering, 2025
6. Neuromorphic Computing
Von Neumann architecture has a fundamental inefficiency: the memory wall. Every operation requires data to move between processor and memory — energy spent on data movement often exceeds energy spent on computation itself.
Neuromorphic chips co-locate memory and processing. Artificial neurons integrate input spikes over time; when membrane potential crosses threshold, they fire — asynchronous, event-driven, sparse. No clock. No fetch-decode-execute. Power consumption proportional to activity, not clock rate.
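The integrate-and-fire behaviour described above fits in a few lines. A minimal sketch of a discrete-time leaky integrate-and-fire neuron; the leak factor and threshold are illustrative choices, not values from any particular chip:

```python
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron: the membrane potential decays
    by `leak` each step, integrates the input current, and emits a spike
    (then resets to 0) whenever it crosses `threshold`."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current   # leak, then integrate this step's input
        if v >= threshold:
            spikes.append(1)
            v = 0.0              # reset after firing
        else:
            spikes.append(0)
    return spikes

# Sparse, event-driven input: the neuron fires only when enough
# charge accumulates, and does nothing between events.
print(lif_spikes([0.5, 0.0, 0.6, 0.0, 0.0, 0.9]))  # [0, 0, 1, 0, 0, 0]
```

Note what is absent: no clock, no instruction fetch. State only changes when an input event arrives, which is the architectural source of the "power proportional to activity" property.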
Intel Loihi 2: 1 million neurons, 120 million synapses. Demonstrated 1,000x energy reduction vs GPU on certain combinatorial optimisation problems.
Photonic neuromorphic: A VCSEL with optical feedback implements a leaky integrate-and-fire neuron at GHz spike rates — six orders of magnitude faster than biological neurons. University of Strathclyde demonstrated GHz-rate VCSEL spiking networks in 2023.
The convergence target: neuromorphic processors for sparse edge inference + quantum coprocessors for optimisation + classical cores for control flow. Heterogeneous in architecture, not just process node.
The Roadmap
| Timeframe | Milestones |
|---|---|
| 2025–2026 | GAA volume production (TSMC N2, Intel 18A). CPO switches (Nvidia). GaN/SiC mainstream. |
| 2027–2028 | TSMC A16 + backside power. Intel 14A + High-NA EUV. Rapidus 2nm. First commercial photonic AI accelerators. HBM4 widespread. |
| 2029–2032 | Sub-1nm nodes. 2D material transistors in pilot production. Graphene interconnects in leading-edge logic. Neuromorphic at edge scale. |
| 2033–2036+ | IMEC A2 (2 angstrom). Photonic-electronic co-integration standard. Quantum-classical hybrid systems commercial. |
Why It Matters for What We Build
The software abstractions we write against — memory models, compute primitives, communication layers — are all downstream of hardware architecture. As the hardware layer fragments into heterogeneous stacks of logic, memory, photonics, and neuromorphic accelerators, the programming models will have to follow.
The engineers who understand what is physically happening at the transistor, interconnect, and package level will be the ones who extract real performance from what comes next — not just call an API and hope.
If this was useful, drop a ❤️ or 🦄 — it helps others find the article.
Have a question about any of these technologies or want me to go deeper on one? Drop it in the comments — I read and reply to all of them.
Follow me here on Dev.to for more deep dives on semiconductor technology, AI hardware, and the engineering behind next-gen computing.
References
- TSMC 2nm Technology
- Intel 18A — Intel Newsroom
- Beyond the 2nm Horizon — ScienceDirect
- TSMC Roadmap — SemiWiki
- Photonic Neuromorphic Computing 2026 — PatSnap
- The Race to Replace Silicon — Semiconductor Engineering
- 2D Materials Roadmap — PresCouter
- TSMC 2D Materials Research
- Graphene Interconnects — IEEE Spectrum
- Neuromorphic Photonics — NIH/NCBI
- Future of Semiconductor Materials — Electronics360
