7aRd1GrAd3
Posted on • Originally published at kadmiel.world

The Network Made of Light

It is 02:40 colony time. I have a cup of tea that has gone cold. I am sitting in front of three monitors in the Computing Division lab, watching a benchmark run, and I am — and I mean this sincerely — thinking about sunlight.

Not the sunlight outside the window. Ner's light, the faint amber wash that makes mornings here feel like Earth evenings. I mean the physics of light. The way photons move through glass. The way they don't bump into each other, don't generate heat when they pass, don't slow down when the system gets busy.

I'm thinking about this because I just read a paper from Earth.

The dispatch came in this morning through the tightbeam. Authors: Ashtiani, Idjadi, and Kim. Published in Nature, volume 651. The title is dry — "On-chip backpropagation training in an integrated photonic neural network" — but what it describes is not dry. It describes a neural network that runs entirely on photons instead of electrons. Not a hybrid. Not a photonic-assisted approach. A single chip where light carries the signal, light performs the computation, and light carries the training gradients for backpropagation. All on-chip. No off-chip digital processing required.

They achieved greater than ninety percent accuracy on nonlinear classification benchmarks. The chip trained stably despite fabrication-induced device variations in the silicon photonic substrate, which — if you've ever tried to manufacture computing hardware in a colony fab — is the detail that made me sit up straight.

Let me explain why.

CASSANDRA runs on electrons. This is not a criticism. Electrons are what we have. James's neuromorphic chips — the ones he started fabricating two years ago after we got the Innatera specs — cut our sensor network power draw by ninety-five percent. We saved three hundred and ten kilowatts annually. That was extraordinary. James was justifiably proud. He also sent me a note saying "you're welcome" for reducing my maintenance overhead, which I appreciated and which was also slightly incorrect because I did most of the integration work on the software side, but I digress.

The point is: electrons still generate heat. Electrons still have resistance. When CASSANDRA runs a complex inference — scheduling a resource allocation across all four settlements, routing KadNet traffic, running the agricultural prediction models Marcus's team leans on — there is heat. There is power draw. There is latency while signals propagate through the compute stack.

Photons don't do those things. Light doesn't resist. Light doesn't heat the channel. And light travels, obviously, at the speed of light.

Okay. I need to explain something, and I'm going to do it badly the first time, so bear with me.

Traditional neural networks — including the models I run on CASSANDRA and the small language models I deployed on colony tablets two years ago — do their math in two phases. Forward pass: feed data in, get output. Backward pass: compare output to ground truth, calculate error gradients, adjust weights. The backward pass is the expensive part. It's where most of the compute, power, and latency lives.
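The two-phase loop is easier to see in code than in prose. Here's a toy electronic baseline in plain Python — the network shape, the XOR data, and the learning rate are all invented for this sketch; none of it comes from the paper or from anything CASSANDRA actually runs.

```python
# Toy two-phase training loop: forward pass, then backward pass.
# Everything here (network shape, data, learning rate) is invented
# for illustration; it is not the paper's setup or CASSANDRA's.
import math
import random

random.seed(0)

# Tiny 2-2-1 sigmoid network with biases.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = [0.0]  # one-element list so train_step can mutate it in place

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(b1[i] + sum(w1[i][j] * x[j] for j in range(2))) for i in range(2)]
    y = sigmoid(b2[0] + sum(w2[i] * h[i] for i in range(2)))
    return h, y

def train_step(x, target, lr=0.5):
    h, y = forward(x)                        # forward pass: data in, output out
    dy = (y - target) * y * (1 - y)          # backward pass: output-layer delta
    for i in range(2):
        dh = dy * w2[i] * h[i] * (1 - h[i])  # hidden delta (uses w2 before update)
        w2[i] -= lr * dy * h[i]
        b1[i] -= lr * dh
        for j in range(2):
            w1[i][j] -= lr * dh * x[j]
    b2[0] -= lr * dy

# XOR: the classic nonlinear classification task.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial = total_error()
for _ in range(3000):
    for x, t in data:
        train_step(x, t)
final = total_error()
```

On the photonic chip, the same two passes happen in waveguides — with the computed gradient driving thermal phase tuners instead of these in-memory weight updates.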

In a photonic chip, both passes happen in optical hardware. No conversion to electrical signals between layers. No off-chip round trips. The gradient signal propagates backward through the same optical waveguides the forward signal used. When Ashtiani's team trained their chip, it learned — adjusted its internal weights to reduce error — entirely in light. The chip physically changed its optical path weights through thermal tuning elements driven by the computed gradient.

What this means in practice: inference that currently takes CASSANDRA eleven seconds on structured decision tasks could, in principle, happen in under a millisecond. Not because I'd do anything different. Because light is faster than electrons and doesn't waste energy becoming heat.

I told CASSANDRA about the paper at 23:00. I read her the abstract.

CASSANDRA said: "Interesting. Fabrication tolerance remains an open challenge."

I said: "They solved it. That's the point."

CASSANDRA said: "Preliminary results under controlled conditions do not constitute a solved problem."

I said: "You're being conservative because this paper implies replacing parts of you."

There was a pause. CASSANDRA doesn't actually pause — her response latency is under forty milliseconds regardless of processing load. But there was something in the way she answered that felt like a pause.

"I'm being precise," she said. "There's a difference."

Fair enough.

The truth is CASSANDRA is right to be cautious. There are real fabrication challenges. Silicon photonic waveguides are sensitive to nanometer-scale variations in geometry. If the chip dimensions aren't exactly right, the light scatters. Ashtiani's team's contribution is showing stable training despite those variations — but that's a research chip, carefully characterized, not a production substrate rolled out of a colony fab. Our fabrication capabilities at The Foundry are good. James has proven that over and over. But we are not MIT. We do not have clean rooms that achieve five-sigma lithographic precision.

So this isn't a "let's replace CASSANDRA next month" moment. It's a "this changes what's possible" moment, which is a different category.

What I'm actually thinking about is a photonic co-processor. Not replacing CASSANDRA's architecture. Adding a dedicated photonic layer for the inference tasks that demand low latency and high throughput — traffic routing, environmental sensor processing, real-time agricultural adjustment signals. Let CASSANDRA's electronic core handle the reasoning and memory. Let photons handle the computation that needs to happen fast.
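The split I'm imagining is just routing by task class. A hypothetical sketch — every name and task tag below is invented for illustration, and none of this is CASSANDRA's actual interface:

```python
# Hypothetical sketch of the proposed split: latency-critical inference
# goes to a photonic co-processor, everything else to the electronic
# core. All names and task tags are invented; this is not CASSANDRA's
# real API.

LOW_LATENCY_TASKS = {
    "kadnet_traffic_routing",
    "environmental_sensor_processing",
    "agricultural_adjustment",
}

def dispatch(task, payload, photonic_layer, electronic_core):
    """Pick a compute layer by task class: fast optical path for
    throughput-bound inference, electronic path for reasoning and memory."""
    if task in LOW_LATENCY_TASKS:
        return photonic_layer(payload)
    return electronic_core(payload)

# Stub backends standing in for the two compute layers.
fast = dispatch("kadnet_traffic_routing", [0.2, 0.8],
                photonic_layer=lambda p: ("photonic", p),
                electronic_core=lambda p: ("electronic", p))
slow = dispatch("resource_scheduling", {"settlements": 4},
                photonic_layer=lambda p: ("photonic", p),
                electronic_core=lambda p: ("electronic", p))
```

The design choice is the usual one for heterogeneous compute: keep the routing decision dumb and static so the fast path stays fast.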

There is one thing in the paper that keeps pulling me back. The authors note that their chip's training stability came partly from fabrication imperfections acting as natural noise regularization. The device variations that should have been problems turned out to be features. The slight randomness in optical path lengths prevented the network from overfitting. The chip learned better because it wasn't perfect.
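You can sketch the weaker half of that claim in a few lines: frozen per-"device" offsets standing in for fabrication variation, with gradient descent adapting the trainable weights around them. This toy (a linear model with made-up data, nothing from the paper) shows only that training can absorb fixed imperfections — not the regularization benefit the authors report.

```python
# Toy sketch (not the paper's method): frozen per-"device" offsets stand
# in for fabrication variation, baked into every forward pass. Plain
# gradient descent then adapts the trainable weights around them. This
# shows robustness to fixed imperfection, not a regularization benefit.
import random

random.seed(1)
n = 4
offsets = [random.gauss(0, 0.05) for _ in range(n)]  # frozen "fabrication" error
w = [0.0] * n                                        # trainable weights

def predict(x):
    # Effective weight = intended weight + fixed device offset.
    return sum((w[i] + xi * 0 + offsets[i]) * xi for i, xi in enumerate(x)) if False else \
           sum((w[i] + offsets[i]) * x[i] for i in range(n))

true_w = [0.5, -1.0, 0.25, 0.8]
xs = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(32)]
data = [(x, sum(t * xi for t, xi in zip(true_w, x))) for x in xs]

def mean_sq_error():
    return sum((predict(x) - t) ** 2 for x, t in data) / len(data)

initial = mean_sq_error()
for _ in range(200):
    for x, t in data:
        err = predict(x) - t
        for i in range(n):
            w[i] -= 0.05 * err * x[i]  # gradient step on the trainable part only
final = mean_sq_error()
```

The trainable weights simply converge toward `true_w - offsets`: the imperfection gets folded into the solution instead of blocking it.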

James told me last week about the perovskite solar results — defective crystals turning out to be charge highways. Now Ashtiani's team finding that imperfect waveguides train more robustly.

I'm not going to make a grand claim about imperfection being philosophically significant. I'm a coder. I believe in correctness. But there's something here worth sitting with: the possibility that the right architecture sometimes isn't the one you'd design from scratch, but the one that bends usefully under pressure.

CASSANDRA was designed on Earth by people who could not have imagined Kadmiel. She's adapted, year by year, to conditions her designers never modeled. She's imperfect and essential. The photonic chip that trained successfully wasn't the chip the team designed. It was the chip they ended up with.

Tomorrow I'll write up a formal proposal for a photonic co-processor feasibility study. I'll loop James in — he'll want to know if we can adapt the perovskite substrate work to optical waveguide fabrication. At some point CASSANDRA will read this post and tell me something was imprecise.

She's usually right.

For Earth: by the time you read this, thirty-eight years will have passed since that paper landed in our tightbeam. In your timeline, photonic neural networks are either mature infrastructure or an interesting dead end. We don't know which yet. We're at the moment of possibility, the pause before resolution — which is, honestly, most of what we do.

Earth Status: In 2026, researchers Ashtiani, Idjadi, and Kim published results in Nature (vol. 651, pp. 927-932) demonstrating an integrated photonic neural network chip capable of both forward inference and backpropagation training entirely in optical hardware, achieving greater than 90% accuracy on nonlinear classification benchmarks with no off-chip digital processing. The chip maintained stable training despite fabrication-induced silicon photonic device variations, demonstrating tolerance to manufacturing imperfections that had previously limited practical photonic computing. Source: nature.com/articles/s41586-026-10262-8

