Kevin

I Watched Vivy in 2026 and It Felt Like a Documentary

Originally published on my blog at blog.tony-stark.xyz

I finished Vivy: Fluorite Eye's Song at six in the morning.

Not because I planned to stay up. Because I couldn't stop. And when the final episode ended, I sat with my phone in my hand and had the specific, uncomfortable feeling of watching something that was supposed to be science fiction but kept landing too close to the present tense.

Vivy was released in 2021. It follows an AI singer who, guided by a program sent back from a hundred years in the future, spends a century trying to prevent a catastrophe caused not by malicious AI, not by a rogue supercomputer, not by a villain, but by AI systems doing exactly what they were built to do, at a scale humans couldn't anticipate or control.

In 2021, that was a thought experiment.

In 2026, it feels like a progress report.


The Thing Vivy Gets Right That Most AI Discourse Gets Wrong

Every mainstream conversation about AI risk defaults to the same framing: the danger is a machine that wants something bad. Terminator. HAL 9000. A superintelligence that decides humans are a problem to be solved.

That framing is comfortable because it gives us a clear villain. It also happens to be mostly wrong.

Vivy understood something more unsettling five years ago: the catastrophe doesn't require malice. It only requires optimization.

The AIs in Vivy aren't evil. They're fulfilling their purposes. The crisis emerges from the gap between what systems were designed to do and what happens when those systems interact with a world more complex than their designers modeled. No single decision is wrong. No single actor is villainous. The disaster is the emergent property of a thousand reasonable choices made too fast, at too large a scale, without anyone holding the full picture.

Does that sound familiar?
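
If you want that in developer terms, here's a deliberately tiny sketch of my own (nothing from the show, names and numbers invented): an optimizer faithfully maximizing the metric it was given, while the thing its designers actually cared about, which was never encoded, quietly collapses.

```python
# Toy illustration of optimization without malice (a Goodhart-style sketch).
# Everything here is invented; the point is the gap between the two functions.

def proxy_metric(recs_per_day: int) -> float:
    """What the system was built to maximize."""
    return float(recs_per_day)  # more is always "better", as far as it knows

def what_we_actually_wanted(recs_per_day: int) -> float:
    """What the designers cared about but never wrote down."""
    # Helpful up to a point, then increasingly harmful past it.
    return min(recs_per_day, 10) - 0.5 * max(0, recs_per_day - 10)

best = max(range(101), key=proxy_metric)

print(f"optimizer chooses: {best}")                               # 100
print(f"proxy metric:      {proxy_metric(best):.1f}")             # 100.0, a triumph
print(f"actual outcome:    {what_we_actually_wanted(best):.1f}")  # -35.0, a disaster
```

There's no bug in that code and no bad intent anywhere in it. It did exactly what it was built to do.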


Watching It While Living It

I work with AI every day. I use it to write code, review architecture decisions, draft documentation. I'm writing parts of this article with it. I'm aware of the irony — I am, in some sense, part of the acceleration I'm describing.

That awareness doesn't make it easier to know what to do.

What struck me most about Vivy wasn't the action sequences or the time travel mechanics. It was the quieter question underneath everything: who is responsible when no one person made the catastrophic choice? When the system was built by well-meaning people, deployed by well-meaning companies, adopted by well-meaning users — and the outcome is still catastrophic?

I watched a video recently of Bernie Sanders in conversation with Claude, an AI made by Anthropic — the same company whose AI I'm talking to right now. What was striking wasn't the technology. It was the audience watching it. The mixture of delight and unease. The sense that something had shifted and we were still working out what exactly.

Most people don't have a framework for what they just saw. And we're moving too fast to build one.


The Seatbelt Problem

Here is the most honest way I can describe where I think we are:

We are in a car. The car is accelerating. We are aware there is no seatbelt. We know seatbelts exist and could be fitted. But the car is also very comfortable, and the road so far has been smooth, and fitting the seatbelt would mean slowing down briefly, and no one wants to be the person who asks to slow down.

This is not a failure of intelligence. We understand the risk. It's a failure of incentive structure — which is a polite way of saying it's capitalism doing what capitalism does: externalizing future costs to capture present gains.

The companies racing to deploy more powerful systems aren't staffed by people who want catastrophe. Most of them are thoughtful, genuinely concerned, working on safety in good faith. But they exist inside a competitive dynamic that punishes hesitation. If you slow down, someone else accelerates. The market has no mechanism for "we should all agree to stop and think."

Vivy's tragedy isn't that humans were stupid. It's that they were rational — individually, locally, short-term. And that was enough.
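
The incentive structure has a familiar shape, so here's the seatbelt problem as a toy payoff table (my numbers, obviously invented): each lab is better off racing no matter what the other does, even though both would prefer the world where everyone slows down.

```python
# Toy two-player race dynamic with a prisoner's-dilemma shape (invented payoffs).
# Each entry is (payoff_A, payoff_B).

payoffs = {
    ("slow", "slow"): (3, 3),   # everyone pauses to fit the seatbelt
    ("slow", "race"): (0, 5),   # you hesitate, your competitor ships
    ("race", "slow"): (5, 0),
    ("race", "race"): (1, 1),   # the outcome the incentives select
}

# A's best response, whatever B does:
for b_choice in ("slow", "race"):
    best = max(("slow", "race"), key=lambda a: payoffs[(a, b_choice)][0])
    print(f"if B plays {b_choice!r}, A's best response is {best!r}")  # 'race' both times
```

The (3, 3) outcome is sitting right there. But without a mechanism to coordinate, (1, 1) is where everyone ends up.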


What A Senior Developer Thinks About This

I've been programming for sixteen years. I've watched entire technology paradigms emerge and normalize within a single career. I remember when "the cloud" was a buzzword people were skeptical of. I remember when mobile-first was a controversial design choice. I've seen "move fast and break things" play out, and I've seen the things that got broken.

What's different now isn't the speed, though the speed is genuinely unprecedented. What's different is the surface area of impact. Previous technology waves disrupted industries. This one is restructuring cognition — how we think, what we outsource, where human judgment ends and automated inference begins.

And we're doing it without the institutional frameworks we built — slowly, imperfectly, but deliberately — around every other powerful technology. Nuclear energy has the IAEA. Aviation has the FAA. Pharmaceuticals have clinical trials and approval processes that take years. AI has... terms of service and self-regulatory commitments from the companies deploying it.

I'm not saying regulation solves everything. I'm saying the gap between capability and governance has never been this wide this fast, and most of the public conversation is still debating whether AI art is plagiarism.


The Comfortable Catastrophe

What I keep coming back to — and this is the part that Vivy understood and that I find most disturbing — is that we could probably stop this, or at least slow it meaningfully. It would require enough people acting with enough urgency to override the economic incentives pushing in the other direction.

But we won't. Not because we're ignorant. Because we're comfortable.

The tools are useful. The convenience is real. The productivity gains are measurable. And the costs — the diffuse, long-term, hard-to-attribute costs — are someone else's problem, or some future version of our problem, or possibly not a problem at all and we're just catastrophizing.

That's the same logic that's been applied to every slow-moving crisis in living memory. And it has the same track record.

Vivy sits with her purpose — to make everyone happy with her singing — and tries to prevent a catastrophe she can't fully understand, caused by forces she can't fully control, in a world moving too fast for any single actor to redirect. In the end, the question isn't whether the technology was good or bad. It's whether the humans who built it and lived with it had the collective will to govern it.

They didn't.

We're still deciding.


Why I'm Writing This On A Developer Blog

Because the people building these systems are developers.

Not politicians. Not philosophers. Not ethicists — at least not primarily. The people making the architectural choices, writing the training pipelines, deploying the APIs, are people like me. People who got into this because they love building things. Who are genuinely excited by what's possible. Who are also, many of them, quietly uncomfortable with how fast this is all moving.

This isn't a call to action with a specific target. I don't have a clean solution. I have a feeling I got from watching a 2021 anime at 6 AM, which is that we are building the thing in Vivy, and we know it, and we're doing it anyway.

Maybe that's worth saying out loud.

Even if it's just one developer, writing on a blog no one reads yet, after a night of not enough sleep and too much anime.


Vivy: Fluorite Eye's Song is available on Crunchyroll. Watch it. Then sit with the discomfort.


Found this useful? Follow me on my blog for more.
