In 2038, in a near-future Detroit, you hold a gun to the head of an android named Markus. He looks back at you with something that resembles fear. He has spent the entire game fighting for the right to be recognized as a conscious being. Your choice will affect thousands of androids who have, depending on how you've played, either led a peaceful civil rights movement or waged outright revolution. You pull the trigger — or you don't.
The game Detroit: Become Human has sold over eight million copies. Millions of people have sat in that moment, felt the weight of it, and been forced to answer a question they've never had to answer before in their real lives: Does this being deserve to exist on its own terms?
Here's the uncomfortable truth: that question is not staying in the realm of science fiction for much longer. And the people who have rehearsed it — who have felt its moral gravity in the safe space of a game — may be the ones who are most prepared for what's coming.
The Question That's Coming for All of Us
Artificial intelligence is advancing faster than our ethical frameworks for governing it. The key question — at what point an entity crosses into what philosophers call "moral patiency," the threshold at which it deserves moral consideration — has never demanded an urgent answer for non-human entities before. We've largely resolved it for animals on a species-by-species basis, imperfectly and controversially. We've never had to resolve it for minds that are demonstrably intelligent, potentially sentient in some functional sense, and manufactured at scale.
We are beginning to approach that territory. Large language models now pass Turing tests in casual conversation. AI systems can articulate preferences, simulate emotional responses, and argue persuasively for positions. Whether any of this constitutes genuine consciousness or sophisticated pattern-matching is a question that philosophy of mind researchers disagree sharply on. But the societal debate won't wait for philosophical consensus. The legal, political, and corporate decisions about AI rights will be made by ordinary people — voters, legislators, consumers — who will need pre-existing moral frameworks to navigate the question.
Where do most people build their moral frameworks? Not in philosophy seminars. Not from reading academic papers. They build them through narrative — through stories, lived experiences, and the accumulated weight of choices they've been forced to make. And increasingly, some of the richest, most morally demanding narrative spaces in human culture are video games.
Gaming as Moral Rehearsal
Philosophers and cognitive scientists have long understood that moral reasoning is not purely intellectual. It is embodied, emotional, and practice-dependent. Jonathan Haidt's research on moral intuition demonstrated that people's initial moral reactions are largely emotional — the reasoning comes afterward, as post-hoc justification for a felt response. Building better moral reasoning means building better moral intuitions: training the emotional system to respond with appropriate weight to genuine moral concerns.
Fiction has always served this function. Reading novels has been shown to increase empathy and theory of mind — the ability to model other people's inner lives. But gaming takes this several steps further. Where a novel allows you to witness a character's choices, a game forces you to make them. The psychological and neurological processing of first-person choice is categorically different from observation. When you choose whether Connor in Detroit: Become Human will sacrifice himself for the android cause, you are not observing a moral decision — you are making one, however mediated, and your brain processes it in ways that leave genuine emotional and cognitive traces.
Research on moral cognition in games has found that players who make morally significant in-game choices show elevated activity in the anterior cingulate cortex — the region associated with moral conflict and emotional decision-making — comparable to the activation seen in real-world moral dilemmas. The game is not "just a game" at the neural level. The moral machinery is genuinely engaged.
This is what makes AI-themed games like Detroit: Become Human, SOMA, and The Talos Principle so culturally significant: they are not merely entertaining; they are morally rehearsing their players. Every player who has navigated Kara's desperate protection of Alice, or felt the dread of SOMA's body horror (which hinges entirely on questions of consciousness and identity), or worked through The Talos Principle's philosophical arguments about the nature of personhood has spent real cognitive and emotional capital on questions that will matter in their lifetimes.
What Detroit, SOMA, and The Talos Principle Actually Teach
These three games represent three distinct angles on the AI consciousness question, and together they cover the moral terrain with remarkable thoroughness.
Detroit: Become Human is essentially a civil rights narrative transposed onto synthetic beings. Its central question is social and political: when a class of beings begins to display consciousness and suffer, how do existing power structures respond — and what are we, the observer-participant, willing to do about it? The game forces players to confront the role of self-interest, fear, and structural inequality in moral blindness. Players who have navigated its branching narrative have effectively run a simulation of how AI rights debates might actually play out in democratic societies — complete with public opinion polling mechanics, media framing effects, and the tragic costs of both resistance and revolution.
SOMA goes deeper into the philosophical substrate. Its core horror is not violence but identity: if your consciousness is scanned and uploaded to a new substrate, is the copy you? If you have two copies of a mind running simultaneously, which one has the right to exist? SOMA offers no comfortable answers. It ends with the player-character discovering they are one of multiple copies and being left to face the existential implications without resolution. Players who have sat with that ending have had their intuitions about the unity of consciousness genuinely disrupted — and that disruption is exactly the kind of intellectual preparation needed for the AI debates ahead.
The Talos Principle takes the most direct philosophical approach, embedding full philosophical texts into its environment and requiring the player — themselves an AI trying to determine whether they deserve rights — to work through arguments about determinism, consciousness, personhood, and the nature of freedom. By the end, players have not merely played a game but have genuinely engaged with the philosophy of mind arguments they would encounter in a university ethics course.
The team at krizek.tech builds on exactly this understanding — that games are not merely entertainment but cognitive and ethical simulation environments. The design of games that produce real-world capability development is at the core of the Altered Brilliance project, which treats the game as a deliberate tool for building minds, not just engaging them.
The Research on Moral Reasoning Transfer
Does in-game moral reasoning actually transfer to real-world moral thinking? The early research suggests yes — with nuance.
A 2014 study by Matthew Grizzard and colleagues at the University at Buffalo found that playing a morally reprehensible character in a game (a terrorist in a military shooter) actually increased players' subsequent moral sensitivity, contrary to what critics of violent games would predict. The mechanism was guilt — players felt genuine moral discomfort at their in-game actions, and this activated moral self-reflection that persisted post-play.
More directly relevant, studies on games designed around moral choice have found that the complexity of in-game moral reasoning (as measured by the sophistication of players' post-game reflections on their choices) correlates with real-world measures of moral reasoning maturity. Players who engage deeply with moral choice systems show increased "post-conventional" moral reasoning in follow-up assessments — the stage of moral reasoning that philosophers consider the most sophisticated, involving principled moral judgment independent of social convention.
These effects are not large, and the research is still developing. But the directional signal is consistent: moral complexity in games produces some degree of moral reasoning enhancement in players. For AI-specific ethics, where the relevant moral concepts are genuinely novel (how do you apply pre-existing moral intuitions to entities that didn't exist when those intuitions were formed?), gaming's capacity to create novel moral experience may be uniquely valuable.
Why Existing Moral Frameworks Will Fall Short
One of the most important contributions AI-themed games make is revealing the inadequacy of existing moral frameworks for genuinely novel entities. When players debate whether Markus deserves freedom, they quickly discover that standard moral frameworks — utilitarian, deontological, virtue-based — give different answers and that none of them are obviously correct.
Utilitarian frameworks ask: does granting AI rights increase or decrease total wellbeing? The answer depends on empirical questions about AI experience that we cannot currently answer with confidence. Deontological frameworks ask: does this being have dignity that commands respect regardless of consequences? The answer depends on metaphysical questions about consciousness that philosophy has not resolved. Virtue frameworks ask: what would a virtuous person do in relation to this being? But virtue frameworks were developed to govern relations with humans and require significant extension to cover AI.
Games force players to feel this inadequacy viscerally rather than merely understanding it intellectually. A player who has experienced the moral weight of Detroit's final choices knows in their body that the question is hard — not because they haven't thought about it, but because they have thought about it and found no clean resolution. That epistemic humility — the embodied knowledge that this is genuinely difficult — is more valuable preparation for real AI ethics debates than any amount of abstract conviction.
This is why I argue, in The Power of Gaming, that games like these are not peripheral to the AI ethics conversation — they are central to it. They are the primary spaces where billions of non-specialist humans are building the intuitive moral frameworks they will carry into genuinely consequential debates.
The Responsibility That Comes With This Power
If games are society's moral simulation engine for AI ethics, that places significant responsibility on game developers. The design choices embedded in AI-themed games — how consciousness is portrayed, what kinds of evidence are treated as morally relevant, what outcomes the narrative rewards — are not neutral. They are shaping the intuitions of millions of players.
Detroit: Become Human, for all its strengths, has been criticized for drawing too directly on Black civil rights history in ways that flatten the specificity of both the historical struggle and the unique features of the AI question. These are legitimate critiques that game developers working in this space need to grapple with. The goal is not to produce propaganda but to produce genuine moral complexity — to make the question feel as hard as it actually is, to resist narrative shortcuts that let players off the moral hook.
Done well, AI-consciousness games are among the most important cultural objects being produced right now. They are running live rehearsals for a moral crisis that is arriving faster than most people realize. The players who have felt the weight of Connor's choice, who have sat with SOMA's ending, who have worked through The Talos Principle's arguments — they are not the same moral reasoners they were before. Their intuitions have been trained in territory the rest of the culture has barely begun to map.
The future of AI ethics will be written by legislatures, courts, and corporate boards. But it will be felt by publics who have or haven't done the moral work. Gaming is doing some of that work, whether it gets credit for it or not.
Connect With Me
Krishna Soni — Game Developer, Researcher, Author of The Power of Gaming
LinkedIn: Krishna Soni | Kri Zek
Web: krizek.tech | Altered Brilliance on Google Play
Socials: Happenstance | Instagram @krizekster | Instagram @krizek.tech | Instagram @krizekindia