Seeing Google's Doodle tribute to Pac-Man recently sent me spiraling down a nostalgia rabbit hole. It reminded me of the 80s, of scrounging for quarters and the disappointment at how quickly each one ran out, and of a MOOC I took in what now feels like ancient history in AI terms. A Gemini search of my old Gmail archives confirms it was UC Berkeley CS 188: Introduction to Artificial Intelligence via EdX, probably around 2012.
The year 2012. The year Obama won re-election, the Mayan calendar "ended," and Honey Boo Boo was somehow a cultural phenomenon. Siri was barely a year old—a killer feature for iPhone sales despite rarely understanding what you asked. "Deep learning" was a phrase you'd find only in academic papers, not headlines. A different era entirely.
That course required implementing classic algorithms like Greedy search and Minimax in Pac-Man. It was, honestly, a bit over my head at the time. But I slogged through it, and somewhere between debugging Python at 2 AM and watching my AI-controlled Pac-Man successfully evade ghosts, my mind was blown.
These "simple" algorithms—Minimax especially—produced results that felt... intelligent. Surprising. Alive. Before that course, I'd thought of code as scripts: wholly deterministic sequences that did exactly what you told them. But watching Minimax navigate a maze, weighing possibilities, anticipating ghost movements—that was something else entirely.
Why Revisit This Now?
We're deep into the post-deep-learning, post-generative AI era. Claude Code, Antigravity, and OpenAI Codex write and debug code autonomously. Suno-generated songs are topping Billboard's country charts. Google's Genie 2 conjures playable 3D worlds from a single image. So why bother with these dusty old algorithms from the 1920s through the 1960s?
Because they're not obsolete. Far from it.
These algorithms are embedded everywhere: in your GPS routing, your game opponents, your thermostat, the trading bots on Wall Street. They run in parallel with the more advanced forms of AI now upon us—faster, smaller, and often more appropriate for the task at hand.
And honestly? In Pac-Man—a game of perfect information—we don't need generative AI to move our yellow hero. An untuned LLM could probably play Pac-Man, sure, but with noticeable latency, and likely no better than good old Minimax.
What's Different Now
Here's what has changed: the barrier to experimentation has collapsed completely.
Claude Opus 4.5 basically one-shotted a modified Pac-Man clone that lets users experiment with different AI algorithms—from the hilariously dumb (Random, Greedy) to the surprisingly sophisticated (A*, Minimax).
Seriously, we live in amazing times.
Afterwards, I modified it using GitHub Copilot with more Opus 4.5 help—adding a benchmarking mode and various quality-of-life improvements.
We'll explore each algorithm with animated GIFs as we go.
First, Let's Talk About the Ghosts
Before we dive into Pac-Man's AI options, we need to understand his adversaries. Because here's the thing: the ghosts are living in the 1980s.
Blinky, Pinky, Inky, and Clyde don't use fancy pathfinding or machine learning. They use the exact same logic Toru Iwatani and Shigeo Funaki coded in 1980—simple targeting rules that, combined, create the illusion of intelligent pursuit.

Photo: Torben Friedrich, CC BY-SA 4.0
The Ghost Ensemble
Each ghost has a distinct personality encoded in just a few lines of code:
// Blinky (Red) - Direct chase
blinky: function(map, ghostPos, pacmanPos) {
  return this.moveToward(map, ghostPos, pacmanPos);
}
Blinky (Red) is pure Greedy algorithm—he targets Pac-Man's current position. Simple, relentless, predictable.
Pinky (Pink) is the ambusher—she targets 4 tiles ahead of Pac-Man, trying to cut him off:
// Pinky targets 4 tiles ahead of Pac-Man's direction
pinky: function(map, ghostPos, pacmanPos, pacmanDir) {
  var delta = deltas[pacmanDir] || {x: 0, y: 0};
  var target = {
    x: pacmanPos.x + delta.x * 4,
    y: pacmanPos.y + delta.y * 4
  };
  return this.moveToward(map, ghostPos, target);
}
Inky (Blue) uses vector math—he calculates his target as a reflection of Blinky across Pac-Man's position:
// Inky - Target is reflection of Blinky across Pac-Man
inky: function(map, ghostPos, pacmanPos, blinkyPos) {
  var target = {
    x: pacmanPos.x + (pacmanPos.x - blinkyPos.x),
    y: pacmanPos.y + (pacmanPos.y - blinkyPos.y)
  };
  return this.moveToward(map, ghostPos, target);
}
Clyde (Orange) is the coward—he chases when far away but retreats to his corner when he gets within 8 tiles:
// Clyde - Chase if far, scatter if close
clyde: function(map, ghostPos, pacmanPos) {
  var dist = manhattanDistance(toGrid(ghostPos), toGrid(pacmanPos));
  if (dist > 8) {
    return this.moveToward(map, ghostPos, pacmanPos);
  } else {
    // Scatter to corner
    return this.moveToward(map, ghostPos, {x: 0, y: 210});
  }
}
The Math Glitch That Became Canon
Fun fact: there's a bug in Pinky's original code. When Pac-Man faces UP, an overflow error adds an extra offset to the LEFT, making her targeting slightly... twitchy. Namco discovered this but kept it—it made Pinky feel more unpredictable, more alive. Sometimes bugs are features.
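In modern JavaScript, the quirk would look something like this (a sketch of the documented arcade behavior, not our clone's code, which skips the bug; it assumes the same deltas table as the Pinky function above):

// Pinky's arcade targeting, bug included (illustrative sketch only)
function pinkyTargetArcade(pacmanPos, pacmanDir) {
  if (pacmanDir === "UP") {
    // The famous quirk: the target ends up 4 tiles up AND 4 tiles left
    return {x: pacmanPos.x - 4, y: pacmanPos.y - 4};
  }
  var delta = deltas[pacmanDir] || {x: 0, y: 0};
  return {x: pacmanPos.x + delta.x * 4, y: pacmanPos.y + delta.y * 4};
}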
Why This Matters for Our AI
The ensemble of ghost behaviors is better than any individual ghost. They flank, they scatter, they reconverge. But they're also predictable in their unpredictability—which means our Minimax algorithm, which assumes optimal adversarial play, is actually playing against a weaker opponent than it thinks. More on that irony later.
Benchmarking the Algorithms
Let's see how our AI options actually perform. Run the benchmark mode yourself—cuz it's fun to watch 400 game starts play out in a minute or so! But here's what my results looked like:
| Rank | Algorithm | Survival | Avg Score | Ghosts Eaten | Decision Time |
|---|---|---|---|---|---|
| 🥇 | Minimax | 31.9s | 1687 | 5.5 | 0.04ms |
| 🥈 | A* Pathfinding | 17.5s | 1114 | 3.4 | 0.01ms |
| 🥉 | Random | 7.1s | 368 | 0.6 | 0.00ms |
| 4 | Greedy | 1.0s | 70 | 0.0 | 0.01ms |
Wait—Random beats Greedy? And by a lot?
Let's dig into each algorithm, from worst to best. Fair warning: John von Neumann is going to pop up a lot. So is the Cold War.
4th Place: Greedy — The Algorithm That Should Know Better
It's no good being greedy!
The History
The term "Greedy algorithm" emerged from the optimization boom of the 1950s and 60s, solidified by Jack Edmonds's work on matroids in the 1970s. The concept is beautifully simple: always take the locally optimal choice and hope it leads to a globally optimal solution.
For some problems (like Dijkstra's shortest path), greedy works perfectly. For others (like Pac-Man survival), it's a disaster.
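A toy example makes the trap concrete (my own illustration, not from the Pac-Man code): making change for 6 with coins worth 1, 3, and 4. Greedy grabs the biggest coin first and needs three coins; the optimal answer is two.

// Greedy coin change: each step is locally optimal, the total is not
function greedyChange(amount, denoms) {
  denoms.sort(function(a, b) { return b - a; }); // biggest coin first
  var coins = [];
  for (var i = 0; i < denoms.length; i++) {
    while (amount >= denoms[i]) {
      coins.push(denoms[i]);
      amount -= denoms[i];
    }
  }
  return coins;
}

console.log(greedyChange(6, [1, 3, 4])); // [4, 1, 1] -- but [3, 3] is optimal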
Why It Fails Spectacularly
Our Greedy implementation does exactly one thing: find the nearest pellet and move toward it.
greedy: function(map, pacmanPos, lastMove) {
  var pacmanGrid = toGrid(pacmanPos);
  var nearest = findNearestPellet(map, pacmanGrid);
  var moves = getValidMoves(map, pacmanPos), bestMove = moves[0], bestScore = -Infinity;
  for (var i = 0; i < moves.length; i++) {
    // Score each candidate move purely by how close it lands to that pellet
    var d = deltas[moves[i]];
    var score = -manhattanDistance({x: pacmanGrid.x + d.x, y: pacmanGrid.y + d.y}, nearest);
    if (score > bestScore) { bestScore = score; bestMove = moves[i]; }
  }
  return bestMove;
}
Notice what's missing? Any awareness of ghosts whatsoever.
Greedy Pac-Man will cheerfully walk straight into Blinky's open maw if there's a pellet on the other side. It's the algorithmic equivalent of texting while crossing the street.

Greedy's fatal flaw: pellet tunnel vision
Average survival: ~1 second. Ouch.
3rd Place: Random — The Drunken Master

Every direction is equally valid. That's the whole strategy.
The Ancient History
Random number generation is older than computing itself. Dice from 2400 BC. Yarrow stalks for I Ching divination from 1100 BC China. Heated tortoise shells whose cracks were interpreted as divine messages.
For millennia, generating randomness required physical hardware: dice, coins, shuffled cards. But computers are deterministic machines—how do you generate chaos from clockwork?
Enter Von Neumann (First Appearance!)
During the Manhattan Project, Stanislaw Ulam invented the Monte Carlo method while playing solitaire during recovery from brain surgery. (The best algorithms are born from boredom.) He and von Neumann needed random numbers—lots of them—to simulate neutron diffusion in fissile material.
Von Neumann's solution was the Middle-Square Method (c. 1946):
- Take a seed number
- Square it
- Extract the middle digits
- That's your random number (and your new seed)
Von Neumann famously acknowledged the philosophical absurdity: "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."
But sinful or not, it worked well enough to help design the hydrogen bomb.
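The whole method fits in a few lines of JavaScript (a toy sketch with a fixed 4-digit seed width; names are mine):

// Von Neumann's middle-square method for 4-digit seeds
function middleSquare(seed) {
  var squared = String(seed * seed).padStart(8, "0"); // pad the square to 8 digits
  return parseInt(squared.slice(2, 6), 10);           // middle 4 digits = next "random" number
}

var seed = 5731;
for (var i = 0; i < 5; i++) {
  seed = middleSquare(seed);
  console.log(seed); // 8443, 2842, ...
}

Run it long enough and you'll see why it was eventually retired: many seeds decay to zero or fall into short cycles.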
Why Random Beats Greedy
random: function(map, pacmanPos, lastMove) {
  var moves = getValidMoves(map, pacmanPos);
  var bestMove = moves[0];
  var bestScore = -Infinity;
  for (var i = 0; i < moves.length; i++) {
    var move = moves[i];
    var score = Math.random() * 10 - getOscillationPenalty(pacmanPos, move, lastMove);
    if (score > bestScore) {
      bestScore = score;
      bestMove = move;
    }
  }
  return bestMove;
}
Random has one crucial advantage over Greedy: it doesn't walk into the same trap twice.
By moving unpredictably, Random occasionally stumbles away from danger. It picks up pellets by accident. It sometimes, by pure chance, threads through a gap between ghosts that Greedy would have sprinted directly into.
It's the Drunken Master of algorithms—surviving not through skill but through the sheer improbability of its movements.
Average survival: ~7 seconds. Seven times better than Greedy!
2nd Place: A* — Shakey's Gift to Gaming
The Robot That Started It All
In the late 1960s, at Stanford Research Institute, a robot named Shakey wobbled through rooms full of blocks and ramps—the first mobile robot capable of reasoning about its actions. It needed to navigate without crashing, but Dijkstra's algorithm (1956) explored in all directions like spilling water. Shakey needed to head toward its goal.

Photo: SRI International, CC BY-SA 3.0
Peter Hart, Nils Nilsson, and Bertram Raphael solved this by adding a heuristic—an estimate of remaining distance: f(n) = g(n) + h(n), balancing known cost with estimated future cost.
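In JavaScript, the skeleton looks something like this (a minimal sketch assuming a walkable(x, y) callback and unit step costs; the demo's implementation differs):

// Minimal grid A*: returns the list of steps from start to goal, or null
function aStar(start, goal, walkable) {
  function h(n) { return Math.abs(n.x - goal.x) + Math.abs(n.y - goal.y); } // Manhattan heuristic
  var open = [{x: start.x, y: start.y, g: 0, f: h(start), path: []}];
  var closed = {};
  while (open.length > 0) {
    open.sort(function(a, b) { return a.f - b.f; }); // a real priority queue beats sorting, but this is clearer
    var cur = open.shift();                          // expand the node with the lowest f = g + h
    if (cur.x === goal.x && cur.y === goal.y) return cur.path;
    if (closed[cur.x + "," + cur.y]) continue;
    closed[cur.x + "," + cur.y] = true;
    var steps = [{x: 0, y: -1}, {x: 0, y: 1}, {x: -1, y: 0}, {x: 1, y: 0}];
    for (var i = 0; i < steps.length; i++) {
      var nx = cur.x + steps[i].x, ny = cur.y + steps[i].y;
      if (!walkable(nx, ny) || closed[nx + "," + ny]) continue;
      var g = cur.g + 1; // g(n): known cost so far; h(n) estimates what's left
      open.push({x: nx, y: ny, g: g, f: g + h({x: nx, y: ny}), path: cur.path.concat([steps[i]])});
    }
  }
  return null; // goal unreachable
}

The heuristic is what saved Shakey: as long as it never overestimates the remaining distance, the search can lean toward the goal without sacrificing optimality.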
Smart A* in Pac-Man
Our A* implementation isn't just vanilla pathfinding—it has strategic goal selection:
// Determine goal priority:
// 1. If ghosts are edible -> hunt nearest ghost
// 2. If dangerous ghost is close -> go for power pellet
// 3. Otherwise -> collect regular pellets
if (edibleGhosts.length > 0) {
  // Hunt nearest edible ghost
  goalGrid = nearestGhost;
} else if (closestDangerDist < 8) {
  // Ghost is close! Get a power pellet
  goalGrid = findNearestPowerPellet(map, pacmanGrid);
} else {
  // Safe - collect regular pellets
  goalGrid = findNearestRegularPellet(map, pacmanGrid);
}
It also incorporates danger costs—paths near ghosts have higher penalties. A* doesn't just find the shortest path—it finds the safest short path.
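Sketched as a step-cost function, the idea looks like this (the weights are illustrative, not the demo's actual tuning):

// Danger-weighted step cost for A*: tiles near ghosts cost more to enter
function stepCost(tile, ghostTiles) {
  var cost = 1; // base cost of moving one tile
  for (var i = 0; i < ghostTiles.length; i++) {
    var d = manhattanDistance(tile, ghostTiles[i]);
    if (d < 4) cost += (4 - d) * 10; // steep surcharge inside a 4-tile danger radius
  }
  return cost;
}

Feed that in as the increment to g and A* will happily take a longer corridor rather than brush past Blinky.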

Methodical, efficient, but not paranoid enough
Average survival: ~17.5 seconds. Solid, reliable performance.
1st Place: Minimax — The Cold War Logic of a Paranoid Yellow Circle

Poker, Paranoia, and the Bomb
To understand Minimax, you need to understand John von Neumann.
Born in Budapest in 1903, von Neumann was a prodigy among prodigies—part of the legendary "Hungarian Martians" who reshaped American science. At Princeton's Institute for Advanced Study, he became fascinated with a question that standard probability couldn't answer:
How do you optimize against an opponent who's actively trying to beat you?
Roulette wheels don't care if you lose. But poker players bluff. They hide information. They maximize their gain at your expense.
In 1928, von Neumann published "Theory of Parlor Games," proving the Minimax theorem: in a two-player zero-sum game, each player has a strategy that minimizes their maximum possible loss.
In other words: assume your opponent is a genius. Assume they'll always make the move that hurts you most. Then pick the move that leaves you least hurt in that worst case.
From Poker to Armageddon
Von Neumann wasn't just an academic—he worked on the Manhattan Project and later served on the Atomic Energy Commission. His Minimax philosophy permeated Cold War strategy, underpinning Mutually Assured Destruction (MAD): create a situation where the opponent's "maximum loss" is so unacceptable they're forced into strategies that minimize it.
Minimax in Pac-Man
Our implementation builds a game tree to a depth of 6, alternating between MAX layers (Pac-Man maximizes) and MIN layers (ghosts minimize). The evaluation function weighs ghost proximity (heavily penalized), edible ghost hunting (heavily rewarded), and power pellet value (boosted when threatened). Alpha-beta pruning keeps it fast enough for real-time play.
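Stripped to its skeleton, the search looks like this (a hedged sketch: evaluate(), state.pacmanMoves(), state.ghostMoves(), and state.apply() stand in for the demo's actual game-state plumbing):

// Minimax with alpha-beta pruning over alternating Pac-Man (MAX) and ghost (MIN) layers
function minimax(state, depth, alpha, beta, maximizing) {
  if (depth === 0 || state.gameOver) return evaluate(state);
  if (maximizing) {
    // Pac-Man's layer: pick the move with the best worst-case outcome
    var best = -Infinity, moves = state.pacmanMoves();
    for (var i = 0; i < moves.length; i++) {
      best = Math.max(best, minimax(state.apply(moves[i]), depth - 1, alpha, beta, false));
      alpha = Math.max(alpha, best);
      if (beta <= alpha) break; // prune: MIN would never let the game reach here
    }
    return best;
  } else {
    // Ghosts' layer: assume they choose whatever hurts Pac-Man most
    var worst = Infinity, moves = state.ghostMoves();
    for (var i = 0; i < moves.length; i++) {
      worst = Math.min(worst, minimax(state.apply(moves[i]), depth - 1, alpha, beta, true));
      beta = Math.min(beta, worst);
      if (beta <= alpha) break;
    }
    return worst;
  }
}

Called with depth 6 and alpha/beta seeded to negative and positive infinity, the pruning cuts enough of the tree to stay comfortably inside a frame budget, which is how the decision time lands at 0.04ms in the benchmark.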
The Irony of Perfection
Here's the beautiful irony: Minimax assumes the ghosts are perfect adversaries. It prepares for opponents who will always make the optimal move against Pac-Man.
But the ghosts aren't perfect. They're not even good. They're hard-wired, animatronic patterns from 1980—Blinky's relentless chase, Pinky's buggy ambush, Inky's confusing flanks, Clyde's cowardly retreats.
Minimax is playing 4D chess against opponents who are playing checkers.
The result? Minimax often seems overly cautious—dodging threats that aren't really threats, optimizing against a genius adversary that doesn't exist. It's the Cold War logic of the bomb applied to a yellow circle eating dots: paranoid, calculating, and ultimately... over-prepared for an apocalypse that was never coming.

Paranoid, calculating, victorious—assuming an optimal opponent!
Average survival: ~32 seconds. Nearly double A*'s performance.
The Genealogy of Code
| Algorithm | Era | Origin Story | Core Philosophy |
|---|---|---|---|
| Greedy | 1950s-60s | Optimization research, Dijkstra, Kruskal, Prim | "Take the best immediate option" |
| Random | Ancient / 1946 | Dice, I Ching → Monte Carlo, von Neumann's Middle-Square | "When in doubt, roll the dice" |
| A* | 1968 | Shakey the Robot at SRI | "Balance known cost with estimated future" |
| Minimax | 1928 / Cold War | Von Neumann's poker games → nuclear strategy | "Assume the worst, prepare accordingly" |
These algorithms aren't just code—they're artifacts of human history. Von Neumann's paranoia about Soviet intentions lives on in every game tree search. Ulam's boredom during convalescence echoes in every random number generator. Shakey's wobbling navigation through a room of blocks enabled every pathfinding algorithm in every video game since.
Try It Yourself
Run the benchmark. Watch Random stumble to third place. Watch Greedy die immediately. Watch A* navigate with mechanical efficiency. Watch Minimax dominate with Cold War paranoia.
Want to go further? The code is begging for improvements:
- Deepen the Minimax search beyond 6 plies—how much better can it get?
- Implement Expectimax, which models the ghosts as probabilistic rather than optimal adversaries (see the sketch after this list)
- Encode actual ghost behaviors into the evaluation function—if you know Clyde retreats at 8 tiles, exploit it
- A hundred lines of well-tuned algorithm could absolutely wipe the floor with these 1980s ghosts
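For the Expectimax idea, the change from Minimax is small (a hedged sketch reusing the same assumed state interface as the Minimax skeleton above): replace the MIN layer with an averaging chance layer.

// Expectimax: ghosts become a chance node instead of a perfect adversary
function expectimax(state, depth, maximizing) {
  if (depth === 0 || state.gameOver) return evaluate(state);
  if (maximizing) {
    var best = -Infinity, moves = state.pacmanMoves();
    for (var i = 0; i < moves.length; i++) {
      best = Math.max(best, expectimax(state.apply(moves[i]), depth - 1, false));
    }
    return best;
  }
  // Chance layer: average over ghost moves (uniform here; weighting by each
  // ghost's actual 1980 rules would be even better)
  var moves = state.ghostMoves(), total = 0;
  for (var i = 0; i < moves.length; i++) {
    total += expectimax(state.apply(moves[i]), depth - 1, true);
  }
  return total / moves.length;
}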
Then think about this: I built this entire demo in an afternoon, mostly through conversation with an LLM. The barrier to experimenting with classic computer science has never been lower.
We stand on the shoulders of giants—von Neumann, Ulam, Dijkstra, Hart, Nilsson, Raphael, Iwatani. Their algorithms, forged in the crucible of war and the whimsy of arcade entertainment, are now accessible to anyone with a browser.
Use them. Learn from them. Build on them.
A Final Note on Quarters
Remember those quarters I mentioned scrounging for in the 80s? Here's where they went.

Photo: Official GDC, CC BY 2.0
Toru Iwatani, the designer who gave us Pac-Man, never received more than his regular Namco salary for creating one of the most successful games in history. No royalties, no bonus, no equity stake. Just a paycheck.
And Namco's revenue model was gloriously simple: they sold arcade cabinets. Whole machines, turnkey systems. A Pac-Man cabinet cost around $2,500 in 1980 (about $9,500 today)—at a busy arcade, it could pay for itself in a month. The buyer paid Namco upfront, plugged the cabinet in, and kept every quarter that dropped. No subscriptions, no microtransactions, no licensing deals (those came later). Just hardware for cash.
A simpler era—in algorithms and in business.
Credits
- Original Pac-Man game design: Toru Iwatani, Namco (1980)
- Ghost AI logic: Shigeo Funaki, based on Iwatani's personality concepts
- Base JavaScript Pac-Man engine: Adapted from daleharvey/pacman
- AI implementations and modifications: Built with Claude Opus 4.5 and GitHub Copilot
- Historical research: Google Deep Research, compiled from various sources on von Neumann, the Manhattan Project, SRI's Shakey project, and Namco's development history
- Diagrams: Generated with Gemini Nano Banana Pro