Is ChatGPT better than Akinator?
I was asking myself this on a very boring evening. My thought process was: if OpenAI has access to so much data and Akinator is a deterministic algorithm, maybe ChatGPT has some kind of edge? And as a child of the 2000s, when Akinator and The Sims 2 were all the rage, I couldn’t not test this.
What is Akinator?
For those who haven’t had the pleasure of engaging with this wonderful genie, Akinator is a website that guesses a person/thing you’re thinking of. It’s basically the 20 Questions game, except there’s an unlimited number of questions (I think), and it’s way smarter than your friend.
Now, the algorithm.
The algorithm (in my words)
Akinator isn’t “mind-reading”; it’s doing really aggressive guessing with probabilities.
At the start, it has a massive list of possible answers (characters, objects, etc.). Each one has a kind of profile behind it: for a bunch of questions (“Is your character real?”, “Are they from a TV show?”, “Are they male?”, “Are they associated with music?”), it stores what people usually answered for that specific character.
When you answer a question, Akinator updates its confidence for every possible candidate. If you say “yes” to something that matches a candidate’s profile, that candidate gets a boost. If you say “no” to something that candidate is strongly associated with, it drops down the rankings. And because it allows answers like “probably”, “probably not”, and “don’t know”, it can do softer updates instead of hard elimination.
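That soft-update idea can be sketched in a few lines of Python. To be clear, this is my own toy model, not Akinator’s actual code: the candidates, questions, profile probabilities, and answer weights are all made up for illustration.

```python
# Toy model of Akinator-style "soft" updates: each candidate has a
# probability that a typical player answers "yes" to each question,
# and user answers scale candidate weights instead of hard-eliminating.

# Hypothetical profiles: P(answer "yes" | candidate) for each question.
PROFILES = {
    "Harry Potter": {"real?": 0.05, "wears glasses?": 0.95},
    "Elon Musk":    {"real?": 0.98, "wears glasses?": 0.10},
}

# How much weight each possible user answer gives to the "yes" side.
ANSWER_WEIGHT = {"yes": 1.0, "probably": 0.75, "don't know": 0.5,
                 "probably not": 0.25, "no": 0.0}

def update(weights, question, answer):
    """Scale every candidate's weight by how well the answer fits its profile."""
    w_yes = ANSWER_WEIGHT[answer]
    new = {}
    for name, w in weights.items():
        p_yes = PROFILES[name][question]
        # Blend the yes/no likelihoods by the softness of the answer:
        # "probably" nudges a candidate instead of killing it.
        likelihood = w_yes * p_yes + (1 - w_yes) * (1 - p_yes)
        new[name] = w * likelihood
    total = sum(new.values())
    return {name: w / total for name, w in new.items()}

weights = {"Harry Potter": 0.5, "Elon Musk": 0.5}
weights = update(weights, "real?", "no")            # boosts Harry Potter
weights = update(weights, "wears glasses?", "yes")  # boosts him further
print(max(weights, key=weights.get))  # Harry Potter
```

Note how a “no” to “real?” doesn’t delete Elon Musk outright, it just makes him very unlikely — which is exactly why Akinator survives the occasional wrong answer.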
The key part is how it chooses the next question. It doesn’t just ask random trivia—it tries to pick a question that will be most useful for narrowing things down. In practice, that usually means a question that splits the remaining “top candidates” as evenly as possible. If half the likely candidates would be “yes” and half would be “no”, your next answer gives it a lot of information either way. If a question would be “yes” for basically everyone still in the running, it’s pointless, so it avoids that.
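The “split the remaining candidates as evenly as possible” heuristic can be sketched too. Again, this is a toy model with invented names and numbers, not the real algorithm: it just scores each question by how close its probability-weighted “yes” mass is to 50%.

```python
# Toy question selection: prefer the question whose expected "yes" mass
# over the current candidate weights is closest to 0.5 (an even split).

def best_question(weights, profiles, questions):
    """Return the question that splits the weighted candidates most evenly."""
    def yes_mass(q):
        return sum(w * profiles[name][q] for name, w in weights.items())
    # A question everyone would answer the same way has a mass near 0 or 1,
    # so it carries almost no information and loses to an even splitter.
    return min(questions, key=lambda q: abs(yes_mass(q) - 0.5))

profiles = {
    "Harry Potter": {"real?": 0.05, "male?": 0.95},
    "Hermione":     {"real?": 0.05, "male?": 0.05},
}
weights = {"Harry Potter": 0.5, "Hermione": 0.5}

# "real?" is useless here (both are fictional); "male?" splits them evenly.
print(best_question(weights, profiles, ["real?", "male?"]))  # male?
```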
It repeats this loop:
- Keep a ranked list of candidates (with confidence scores).
- Ask a question that’s likely to separate those candidates.
- Update confidence scores based on your answer.
- When one candidate becomes way more likely than the rest, it guesses.
And the reason it “gets better” over time: when it fails, users tell it the right answer, and it can update or create that character’s profile—basically learning new associations and fixing wrong ones.
A large enough number of "if"s is indistinguishable from a neural network.
My Findings!
I decided to test this by splitting the challenge into 4 trials:
- Very Easy (Elon Musk)
- Easy (Harry Potter)
- Medium (Jack Sparrow)
- Hard (ChatGPT/Akinator — they were guessing each other :p)
First Round — Very Easy (Elon Musk)
This round was even: both guessed in 17 questions. What I found peculiar is that they both asked the same first question: “Is this a real person?” That’ll come into play later, so keep that in mind.
Akinator 1 : ChatGPT 1
Second Round — Easy (Harry Potter)
Since in the previous round both asked whether the character was real, I thought fiction was in play. Oh boy, was I wrong…
Akinator did its thing in 20 questions. What I found interesting is that these questions:
13. Does your character wear round glasses?
14. Is your character in Dumbledore's Army?
15. Is your character's wife dead?
…all pointed to Harry Potter, so I figured this was obviously our young wizard.
But then these were the questions that determined the outcome:
19. Is your character's blood type O?
20. Is your character famous because of YouTube?
Which honestly just confused me.
In comparison, ChatGPT felt like 40+ questions was totally fine. And no—it didn’t guess it by the end. Here are a few examples of what it asked:
Is he famous mainly for a specific historical event or incident, rather than a career (for example: survivor, witness, hero, or central figure in one major story)?
Is this person famous worldwide, not just in one country? 🌍
Is he famous for being extremely rich or connected to extreme wealth, even if he didn’t earn it himself? 💰
Is he known mainly for a single role or title (for example: “the face of X”, “the first person to do Y”, “the man involved in Z”), rather than ongoing fame?
Is this person associated with a large organization or institution (religious institution, international group, secretive org, or long-standing institution), but not as a politician or CEO?
It pretty clearly assumed that only real-world people were valid choices, which makes no sense given that it had literally asked me earlier whether the character was real. It also kind of blamed me for not stating that characters could be fictional, which felt a bit unfair considering the context.
Akinator 2 : ChatGPT 1
Third Round — Medium (Jack Sparrow)
Maybe I was wrong to call this a medium round, as Mr. Akinator guessed it in 12 questions. :)
So I had high hopes for Mr. ChatGPT. :)
ChatGPT asked 25 questions in total. And since it now knew fictional characters were in play, there was no excuse for a misunderstanding like the previous one.
Most notable questions:
Question 12:
Is this character from a movie released after 2000? 🎬🕰️
Question 19:
Is this character a criminal or outlaw (robber, con artist, thief, etc.) in the movie? 🚔💰
Question 20:
Is this character a thief or con artist rather than a violent criminal? 🕵️♂️💼💎
Question 24:
Is this character known for being charismatic, talkative, or having a strong personality (jokes a lot, big presence on screen)?
Question 25:
Is this character from a heist or crime-focused movie (robberies, scams, illegal deals), even if it’s mixed with action or comedy? 🏦🎭
The final guess:
My guess is: Borat Sagdiyev 🧔♂️🇰🇿
Akinator 3 : ChatGPT 1
I've decided to end the experiment here...
Conclusion
So yeah... after all this, I don’t think ChatGPT is “better than Akinator” at being Akinator.
Akinator is a one-trick pony in the best way possible. It has a single job: ask the most useful next question, shrink the search space, and guess fast. And it shows. Even when it asks borderline unhinged questions near the end, it still somehow lands the plane.
ChatGPT is different. It’s smart, but it’s general smart. It can reason, explain, and hold context...but it doesn’t naturally run a tight guessing loop unless you force it into one. In my tests it either took way longer than necessary or drifted into the wrong assumptions (like treating everything as a real person), even after it already asked whether the character was real. And the Jack Sparrow miss was the nail in the coffin—good questions, wrong landing.
So the takeaway isn’t “LLMs are worse,” it’s “the genie is optimized.” Akinator wins because it’s designed to win this exact game.