It’s very easy to fool humans. We love to fill in gaps.
We see patterns and assume intelligence. We saw regular pulses from stars and called them Little Green Men. We see faces in cars and washing machines. Our brains fill in gaps when our eyes saccade.
And when we see a string of text that is coherent and reinforces our bias, we think that what produced it must be intelligent.
The Chinese Room is a thought experiment about artificial intelligence. The set-up is simple. Imagine there is a room with a letterbox and a massive stack of books. A human who does not speak Chinese is put in the room and told that for every Chinese message posted through the letterbox, they look up the symbols in the stack of books, copy out the response symbols, and post them back. Any Chinese speaker posting messages into the box would think they are talking to someone who understands Chinese, but the person in the box is just looking it up.
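The room's protocol is just a lookup: symbols in, symbols out, no comprehension in between. A toy sketch (the rule book contents here are made up for illustration):

```python
# A toy Chinese Room: the "person" is a pure lookup with no idea
# what the symbols mean. The rule book below is invented for the demo.
RULE_BOOK = {
    "你好": "你好！",
    "你会说中文吗？": "当然会。",
}

def person_in_the_room(message: str) -> str:
    # Find the symbols in the stack of books and copy out the reply.
    # No understanding happens here; unknown messages get a shrug.
    return RULE_BOOK.get(message, "？")
```

To the Chinese speaker outside, `person_in_the_room("你好")` looks like a conversation; inside, it is only a table.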
A large language model works the same way: it breaks the message into tokens, runs them through a statistical model, and predicts the most likely next token based on the billions of tokens it was trained on. Then it adds a bit of randomness by sometimes choosing the 2nd or 3rd most likely token instead.
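That "sometimes pick the 2nd or 3rd most likely" step is usually called top-k sampling. A minimal sketch, assuming we already have a vocabulary and a score (logit) for each candidate token:

```python
import math
import random

def softmax(logits):
    # Turn raw scores into probabilities that sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits, vocab, k=3, seed=None):
    # Top-k sampling: keep only the k most likely tokens, then draw one
    # at random weighted by probability. Most of the time the top token
    # wins, but occasionally the 2nd or 3rd is chosen instead.
    rng = random.Random(seed)
    probs = softmax(logits)
    ranked = sorted(zip(vocab, probs), key=lambda t: t[1], reverse=True)[:k]
    tokens, weights = zip(*ranked)
    return rng.choices(tokens, weights=weights)[0]
```

With `k=1` this degenerates to always picking the single most likely token; raising `k` is what makes the output feel less mechanical. A real model computes those logits with a neural network over the whole context, but the sampling loop itself really is this simple.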
The LLM then posts the response back, and the human outside the box starts to think that the box is intelligent. And some psychologists look at it and say it’s a prediction machine, and so is the human brain, so they’re the same thing.
But it’s not thinking, even if the chat box says it is. It’s not reasoning, even if the model has “reasoning” in its name. It’s just figuring out the next word. Sometimes it will figure out the next 10 words, like a chess grandmaster scanning future moves, but it has no motivation. It has no self-reflection. It does not feel pain or remorse; it just keeps playing its own version of Scrabble until it draws a blank or runs out of tiles.
The thinking didn’t happen in the box. It happened in all the brains in all the people who wrote the words to put in the box, and it happens in all the brains posting things into the box, longing for connection or looking for a minion to help them steal the moon.
Humans love to find intelligence and faces in places where they don’t exist, because our brains are wired to find potential mates, friends and foes. A statistical model is none of those.