
Rachael Tatman

Are BERT and other large language models conscious?

NLP models that produce fluent-sounding text are coming into vogue again. (I say "again" because systems like Eliza and Markov chain text generators have been around for decades.) A new crop of systems trained using the transformer deep learning architecture, including BERT and GPT-2, has been setting new high-water marks across various NLP leaderboards. It's an exciting time!

The problem is that, along with that excitement, we see an increasing desire to assign human-like cognition to text generated by NLP systems. Take this tweet for example:

First I want to make it very clear that I'm not trying to dunk on Tyler here. I've seen similar questions asked by lots of very smart folks and I think it's a perfectly reasonable thing to wonder about.

I genuinely understand the desire to ascribe consciousness to ML systems. After all, folks have been hollering about AGI and the singularity for years. And humans have a deep-seated desire to see human qualities in non-human things.

That said, this very natural tendency, compounded by the fever-pitch hype cycle and cherry-picked examples, could lead a casual observer of the field to genuinely start wondering: are these systems actually showing patterns of humanlike thought?

Short answer: no.

Systems like BERT and GPT-2 do not have consciousness. They don't understand language in a grounded way. They don't keep track of information between different generated utterances. They don't "know" that down is the opposite of up or that three is more than two or that a child is a kind of human.

What they do have are highly, highly optimized models of which (usually English) words humans tend to use together, and in what order. In other words, they're very good statistical approximations of patterns of language use. This ACL paper has some good experimental results that provide evidence for this, as well as some of the accompanying drawbacks.
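To make "statistical approximation of patterns of language use" concrete, here's a minimal sketch (in Python; the toy corpus and function names are invented purely for illustration) of the humblest member of this family: a bigram Markov-chain generator like the ones mentioned above. It is nothing like a transformer internally, but it shows how you can produce fluent-ish text from nothing more than counts of which words follow which, with no understanding attached:

```python
# Toy bigram text generator: learn which word tends to follow which,
# then sample from those counts. Purely illustrative -- transformers are
# vastly more sophisticated, but the flavor of the objective is the same:
# predict plausible words from surrounding words, with no grounding.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which in the "training data".
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start="the", length=8):
    """Sample a word sequence purely from observed co-occurrence counts."""
    words = [start]
    for _ in range(length - 1):
        counts = following[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate())  # e.g. "the cat sat on the rug . the"
```

BERT and GPT-2 replace these raw counts with hundreds of millions of learned parameters and much longer context windows, but the output is still driven by patterns of word use in the training text, not by knowledge of the world.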

Why is this important?

On the one hand, it's not! BERT, GPT-2, et al. aren't designed to be grounded language models or to include knowledge about relationships between entities. There's absolutely nothing in the algorithm design or training data to ensure that text generated by these models is factual. This isn't a drawback of the models: it's just not in scope.

On the other hand, it's very important that users of these models understand that this is the case. These are language models and, like all language models, they're designed to be components in larger NLP systems rather than an entire system in themselves.
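As an illustration of "component in a larger system", here's a hedged sketch (assuming the Hugging Face transformers library and scikit-learn are installed; the example sentences and labels are made up) of the typical pattern: a pretrained BERT model supplies sentence representations, and a separate downstream model makes the actual task decision:

```python
# Sketch: BERT as one component (a feature extractor) inside a larger
# system -- here, a tiny sentiment classifier. BERT itself never "decides"
# anything about sentiment, facts, or the world.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    """Turn sentences into fixed-size vectors using BERT's [CLS] token."""
    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, return_tensors="pt")
        hidden = bert(**batch).last_hidden_state  # (batch, tokens, 768)
    return hidden[:, 0, :].numpy()  # the [CLS] vector for each sentence

# The task decision lives in a separate, downstream model.
train_texts = ["I loved this movie", "Absolutely terrible acting"]
train_labels = [1, 0]  # invented labels for illustration
classifier = LogisticRegression().fit(embed(train_texts), train_labels)

print(classifier.predict(embed(["What a wonderful film"])))  # e.g. [1]
```

The design point is that the language model supplies representations of language; everything task-specific (and anything resembling "deciding" or "knowing") has to be built around it.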

So, while it's definitely fun to play around with text generated by these models, it's akin to interacting with a parrot that's been taught to mimic your ringtone. It may sound like a phone, but it has none of the other features that make it one.

Comments (5)

Thomas H Jones II

If you're asking the question, "are these nascent consciousnesses," you probably aren't reading much in the area of consciousness research, or even layman-level articles on, say, humans' sense of musicality versus other animals'.

It seems like the things that are so "second nature" to people are some of the most complex things we do and understand, which, by extension, makes it that much harder for people to understand why we haven't been able to reproduce that "simple" behavior synthetically.

It's really fun sitting in on an ML presentation given by a vendor's non-technical staff and asking them capabilities questions. At the end you find yourself thinking, "you're involved in trying to sell this stuff, how can you be giving me blank stares on so many of my questions?"

Of course, you need that fun, because the creeping ickiness of the currently available capabilities dawns on you as you think past just the use case they're selling you on. I mean, Amazon, Azure, etc. all seem to be selling some subset of China's "social credit" system to anyone with a credit card or a purchase order. :p

Marko Shiva

I guess the people who made that training data set just didn't put in anything as a response to the consciousness problem. Maybe intentionally.
Also, the thing about ML, from my point of view, is that DNNs are very resource hungry, and I think cognitive.ai is a better answer to the idea of a conscious machine than just a fusion of a big number of "narrow AI" networks, each specific to one problem.
Either could lead us in that direction; I just think cognitive.ai is far less resource hungry than a fusion of thousands, if not millions, of ML models.

pentacular

Systems like BERT and GPT-2 do not have consciousness. They don't understand language in a grounded way. They don't keep track of information between different generated utterances. They don't "know" that down is the opposite of up or that three is more than two or that a child is a kind of human.

If these are the requirements for consciousness, you've excluded most animals and some cognitively impaired humans, which I think most people would consider to be conscious in some regard.

Which means that you're probably talking about something quite different to consciousness.

I suggest thinking about where consciousness arises (in social and hunting animals) and the kinds of problem that consciousness solves (predicting others, and explaining oneself to others), in order to think about what consciousness actually is.

Rachael Tatman

These are four things that are true about language models (they are not conscious AND they are not grounded) rather than me trying to define consciousness using three bullet points. :)

pentacular

It's always easy to claim that things lack or possess undefined characteristics.