Systems like BERT and GPT-2 do not have consciousness. They don't understand language in a grounded way. They don't keep track of information between different generated utterances. They don't "know" that down is the opposite of up, or that three is more than two, or that a child is a kind of human.
If these are the requirements for consciousness, you've excluded most animals and some cognitively impaired humans, whom I think most people would consider conscious in some regard.
Which means that you're probably talking about something quite different to consciousness.
I suggest thinking about where consciousness arises (in social and hunting animals) and the kinds of problems that consciousness solves (predicting others, and explaining oneself to others), in order to think about what consciousness actually is.
These are four things that are true of language models (they are not conscious AND they are not grounded), not an attempt by me to define consciousness in three bullet points. :)
It's always easy to claim that things lack or possess undefined characteristics.