Craig Nicol (he/him)

Originally published at craignicol.wordpress.com

The conscious machine

This is a fantastic explainer of the threats, risks, and opportunities of AI, and a prompt to think about the nature of consciousness. Can we ever truly say what a machine consciousness is, or how it feels?

Max Tegmark – When Our Machines Are Smarter Than Us – Clear+Vivid with Alan Alda

Up until now, we’ve been smarter than our tools. But that might change drastically sooner than we know. Isn’t it time to think about that?

As a white man, I have no idea how it feels to walk this world in darker skin. I can understand fear, but not the constant fear of being stopped by police, of watching my back.

I can understand what is happening, and fight to change it, but I’ll never understand how it feels to be in that position. Equally, a machine will never be able to understand how that feels, although it may be able to approximate the behaviours expected of someone who does.

AI is being built from two dominant perspectives: a Western one and a Chinese one. We cannot understand what a conscious machine will be like, or how it feels, but we can understand the environment it is created in.

In both the USA and China, that environment is one where those in power actively dehumanise sections of the community: Muslims at the moment, and black people for centuries.

That environment is the context in which these consciousnesses are created. And whether or not the engineers agree with the government’s bias, their data will always be informed by it, especially where that AI is trained on historical data, news, or social media.

How deeply will that consciousness embed the ideas of division and hatred: that one group is better than another, that one group is less than human? And if that’s its world-view, what decisions will it make?

And it’s not theoretical. We know machine learning algorithms routinely discriminate: against darker skin in facial recognition, against non-European names and female applicants in CV screening, and more.

Without active anti-discrimination training, all these algorithms will build these white supremacist biases in, and that will be their world-view. Division and discrimination will be the water they swim in, and they won’t be able to see it.

Because those who train them are unable to see it.
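One way to start making that water visible is to audit a model’s decisions by group. As a minimal sketch (the data, the group labels, and the threshold below are illustrative assumptions, not a complete fairness methodology), this checks a binary classifier’s selection rates against the common four-fifths rule of thumb:

```python
# Minimal bias-audit sketch. The decisions, group labels, and the
# four-fifths threshold are illustrative assumptions only.

from collections import defaultdict

def selection_rates(decisions):
    """Per-group rate of positive outcomes from (group, accepted) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, accepted in decisions:
        totals[group] += 1
        positives[group] += int(accepted)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical classifier outputs: (applicant's group, positive decision?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
print(rates)  # ≈ {'group_a': 0.67, 'group_b': 0.33}

if disparate_impact(rates) < 0.8:  # the "four-fifths" rule of thumb
    print("Possible disparate impact: inspect the model and its training data.")
```

A check like this only surfaces one narrow symptom; it tells you where to start asking questions of the model and its training data, not how to fix them.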

Machines don’t have to be smart to be dangerous. But a machine that embeds that bias into its own world-view can discriminate opaquely, just as systemic racism doesn’t have to use discriminatory language to prevent black kids from getting to university.

Just one nudge after another to say “you don’t fit”, “this isn’t your world”, “try something else”, “behave more white”, “look less black”. (Why I’m No Longer Talking To White People About Race has a great section on a hypothetical black kid growing up against these barriers.)

If you’re not actively building anti-discrimination into your AI, you are perpetuating white supremacy.

You are supporting fascism.

How will you be anti-racist today?
