Does anyone think AI will truly ever be self-aware?

1 min read

As I have learned more and more about programming, I wonder if this will ever be a thing. It seems so far away. I have no doubt there will be highly advanced and convincing AI, but I wonder whether it will be sentient to the degree that we humans are.

DISCUSS (6)

I'm going to go out on a limb here and say that there are already examples of self-aware, and even sapient, AIs out there; it's just that most people don't recognize them as such.

For example, look at recent advances in collision avoidance in industrial robotics. To avoid collisions like that, the bots have a 'simulated' proprioceptive sense: they know exactly where each part of them is in relation to the surrounding environment. Proprioception, whether 'simulated' or not (and I contend that this example is not simulated, it's just different from how animals do it), implies some form of self-awareness, because you have to be able to differentiate between yourself and your environment for proprioception to be useful at all. The same example can serve as a demonstration of sapience as well: the robot is demonstrably reasoning about the future, even if in a very limited fashion.
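The self/other distinction that makes proprioception useful can be sketched in a few lines. This is purely illustrative (the `Link` body model and `is_self` function are invented here, not how any real industrial controller works): the robot carries a model of its own body and can test whether a point in space is "self" or "environment".

```python
# Hypothetical sketch: a robot distinguishing its own body from the
# environment, the minimal self/other test proprioception relies on.
from dataclasses import dataclass

@dataclass
class Link:
    """One rigid segment of the robot, modeled as a sphere for simplicity."""
    center: tuple   # (x, y, z) position in world coordinates
    radius: float

def is_self(point, links, margin=0.0):
    """Return True if `point` lies within the robot's own body model."""
    return any(
        sum((p - c) ** 2 for p, c in zip(point, link.center)) ** 0.5
        <= link.radius + margin
        for link in links
    )

body = [Link((0.0, 0.0, 0.5), 0.2), Link((0.0, 0.0, 1.0), 0.15)]
print(is_self((0.0, 0.0, 0.55), body))  # inside the first link -> True
print(is_self((1.0, 1.0, 1.0), body))   # far from both links -> False
```

A real collision-avoidance system uses far richer geometry and forward kinematics, but the principle is the same: a query against an internal model of "me".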

The biggest issue I see with recognition here is the lack of free will. Many people equate not having free will with not being self-aware or sapient, but that's really just an arbitrary (and arguably wrong) assumption based on the fact that you can't, by definition, have free will without self-awareness. People often forget that just because A implies B does not mean that B implies A (and this is just a tiny example of it). Even beyond that, the same arguments brought up by people claiming humans have intelligence superior to animals can be applied here as well.


Frankly, no. If there is one thing that programming has taught me it's that computers are tools that only do what they are programmed to do. We sometimes tell them to do really complex things and can observe "emergent" and unexpected behavior (including bugs). But my experience so far tells me that no combination of machine instructions could possibly be sentient.

All we continue to do is improve our simulation of intelligence. And I definitely think we will see machine learning used to make more impactful decisions as time goes on. But any post-apocalyptic robot future will likely be attributed to bugs (or hacks) rather than the malice of a sentient computer.


It depends on what you consider self-awareness. An agent that is able to distinguish between other things and itself may be considered self-aware, and I don't think that would be hard to build.

If you mean self-awareness like ours, I think it may be technically possible right now, but it isn't feasible in terms of resources. Maybe it is, but it wouldn't be worth it.

We have the pieces of software needed to do it, and we are close to having enough data, but we don't have the hardware to run it efficiently in real time.

Also, it seems dangerous (and many experts agree that it actually is). I don't think we'll see it anytime soon.


I think yes, but only once we find an economical way to produce enough energy to power such enormous computation.

Just look at the power consumption of a single server serving hundreds of static HTML pages, then compare it with an algorithm that learns to learn, guided beforehand to develop senses and feelings.

As my peers said, the biggest challenge will be hardware.


Unless it becomes something more than a glorified image-recognition system, I wouldn't give it much thought. It's just another buzzword: thanks to advances in hardware, some complex algorithms can now run in real time, and I'm guessing that's all there is to it.

We don't understand how our own brains work, and I recently read that the digital brain project that was supposed to map the human brain has failed. Sentience is a mystery to us, and unless we are given AI constructed by aliens, or one gets sent back through time by evolved humans, AI will be nothing more than an advanced call-center bot and an advertising facilitator.

This is just my opinion, I am not involved in AI or ML.


I think self-awareness partially implies that you can examine and modify your own body or mind, so in hardware terms the computer itself would need to be able to rewire its circuitry on its own. We barely have the first step toward that with things like biological computers or FPGAs.

Another implication is that if the computer can change itself but chooses not to, we wouldn't be able to tell whether it was self-aware. So for us to be sure we've made a self-aware AI, that AI would need some goal to pursue and feedback on whether the changes it made to itself helped it toward that goal. Humans have a serotonin feedback loop that encourages us to do things that make us feel good; we're pre-wired for survival via natural selection and evolution.

So we then need to design the system that the AI would use as its 'serotonin'. Which is weird to even think about, because we have biological bodies that react to chemistry, and a computer is not biological. Whatever that feedback is, it would have to reinforce the goal (e.g. giving the computer more resources, or making it faster and faster). So who knows.
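The 'serotonin' idea above can be sketched as a toy hill-climbing loop: the agent tries a random change to itself and keeps it only when a feedback signal improves. Everything here (the `reward` function, the `speed` parameter) is invented for illustration and is nothing like a real reinforcement-learning system:

```python
# Toy sketch of a self-modification loop driven by a feedback signal.
import random

def reward(speed):
    """Hypothetical feedback signal: peaks when speed == 5."""
    return -(speed - 5) ** 2

def self_improve(steps=200, seed=0):
    """Keep a random self-modification only when the feedback improves."""
    rng = random.Random(seed)
    speed = 0.0                                 # the agent's modifiable "wiring"
    best = reward(speed)
    for _ in range(steps):
        candidate = speed + rng.uniform(-1, 1)  # try changing itself
        r = reward(candidate)
        if r > best:                            # positive feedback: keep it
            speed, best = candidate, r
    return speed

print(self_improve())  # settles near 5.0, where the feedback is strongest
```

The interesting design question is exactly the one raised above: in a real system, who defines `reward`, and what stops the agent from gaming it?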


Brian Barbour, Software Engineer at Community Brands and Javascript enthusiast.