I came across a recent DeepMind discussion arguing that AI might simulate intelligence convincingly without actually being conscious.
It got me thinking…
As these systems get better at sounding human, does it even matter whether there's real "feeling" behind the responses?
Or is usefulness the only thing that counts in real-world applications?
Wondering how people here think about this. Are we heading toward something deeper, or just better simulations?