Below is a comment I left on @sylwia-lask's post If AI Existed in 2011, Would We Still Have the Modern Web?, which I thought warranted its own post.
Disclaimer on this post: I'm a general cynic with very strong decolonial tendencies. As such, I'm very cynical about "ai" in general, and especially about how humanity, and by extension capitalism, will use it. Please keep that in mind as you read the rest of this post.
Disclaimer: I'm not a fan of "ai" for a lot of reasons, but most of all because it's essentially really advanced T9/Markov chains on steroids, and because a lot of people who drink the koolaid seem to forgo critical thinking. So if you detect disdain in my tone, you're reading it right. That said, this disclaimer is relevant, and I am going to respond in depth.
You touch on exactly the core of "ai".
Because the inconvenient truth is that "ai" has a knowledge horizon problem: much like the human brain, its internal statistical model and inference machinery can only refine the statistical connections between the vectors it already holds.
In human terms, and as you already mentioned, "ai" favours the most common/valued paths.
Consequently, when an "ai" engine does not have a common path, it will try to extrapolate a solution, which leads to modelling outputs that resemble existing acceptable paths/patterns.
I.e. "ai" hallucinates!
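To make the T9/Markov analogy concrete, here's a minimal sketch (in Python, with a made-up toy corpus; real systems do this at enormous scale with far richer models) of a bigram chain. It can only ever emit word pairs it has seen, it favours whatever followed a word most often in training, and when it wanders off the common paths it strings together locally plausible fragments with no regard for overall truth:

```python
import random
from collections import defaultdict

# Tiny made-up training corpus; the words and sentences are arbitrary.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Bigram table: word -> list of words that followed it in training.
# Duplicates in the list make common continuations proportionally likelier.
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

def generate(start, n=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:  # knowledge horizon: nothing ever seen after this word
            break
        out.append(random.choice(followers))  # sample the statistically favoured paths
    return " ".join(out)

print(generate("the"))
```

Every pair in the output is individually "acceptable" (it occurred in training), which is exactly why the whole can still be nonsense: locally plausible, globally ungrounded.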
One of the other challenges here is that the way we've trained "ai" implementations/models has always been focused on externally controlled input. Or in other words, we pre-chewed the input for the machine. We gave it baby food. And because of that, to my best understanding at least, "ai" does not have a fundamental understanding of the relationships between different concepts.
"ai" knows how to express a mathematical function and "ai" can (re)produce functions but does not have a more fundamental understanding of what E=MC2 than you or I do beyond that it statistically has to do with the speed of light.
Now... With all of that said...
I think the "ai" companies let the "genie" out of the box too soon. Because money needed to be made. I don't think humanity was ready for "ai".
I think that if current day "ai" was transported to 2010 or even 2000, it would've stifled progress. Because like you and others said... It just optimizes what exists and without money driving the need to progress and innovate, I don't believe that creativity would have come to this point.
Maybe some hackers would... But that brings me back to the point of my disclaimer...
The magic thinking machine that is "ai" also seems to inhibit critical thinking.
If "ai" was more mature, then maybe. But looking at how management is already wooed by current "ai" and looking at the current state of "civilization", I have a really hard time seeing how "ai" would make a positive change at any point in the past.
Because after all, doesn't programming distill down to a lot of the below, repeated ad nauseam?
result = do_thing()
if result == condition_a:
    do_that_thing()
elif result == condition_b:
    do_other_thing()
else:
    do_some_thing()
Which drives home that "ai", as really advanced T9, is great at repeating things.
But "ai" can't synthesize anything genuinely new from them.
So we need to create content for "ai" to ingest, but if it can't synthesize and distill the underlying fundamentals, progress will always be restricted by our dependency on "ai".
And colonialism/capitalism says that progress isn't as important as making money.
So I have a hard time believing that we'd have done better 10 or 20 years ago than we do now.
There's just more content "ai" could be trained on right now.
Just my 2 cents.