So we've had our fair share of an AI hype cycle by now. It's still something we hear about every day, and companies are out there trying to out-innovate each other, even if only in the marketing department.
Interestingly, most generative AI products - which are all the rage these days - focus on taking away exactly the tasks that brought joy, tasks that were underappreciated, underpaid and exploited anyway. The actual writing when writing, the actual craft when creating art, the actual playing when making music. What's left when you take the skill and craft out of art? A blunt shell. A quick dopamine hit for a few minutes - if even that. Maybe we should stop for a moment and ask "Why?". Why are we doing this? What are we winning?
"Productivity!" I hear some scream. "I'm 10 times more productive thanks to AI," they say. Even if that's true, what for?
Do we need 10 times the articles to read? Do we really need 10 times the songs to listen to? Do we need 10 times the images to look at? Do we need 10 times the emails, spam, websites or even code? And if not, what does the productivity gain do? Destroy 10 times the jobs? Kill 10 times the joy of creating art? I don't know the answer, but I've never heard one from those who claim to be 10 times (or even 100 times) more productive.
Another thing: I don't believe those tools are really that powerful. I see how easy it is to hit the limits of generative AI tools. I don't even have to try that hard to receive really bad code and get myself into a loop of pointing out errors, the AI replying "I'm sorry, you're right..." and repeating the same useless code again. And that's only one example. You've probably heard about generative AI being unable to count the 'r's in "strawberry", or similar failures. Those problems are not the exception; they are the norm.
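For contrast, the task that famously trips up token-based LLMs is a one-liner in ordinary code. A minimal Python sketch (the helper name is my own, purely illustrative):

```python
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter in a word - trivial for code,
    yet a known stumbling block for LLMs, which see tokens, not letters."""
    return word.count(letter)

print(count_letter("strawberry", "r"))  # prints 3
```

That a deterministic one-liner outperforms a billion-parameter model here is exactly the kind of mismatch the paragraph above is pointing at.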
This is where I start to ask myself how useless someone must have been if they get 10 times more productive thanks to AI. Don't get me wrong: I believe there are fields where AI really can help you sift through huge amounts of data or improve certain processes. But those applications usually don't require an LLM.
Let me ask: What are we even doing?
We let the algorithms and models take over the things we like, the things that make the human experience valuable. All in the name of productivity. And we use more and more energy to run those things, even though we are burning away humanity's future already, without AI. Speaking of the climate: there are those who think we can and should build huge machines to capture carbon and store it somewhere. Since those technologies only make sense if we don't burn fossil fuels to operate them, we use solar cells to generate the energy. So, let me summarize: we use solar power to capture carbon from the atmosphere and bind and store it somehow. Congratulations: you just invented a tree.
What are we even doing?
Can we all just stop and start asking "Why" more often? Maybe then we'd find some better ways to exist. If we get there, I'll start to be optimistic about technology and AI. Until then I'll stay a skeptic, and maybe even a bit cynical.
Top comments (2)
Sounds to me like you need to just start using the tools and you'll find out what they can actually do.

For example: how many developers have tons of ideas they can never get done? AI makes it possible. Complex things that would take you weeks or months to learn, you can just ask about and have at your fingertips - with some caveats, but damn dude, it's incredible.

Of course spammers gonna spam, and I guess haters gonna hate.
The abundance of posts on "how to write better prompts for LLMs" surely shows that the promise of interacting with computers using natural language is either miserably failing or deeply misguided. If you have to describe the specific language needed to get the required results from a system, you may as well be teaching a programming language - something we already have to tell computers EXACTLY what to do whilst avoiding all the imprecision and ambiguity of spoken language.
IMHO the whole idea of 'development using AI' is fundamentally flawed.
Humans seem to want to interact with machines using natural language because it 'feels right' - but maybe the whole idea is wrong... just another form of skeuomorphism. Familiar, sure - but maybe not the right tool for the job... and potentially limiting.
Human language developed over many thousands of years for a specific purpose: to allow humans to communicate with other humans. Computers are a wholly different thing that neither think nor 'understand' like we do. I think some form of AI definitely has a place in improving our interactions with and use of computers, but from my experience so far I'm pretty sure LLMs (glorified autocomplete) almost certainly aren't the way forward.