Ingo Steinke, web developer

Posted on • Originally published at open-mind-culture.org

Why "AI" is the new "sustainable"

"Future workshop with AI," "creativity with AI," "blah blah blah with AI," – "AI" is everywhere: "artificial intelligence" is already the stupidest and most annoying trend in a long time! Facebook and Google also lured customers with free offers back then, but with social media, more humans were still involved. I've seen a lot of profiteering with naive hopes, but I think AI is the worst so far. Above all, I'm shocked by the extent of the blind enthusiasm for a very questionable technology.

Unpaid work and a blatant waste of energy

In contrast to nuclear power and genetic engineering, criticism of AI remains weak. Many supposedly alternative, environmentally friendly start-ups and common-good-oriented solo freelancers let themselves be infected by the general enthusiasm for AI and switch off their brains entirely, following the motto, "Electricity comes from the socket anyway." Artificial intelligence is not sustainable, neither socially nor ecologically.

AI also stands for precarious work and a blatant waste of energy. If we regularly use ChatGPT and similar services, we no longer need to worry about the ecological footprint of our search engine use, videos, and streaming series. We can also forget about avoiding waste and preferring organic products.

According to a 2022 study, a single question to ChatGPT consumed around ten times more energy than a Google search. "To train an AI model, hundreds of graphics processors, each consuming about 1,000 watts, ran for several weeks. 1,000 watts is as much as a kitchen oven," Greentech.live quoted Amazon's AI expert Ralf Herbrich as saying.
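To put those figures into perspective, here is a rough back-of-envelope estimate. All numbers in it (500 graphics cards, 1,000 watts each, four weeks of training, about 3,500 kWh of annual household consumption) are illustrative assumptions of mine, not data from the quoted study:

```python
# Rough back-of-envelope estimate of the energy used by one training run.
# All numbers are illustrative assumptions, not measured values.
gpus = 500                    # "hundreds of graphics cards"
watts_per_gpu = 1_000         # about as much as a kitchen oven
weeks = 4                     # "several weeks"

hours = weeks * 7 * 24
training_kwh = gpus * watts_per_gpu * hours / 1_000   # watt-hours -> kilowatt-hours

household_kwh_per_year = 3_500  # rough annual consumption of an average household
print(f"Training energy: about {training_kwh:,.0f} kWh")
print(f"Equivalent to roughly {training_kwh / household_kwh_per_year:,.0f} households for a year")
```

Even with these conservative assumptions, a single training run lands in the hundreds of megawatt-hours, roughly the annual electricity consumption of a small village.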

Nuclear power plants for Microsoft’s AI

"Microsoft has signed a 20-year purchase commitment for electricity from a decommissioned nuclear reactor. The electricity is needed for AI," as German tech magazine Golem.de recently reported.

(Image: "This is fine" meme variations generated using AI)

The business model of so-called social networks such as TikTok and Instagram already relied on the unpaid work of the creative industry and hobby content creators, which in turn was used unsolicited as fodder for the AI systems that are now making the jobs of creative juniors and interns redundant. But the self-exploitation of creatives is just the tip of the iceberg: "Workers in Kenya were asked to read sometimes traumatizing texts to optimize ChatGPT," reported Daniel Leisegang from netzpolitik.org e.V. in the article Precarious Click Work behind the Scenes of ChatGPT.

Don’t switch off your brain!

Worse than calculators, smartphones, and "social" media, artificial "intelligence" also endangers human thinking and coexistence. "Just ask ChatGPT" is a call for voluntary disenfranchisement and the abandonment of critical thinking. Search engines also deliver incorrect, outdated, or hard-to-understand results, and even experts can be wrong or bought by advertising partners. But trusting a technology you don't understand, that is known to invent alternative facts, and that is generously provided free of charge by a commercial company abroad is even more wrong than uncritically consuming Facebook and the BILD tabloid.

Unreliable in several respects

AI is not only unreliable in terms of content. There is also no guarantee that it will still be available next year in the same form, at all, or at anything other than a steep price. The sale of Twitter to Elon Musk and its descent, within a few years, into the dubious hate-speech and disinformation network X should vividly illustrate how little you should trust an online service you don't even pay for.

Twitter: the underestimated warning

"Following the takeover by Elon Musk, a fifth of users in Germany have left the short message service, a third are using it less," and complaints about hate speech and fake news have increased, Süddeutsche Zeitung summarized a Bitkom study.

Invented "facts," bans, and injunctions

Invented sources from ChatGPT have already led to embarrassing situations in court. Actual facts have led to a copyright lawsuit against Microsoft and OpenAI by the New York Times, which says it can prove that its elaborately researched articles were used without permission. Renowned specialist forums such as Stack Overflow have since banned the use of ChatGPT, so as not to jeopardize the quality of their content.

AI, creativity and art

AI is more exciting in art than in science, web development, or the advertising industry, whose influence on ChatGPT's default writing style is still unmistakable. It can also provide artists with new tools that are fascinating at first glance, even when used naively. A simple example was the interactive installation "Tübingen AI Center" at the Tübingen City Museum last year: a camera filmed the spectators and played back variations of their footage in various surrealistic styles.

Interactive AI art in Tübingen

When I wrote about generative AI about two years ago, focusing on image generation techniques, I thought the AI hype would soon be over. But it was only just beginning. I originally typed this article entirely by hand and without outside help, before adding external sources and translating it using artificial intelligence.

Progress, aberration, or just another tool?

I admit that I also use AI and waste energy on things I could do the analog way with pen and paper. I haven't opened a printed bilingual dictionary in years, and without DeepL and Grammarly, my English would probably be less eloquent and harder to understand. I also use completion suggestions when programming, but I don't let AI generate extensive code blocks on my behalf. And when I do ask ChatGPT, I have usually come so far with my own research that the stupid chatbot has never given me a single helpful hint.

I have nothing against progress in principle, quite the opposite! But I oppose the dumbing down of people, and I think some aspects of the current AI trend are dangerous steps backward. Let's criticize AI, but not for the wrong reasons. AI will not make us all unemployed, any more than photography did away with painting, or with art altogether just because all you have to do is press a shutter button, or than calculators made math superfluous.

I have been operated on in hospitals and doctors' surgeries, wear glasses and a hearing aid, use computers, and travel on electric express trains, of which there should be more. I wish we had an ecological turnaround in transportation, social justice, and a fast and reliable Internet. I wish society had set different priorities in recent decades.

AI is the new "sustainable"

"Sustainability" often isn't sustainable, but instead so-called greenwashing, i.e., a misleading advertising message. "Artificial intelligence" often isn't intelligent, but instead — you guessed it — a misleading advertising message.

"AI" is the new "sustainable": a euphemistic message that, soon enough, nobody will want to hear anymore when it has become completely meaningless and, above all, a sign of a lack of ideas and stupidity.

Use digital tools, but question them, don’t become dependent on them, and stop adding "AI" to every new product announcement!

This post was originally published as AI: what a stupid trend! "AI is the new sustainable" in my Open Mind Culture blog, where you can also find a German version.

Top comments (2)

Mike Talbot ⭐

Hmmm, I can see the arguments about cost. Still, there are enough open-source AI models available that I can ensure the ability to provide the functionality in the future without exposing myself to another organisation's charging model if I see fit (I don't; I use OpenAI, but I could replace it).

I use AI for a range of things that I could not afford humans to do, and that, before its advent, I wasn't doing at all. All those things are attempting to make the world safer, so I'm thrilled to embrace a technology that allows me to make strides in that direction. The solutions I'm talking about suffer very little from the risks of invention, hallucination or bullshit that less well-defined processes might face.

I guess I will add "AI" to many product announcements over the next year, and I'm not embarrassed to do so. I recognise there's a tendency to over-egg product announcements with AI, which now seems to be part of everything. Many of those great new products may not be as revolutionary as suggested, but not to recognise the fundamental shift in capability would be to massively underestimate one of the most significant changes in computing I've seen in my lifetime.

Ingo Steinke, web developer

Thanks for your opinion! Of course, I can see that LLMs and other AI have benefits, and as I mentioned, I use Grammarly regularly. However, I also use computers, which we could likewise argue are a waste of electricity, rare minerals, and unethical labor. What upsets me most is the kind of blind faith with which everyone seems to jump on the bandwagon of a new trend, even those who might not benefit from it at all, wasting time and money and switching their brains off.

I hope that you will develop helpful products with AI, and I wish you a lot of success doing so!