DEV Community

Ingo Steinke, web developer
Artificial "Intelligence" and Controversial Ideas about Future Technology (2026 Update)

This article about AI was originally written and published in 2022, when the LLM-based genAI hype was just beginning and we had not yet heard of Claude and the later alternatives. I published it just a few days before the hype around ChatGPT, which is discussed in the final chapter, last updated in 2026. I don't fear that AI will make my job obsolete any time soon. The hat-swap hallucination shown in the iconic cover collage and the fundamental issues of machine learning models are still relevant today.

Using AI as a Creative Coder

This article is intended for developers with some prior knowledge of web technology. A less technical article with a greater focus on computer-generated art, cyberpunk, and augmented reality can be found in my open mind culture weblog, where I have since published various posts about genAI and its implications for creativity, coding, and related jobs:

Related DEV posts include 8 Alternatives to AI for Coding and Creativity and If Writing still Matters, How to Do it Right and Avoid AI Suspicion?, discussing genAI aesthetics and why human content keeps thriving in communities like DEV.to.

The Swapped Hat Hallucination

The cover image features original photography side by side with seemingly similar imagery created by a computer. Note that the artificial versions of myself are wearing a black hat that does not exist in reality, but only as a detail of the street art mural I had been standing next to.

Image collage showing a man next to a graffiti of a character wearing an orange hat on the left, while on the right it is the human wearing a black hat not present in the original image.

^ Detailed visual comparison of the swapped hat hallucination.

Discussing the Future of the Web and Technology

In the ongoing discussion about new technologies, some of which are subsumed under the umbrella term "Web3", I try to keep an open mind about legitimate and helpful use cases, as I had been fascinated by early metaverse and cyberpunk ideas in science fiction literature. I also loved the idea of an open, decentralized, interconnected global network: the internet, or what the internet was before it became more commercialized and centralized. Its "western" zone is now dominated by an oligopoly of companies mostly based in California, USA. This might be better than the censorship and state control in some other parts of the world, but it is still far from the original concept.

Intelligence vs. Machine Learning

Another popular misconception, apart from "Web3", is "AI" or "artificial intelligence", a misnomer for applications that make use of machine learning. The latter expression makes it more evident at first sight what is actually happening: much like a search engine or a diligent student, "intelligent" applications reproduce and remix existing knowledge and artwork. Consequently, they also reproduce misconceptions, stereotypes, racism, ableism, and other biases that often go unknown and unnoticed.

In the current heated discussion, I wonder why so many fellow developers keep getting overly excited about the new features, fearing they will become obsolete due to new technology, or otherwise criticizing OpenAI for the wrong reasons, becoming easy prey for the latest fad's bootlickers.

"Luddites"? Getting Upset for the Wrong Reasons

Despite all the other things to get upset about (like climate change, war, poverty, politics, and pandemics), and despite so many positive advancements on the other hand (eco-tech startups, non-profit communities, diversity, and some specifically developer-related advancements like new CSS features and coding assistance based on machine learning), I decided to dedicate one blog post to the latest hype, adding my own limited experience and some inspirational artwork. Fast-forward from 2022 to 2026: AI has become mainstream, being skeptical no longer makes you a "luddite", and everyone seems to know what "vibe coding" means.

Let's have a look back into history though, when many of today's commodities were nothing but science fiction:

Cyberpunk Literature

Screenshot of image search results for early cyberpunk literature

There are several blog posts about futurist novels that include descriptions of virtual reality and global communication, like The Sheep Look Up by John Brunner and Snow Crash by Neal Stephenson.

While some of our current technology and future research might have been inspired by literary sources, it falls short of its potential.

As I replied to A.J. Sadauskas on mastodon.social,

the current Web3 enthusiasts offer no efficient alternatives to the commercial and centralized Web 2.0, while the fediverse and IndieWeb movements focus on the decentralized and robust principles that the internet was built upon and which made email, usenet (NNTP, described in RFC 977 in February 1986), HTTP, and HTML such a success in the first place.

Just to mention some noteworthy reads again: the web has no version numbers, and "Web3" is going great (not!)

Is Artificial "Intelligence" dumb and biased?

Machine learning refers to the fact that we can train algorithms on input data, which not only accelerates the development of complex applications, but also creates interfaces that generate unexpected output in a way that makes them seem sentient and intelligent.

But feeding large amounts of mainstream culture's output into machines tends to reproduce undesirable prejudice and bias found in our society and our past to present culture.
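To make that concrete, here is a deliberately minimal, hypothetical sketch: a toy next-word predictor that can only echo the word pairs it was trained on. The corpus, function names, and example words are invented for illustration; real language models are vastly more complex, but the underlying principle is the same: whatever skew is in the training data reappears in the output.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which across all training sentences."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, following in zip(words, words[1:]):
            model[current][following] += 1
    return model

def predict(model, word):
    """Return the most frequent follower seen in training, if any."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# A deliberately tiny, skewed corpus: the model "knows" nothing beyond it.
corpus = [
    "the developer fixed the bug",
    "the developer fixed the tests",
    "the designer chose the colors",
]

model = train(corpus)
print(predict(model, "developer"))  # "fixed" - only what the corpus taught it
print(predict(model, "spaceship"))  # None - no data, no "knowledge"
```

The model never invents an association it has not seen; it only remixes its input, including every bias the input contains.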

This phenomenon is not limited to machine learning, but when it manifests in code and machines built and documented by human teams, it might be easier to point out and adjust.

Read how Dr. Joy Buolamwini is fighting bias in algorithms and how to make technology serve all of us, not just the privileged few.

Digital Artwork thanks to Machine Learning

The AI art movement, on the other hand, has already created tons of detailed, high-resolution images, either photo-realistic or looking like handcrafted paintings, often creating an "uncanny valley" effect due to misplaced details and seemingly unimportant errors that no human artist would ever come up with. While many digital artists seem to favor a sinister, dystopian gamer aesthetic that has long been popular on platforms like DeviantArt, other artists, like my friend Andy "Retinafunky" Weisner, experiment with said flaws and different aesthetics.

Screenshot of AI artwork by Retinafunky on Instagram

Criticizing AI Art for the wrong reasons

Some traditionalists criticize AI art for the wrong reasons, denouncing it as plagiarism or as not being real work, as they fail to see the innovation and creative effort.

Looking back in history, many famous artists had assistants; most of them stayed anonymous, while some became famous themselves. So we might conclude that you can be a brilliant artist without painting every single brushstroke yourself.

When photography was not Art

Photography, now an accepted art form exhibited in galleries, was criticized in the beginning in a discussion much like many of today's controversies about algorithmic art and AI art. When photography was not art, "the fear has sometimes been expressed that photography would in time entirely supersede the art of painting. Some people seem to think that when the process of taking photographs in colors has been perfected and made common enough, the painter will have nothing more to do."

Just like with photography, to produce a stunning work of art using modern algorithmic tools, you can either be extremely lucky, or you have to experiment and be inspired, taking your time to learn the tools and parameters, evolve and improve your art over time.

But what are my points about AI art, then? I fear that with AI art, we are passing too much power to algorithms, thus losing control and losing touch with the real world: nature, people, and social, ethical, and environmental topics.

Nothing left but random squares?

Here is an image of myself in front of a poster showing the United Nations' 17 Sustainable Development Goals (SDGs). All variations of this picture created by the public version of OpenAI's Dall·E distort the icons and text and replace them with illegible symbols or random letters.

I fear that this might sum up how the current "AI" systems view our world, and it shows that they are either not that intelligent after all, or that they prioritize style, aesthetics, and presentation so much higher than content and context that the relevant meaning is taken out of our culture.

But then again, this proves the point that we still need actual artists, and that Dall·E, Midjourney, and other tools are power tools, but still not very useful without human interaction.

Fast-forward to 2026 again: AI is much better at text recognition and generation. Dall·E has been integrated into ChatGPT, and Google's Gemini offers a convenient contender known as Nano Banana. DEV's Meme Monday frequently features AI-generated artwork, but its cliché cartoons are regularly dismissed as not being funny.

I think that time will tell. When practiced seriously, AI can be a valuable and innovative tool for digital artists.

Other forms of Digital Art

There is generative art, creating images, objects, or text, done by creative artists like bleeptrack. And there is augmented-reality art, where actual real-world artworks seem to come alive through AR apps like Artivive when you look at an enhanced painting.

I will follow up and dig deeper into the details of the artistic aspects of digital technology in posts at open-mind-culture.org.

Now let's revisit an aspect of "Web3" that I am 99% critical about: NFT and the energy consumption of some of the latest trends in information technology. Training a single AI model can emit as much carbon as five cars in their lifetimes. It seems hard to calculate the CO₂ emissions of NFT "mining", but the popular cryptocurrency Ethereum uses about 31 terawatt-hours (TWh) of electricity a year, about as much as the whole of Nigeria, according to an estimate based on the Ethereum Energy Consumption Index.

NFT: a Good Idea Misused by Scammers?

You may have seen the various images of a "bored ape" cartoon character, often used as a profile picture by people who do not even create any artwork whatsoever, trying to profit from the hype around blockchain, cryptocurrencies, and non-fungible tokens (NFTs) by investing a lot of money in hope of a return on investment.

Bored ape images and a news headline

Wasting energy to deceive people with NFT and the Metaverse

NFTs and cryptocurrencies allow people to participate in the profitable art market and other investments without having a bank account or a credit card, and without being recognized by established gallery owners. But NFTs also created a large black market to scam aspiring artists, developers, and other hopeful individuals. Mining cryptocurrency using energy-consuming calculations in a blockchain is a waste of energy that could be put to better use, even more so in the face of human-made climate change already starting to destroy our planet.

"Recreating" extinct species and places in a virtual environment, like Tuvalu "uploading itself into the metaverse", only accessible through visual technology (headsets, cameras, displays), does not help either, and the current pitiable state of the so-called metaverse makes it look even more ridiculous. It reminds me of the final scene of the dystopian science fiction film Soylent Green.

A Second life as boring as the first one

A virtual reality does have its benefits, but unlike a real environment, it does not provide sunlight, fresh air, or delicious food. And have you ever tried to dance or swim in a virtual environment? You might remember Second Life: most people seemed to enjoy creating a second life as boring as their first one.

The same goes for #ChatGPT and visual image generation: many people get excited about output that looks impressive at first sight. But just like the images' uncanny-valley artifacts, generated code deserves a good, close look, unless all you plan to do is submit a solution to a coding kata.

Summary: What stays beyond the Hype

"Web3" missed the point of its alleged goals like decentralization, equity, and helping to create a better, more diverse, and creative world through digitization.

Web3 enthusiasts wasted energy and compromised the security and integrity of data and money. Early adopters gave up freedom and agency, getting manipulated by algorithms and greedy companies. The parallels to the later genAI hype are obvious.

"AI" in its current form is still a hot hype at the time of writing, but it's already clear that there are useful applications despite the unreliability, with or without an anticipated "AGI" breakthrough.

Copilot, Claude, and ChatGPT as tools for Developers

We can use artificial "intelligence" as a tool: as a tool for artists, as a tool for copywriters, and as a tool for coding, too. I admit that I have been using @tabnine, GitHub Copilot, JetBrains context actions, and static code analysis like ESLint, Stylelint, PHPStan, and CodeSniffer. I have also used Grammarly to improve my writing, especially when posting in English, which is not my native language. Using all of those tools at the same time has saved me some debugging detours, some keystrokes, and some Stack Overflow searches for generating boilerplate code and generic documentation. I will also evaluate how ChatGPT might come in handy. But I don't fear that any of those tools might seriously put my job as a senior developer in danger.

Reviewing this post four years later, I rarely use Grammarly, DeepL, or Tabnine any more in 2026. I use AI for what it's good at: quick analysis of large texts, rephrasing established knowledge, and adapting boilerplate code when starting a new project. My follow-up, 8 Alternatives to AI for Coding and Creativity, originally published in 2025 and updated regularly ever since, elaborates on what people use AI for and what might be helpful alternatives to mainstream AI services in 2026 and beyond.

Updates in the AI Scene in 2025, 2026, and 2027

In the subheadline, I replaced Tabnine with Claude, and I might also have mentioned Cursor, Windsurf, Perplexity, Google NotebookLM and AI Studio, or specific models like Anthropic's Claude Sonnet and Opus. Names change and alternatives emerge, though. Ecosia's AI, an alternative to Google's AI mode that protects privacy and operates more energy-efficiently at the cost of less accuracy, might be an option for everyday use, while early adopters explore cutting-edge agentic systems and domain-specific local LLMs at the cost of computation time and money.

There are hopeful digital innovations that might help us build a better tomorrow despite it all, so let's take our time and find out how!

Top comments (2)

val von vorn

Is Artificial "Intelligence" dumb and biased? It is - on purpose. Someone profits.

oOosys

Yes ... this aligns perfectly with my own experience: "But feeding large amounts of mainstream culture's output into machines tends to reproduce undesirable prejudice and bias found in our society and our past to present culture." A reason why I have quit using ChatGPT after the AI helped me to improve my language skills and clarity of mind to a level making it possible to see clearly its limitations of not being able to provide other help on starting the oOo journey as encouraging to follow this path to success.