DEV Community

Cover image for AI is amazing — but let's keep our critical thinking on
shaman-apprentice
shaman-apprentice

Posted on

AI is amazing — but let's keep our critical thinking on

There is a lot of financial investment and hype about AI. I have heard many different assessments of the impact of AI, especially regarding the value of AI for the software development industry. Here are two examples:

"my basic assumption is that each software engineer will just do much, much more for a while. And then at some point, yeah, maybe we do need less software engineers."

(Sam Altman)

"We know with near 100% certainty that this bubble will pop, causing lots of investments to fizzle to nothing. However what we don’t know is when it will pop, and thus how big the bubble will have grown, generating some real value in the process, before that happens. [...] We also know that when the bubble pops, many firms will go bust, but not all. When the dot-com bubble burst, it killed pets.com, it killed Webvan… but it did not kill Amazon"

(Martin Fowler)

Even if you, as a developer, don’t believe in the long-term quality of AI-generated code - and even if you’re right - it will still affect you. Why? Because most decision-makers, up to the C-level, think otherwise.

So, I think every developer should look into the AI hype. In this blog post, I will first share my thoughts on productivity gains through AI. In addition, I want to point to some aspects of AI, I believe we should talk more about.

My thesis about productivity gains by AI

I believe, AI has the capability to make us more productive. It can support brainstorming, offer implementations and architecture ideas, or give hints for issues in code. It can speed up the actual coding with auto-completions and do stuff in agent mode.

However, the initial coding is only a small part of software development. I am curious to see how the maintenance and further development of AI-assisted code will turn out. Will technical debts become a huge burden, requiring time-consuming and expensive software modernization projects? Or will AI become so good that it can still implement new features in its own mess? I suspect the former.

From my point of view, it is still a good trade-off. I believe, the biggest hurdle for software development is to create useful, impactful software and evaluate it as early as possible in the market. Once, the market is better understood and software is adopted by customers, software modernization projects are doable and economical viable. AI helps us, to do the first part faster, which is very valuable. For the second part, it probably is also a helpful tool.

Further resources I recommend reading:

  • In MIT Economics, Nobel-winning economist Acemoglu "estimates that over the next decade, AI will produce a “modest increase” in GDP between 1.1 to 1.6 percent over the next 10 years, with a roughly 0.05 percent annual gain in productivity."
  • According to DORA 2024 Accelerate State of DevOps "Early adoption is showing some promising results, tempered by caution." Organization-level performance is estimated to increase 2.3% per 25% increased AI adaption and team-level performance to increase 1.4% per 25% increase in AI adoption. However, software delivery performance seems to be negatively impacted by AI adoption. Effect on product performance seems to be unclear.
  • Based on a randomized controlled trial study AI might not increase performance at all for complex systems with experienced developers who are already familiar with the code.
  • Results of latest Stack Overflow's 2025 Developer Survey show increase in technical debts due to AI usage.
  • No, AI is not Making Engineers 10x as Productive

Points I think we should talk more about

Let's dive into points, I think we should watch critically.

Availability and purchase price?

AI usage for consumers is currently unbelievable cheap, as multiple companies are competing for market share. I assume that this is only a brief, temporary situation. Based on an estimation in cnbc Microsoft has lost $5 billion in 2024 with OpenAI. At some point, they will monetize their investment or stop offering it. There Is No AI Revolution is the most negative article I found, as a contrast to all the positive hype, expecting the latter.

Indirect costs of AI investments?

In addition to the direct costs, I also see indirect costs.

Resources could have been spent elsewhere

Research time and funding are limiting factors in the software industry. Putting a lot of time and money into AI tooling and adaption takes away resources from other projects. Are those AI initiatives really the best investment of time and budget? Or are we neglecting more impactful alternatives out of fear to miss the AI train and be left behind?

As a side note: Personally, the fear of being left behind, if we don't adopt AI right now, is fascinating to me. The barrier to entry for AI tools is amazingly low. Just write what you want to your AI chat. Read one or two blogs about best prompting practices, maybe watch a few videos and experiment what gives you the best results - that's it.

Not all (AI) features are good features

Some features just clutter up the user experience and negatively impact product value. For example, when I am looking for a new microwave, I immediately rule out those with more than 2 buttons; I just want to heat up my stuff and having to go through multiple button combinations is annoying.

AI opens the door to a lot of possible features. But just because you can integrate AI, doesn't mean you should. Let me point to Firefox's AI-enhanced tab groups. For me, it adds exactly zero value. On some computers, it led to massive CPU usage and energy drain. That does not only clutter up the user experience but actively hurts the usability.

Duolingo's AI-first bet

In April 2025, Duolingo announced plans to become an AI-first company. The immediate impact was a big increase in share value and a media shitstorm (designerly.com). Since then, the increase in share value has reverted, but the real revenue and user count seem to be stable, for now. How much of future change in revenue, user count, share value etc, can be tracked back to this decision is hard to tell. Certainly, the product decision is a fundamental change and it is unclear whether it will increase profitability, or cause users to leave the platform in the long-term. I prefer a more risk-less product development, when you already have a very successful product.

Security

AI agents enable a complete new spectrum of security attack vectors. For example as reported by msn.com, emails can contain invisible font instructions to tell the user, that his password was leaked and he should go to a malicious site and reset his password there. The user won't see nor read it. But an AI summarizing this email might alert the user to do as instructed. Even worse, if the AI can act on its own, it might directly leak your password as instructed. The underlying core problem is also called the the lethal trifecta for AI agents.

Besides some beliefs, AI-generated code is not free of security issues - quite the contrary: As summarized in this security incident a vibe coded feature led to a malicious release of well-established Nx build tool, leading to at least hundreds of infested computers.

Environmental and societal costs?

Research on the environmental impact of AI is in its early stage. But it is undeniable, that AI training and usage consume a lot of energy, critical raw materials and water, as for example reported in MIT News.

Research on the societal impact of AI is also in its early stage. But you don't need much imagination to see that misinformation strengthened by AI can disrupt democratic processes or that the accumulation of capital and power in the hands of a few AI companies can worsen global inequality. See for example originality.ai/blog for more on it.

Legal risks

In a landmark copyright case, Anthropic has agreed to pay $1.5 billion to authors as compensations for using pirated content as training data. This settlement is focused on the pirating part and does not resolve the general copyright issue around generative AI. But it does point toward a direction, that compensation for copyright is needed and that it will costs billions of dollars.

Another open question is whether the creator of an AI can be held legally accountable for the output of the AI. A Professor from Aarhus University shares in this article his concern "that probability of the hypothesis of generative artificial intelligence chatbots fueling delusions in individuals prone to psychosis being true is quite high. If it is indeed true, we may be faced with a substantial public (mental) health problem". A recent, most tragic case is a lawsuit filed by the parents of a 16-year-old against OpenAI's ChatGPT. Their son chatted about his suicidal condition for two weeks. He got reinforced by the chat and ChatGPT even gave further tips how to take his life. The boy sadly, finally took his own life.

With all those challenges and uncertainties, why the mega hype?

I value AI as another tool and look forward to working with it more in the future. Am I afraid of missing out on something important, or am I afraid of being overtaken by AI technologies? No, I view the hype with great confidence and distinguish between a "good" and a "bad" hype.

The "good" hype

Products with AI's current capabilities already have the ability to earn a lot of money:

  • I use AI almost daily at work and have experimented with agent coding. It already makes me more productive and I am curious about future improvements. So, I happily pay for the services. Right now, I am paying a small amount of money for it. Imagine at what price I, you or your company would stop paying for the services and hire expensive employees instead?
  • semruch.com predicts AI search visitors to surpass traditional search visitors in 2028 and the average AI visitor is worth more than 4x the traditional organics search visitor. To make these figures more tangible: Google ad revenues from search alone in 2023 is reported by oberlo.com to be over $175 billion.
  • Imagine, AI can analyse biological patterns better than humans and help with research for new medicine. Maybe even cure more types of cancer. As summarized in Wikipedia the healthcare industry is one of the world's largest industries consuming over 10 percent of GDP of most developed nations - a huge financial potential.
  • ...

The "bad" hype

My thesis is, that the great progress of LLMs arrives at a time, when many managers are frustrated that their software projects are neither on schedule nor within the budget, while developers are a major cost driver. That is why there is a great desire for LLMs to actually get the job done much cheaper and better than the current status quo.

In addition, our free market demands growth. AI promises exactly that, making it very attractive to investors.

As a further booster, content creators on social media earn money by content views. Click-baits like "This new model is so unbelievable, all developers will be unemployed soon" seem to generate a lot of clicks and can be done again almost every week with the next unbelievable model.

All this over-fuels the hype surrounding the promises of AI.

Final words

In my opinion, AI offers amazing new capabilities and we would be fools to not embrace them and experiment with them. However, AI is not a free, magic tool, which will make all our expertise worthless.

What are your thoughts? I am looking forward to your comments!

Top comments (0)