
Theodor Heiselberg


Confessions of an AI Sceptic

First of all — yes, I’m a real sceptic. By heart.
To make my confession public: I don’t believe we’ll see unsupervised self-driving cars anytime soon either.
Sounds heretical? Then I’ll call you out as a true AI believer and kindly ask you to move along. Trust me — this post isn’t for you.

AI sceptic =/= AI hater

I use AI all the time, and most of the time I enjoy it.

It helps me in several practical ways. Recently it helped me find a cheap part for my old bike — loved it. And as a daily assistant it’s often genuinely useful.

1. As a Cognitive Partner

  • Peer review (not all my ideas are bulletproof)
  • Bounce ideas
  • Brainstorming
  • Exploring alternative approaches

2. For Knowledge Retrieval and Exploration

  • "Google" syntax I just can't remember
  • Scaffold simple routines
  • Research topics
  • Comparing options
  • Finding parts/products
  • Explaining unfamiliar concepts

So yes — I both use and like AI. Remember that.

My Company Took the Blue AI Pill

And down the rabbit hole I went!

Premise

Like the rest of the industry, we cannot ignore — or at least must investigate — Big Tech’s AI promises. This week my team asked me to take a deep dive into how AI could help make us more competitive.

The premise for this investigation is to explore how AI can help us:

  • Deliver faster
  • Deliver higher quality (At least)
  • Deliver sustainable solutions (longevity)
  • Deliver more value to the customer

And if possible, test whether we can establish a workflow where we deliver solely through spec-driven prompting.

Sounds like fun - right?!??

Reading the Documentation

Day 1:
Since I myself have never used agentic AI all the way, I spent the first day in pure chaos just getting an overview. THAT actually was fun!

Day 2:
I felt like I needed something more tangible to work on. So we decided to, agentically, set up GitHub Copilot in VS Code and create a todo app using our favorite tech stack. I just read the documentation, and bit by bit it all came together.

Day 3:
Meetings

Day 4:
I’m starting to realize that AI needs a lot of guardrails and instructions. There is even a standard for describing the various 'skills' the AI needs to solve specific tasks.

This realization raised a concern:

Are we heading toward endless meetings that sound like
“You didn’t specify the XYZ prompt/skill correctly.”

If coding isn’t my job anymore, then prompting must be the new craft to master.

And if defining skills, agents, memories, and instructions becomes the core work, then companies like ours should probably start systematically managing those definitions — likely as structured .md files.
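If those definitions end up in version control anyway, a repository-level instruction file is a natural starting point. A minimal sketch, assuming GitHub Copilot's `.github/copilot-instructions.md` convention (the rules and stack details below are illustrative, not a prescribed format):

```markdown
# Copilot instructions for this repository

## Stack
- .NET 10, C#, Blazor, Docker, Devcontainer

## Rules
- Do not modify the Devcontainer Dockerfile unless explicitly asked.
- Do not add NuGet packages without listing them in a plan first.
- Prefer minimal, incremental changes over large rewrites.

## Example
When asked to add a feature, first output a short plan,
then the diff, then the test that covers it.
```

Keeping these files in the repo means they get reviewed, versioned, and diffed like any other spec.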

Another thing became clear: the more examples you provide, the more precise the AI becomes.

That part I actually like. It suggests we could achieve more consistent solutions.

Still, I remain cautious. These systems are fundamentally non-deterministic, and I’m not convinced they’ll reliably stick to specifications.

Day 5:
Mostly meetings. But I did get to choose the exact tech stack.

  • .NET 10, C#, Blazor, Docker, Devcontainer
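For reference, a minimal sketch of the Devcontainer side of that stack (the image tag and feature ID are assumptions; check the current devcontainers catalog before relying on them):

```jsonc
// .devcontainer/devcontainer.json
{
  "name": "todo-app",
  // assumed tag; pick the image matching your .NET 10 SDK
  "image": "mcr.microsoft.com/devcontainers/dotnet:10.0",
  "features": {
    // Docker-in-Docker so builds can produce images inside the container
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "postCreateCommand": "dotnet restore"
}
```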

Saturday:
I felt like coding.

If ultra-productive AI agents really lead to faster delivery, then any workflow this agile absolutely needs a CI/CD pipeline.

For years I’ve used Nuke Build as my preferred pipeline orchestrator. However, its current trajectory seems uncertain, so I’m considering FAKE as a possible replacement.
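For context, the kind of Nuke Build definition I mean is roughly this: a minimal C# sketch with restore/compile/test targets (the target names and settings are my own choices, not a prescribed layout):

```csharp
using Nuke.Common;
using Nuke.Common.Tools.DotNet;
using static Nuke.Common.Tools.DotNet.DotNetTasks;

// Minimal Nuke build: invoke via the build.sh / build.ps1 bootstrappers
class Build : NukeBuild
{
    public static int Main() => Execute<Build>(x => x.Test);

    Target Restore => _ => _
        .Executes(() => DotNetRestore());

    Target Compile => _ => _
        .DependsOn(Restore)
        .Executes(() => DotNetBuild(s => s.SetNoRestore(true)));

    Target Test => _ => _
        .DependsOn(Compile)
        .Executes(() => DotNetTest(s => s.SetNoBuild(true)));
}
```

The appeal is that the pipeline is plain C# in the same repo, so it can be reviewed (by humans or agents) like any other code.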

The Wall

Then I started prompting.

And let me say this clearly: I’m not impressed.

If I weren’t already a seasoned developer, I might easily have trusted the output. That would have been a mistake.

The AI:

  • Corrupted my Devcontainer Dockerfile
  • Tried installing strange packages
  • Added unnecessary tools
  • Overcomplicated the build configuration

In short: it made a mess.

At this stage, reading the documentation and incrementally building the project myself seems significantly faster.

One might argue that I should have stuck to a more widely known and used orchestrator. But my counterargument would be simple:

If AI can only operate effectively inside the most common and well-documented stacks — and only with heavy guidance, guardrails, and extensive examples — then its practical value is far more limited than its evangelists suggest.

Next week

Next week I’ll continue the experiment. Once the project structure is properly established, I’ll test whether AI can beat me at adding features.

Until then:

I remain sceptical.

Top comments (7)

Gabriel Weidmann

Thanks for your insights. Just some questions:

Do I understand correctly that you use .NET 10, C#, Blazor, Docker, Devcontainer with the AI (also my favorite technologies), and that you use GitHub Copilot (in VS, VS Code, from the CLI, cloud)?

I also had the opportunity to evaluate roughly the same tooling (GitHub Copilot in VS (really bad) and VS Code (a bit bad)) and was also not really convinced; the built-in agent is not very good, but it also doesn't use the high-end models.

My private ChatGPT subscription includes OpenAI Codex, which delivers much better results. Also, a friend of mine told me that his company uses the heavy Claude models, and after setting up the pipelines and everything carefully, they see quite good results.

So I think you should use "better"/"heavier" models. Also, the C#/.NET area is probably not the AI's strongest, and Blazor especially is quite weak in my experience.

Theodor Heiselberg

Exactly my point. But maybe a more AI-mature stack is a better match. After all, the customer doesn't care about the technology; they care about the value the product brings. If I could actually get 10x delivery speed, the tech stack would be secondary.

Tomáš Baránek

Cannot agree more. But I am a Clojure hobbyist, and I do not code for a living. For me, the joy of painstakingly creating, debugging, and understanding 99% of the code I write is crucial. It is slower, yes, but without it, I would completely lose my passion.

klement Gunndu

The non-determinism concern is valid, but have you tried constraining outputs with structured schemas? It doesn't eliminate the randomness but narrows the blast radius to something testable.

Theodor Heiselberg

I'll try that approach today.
