Rather than a retrospective on the year in general, this podcast looks at how I’ve been approaching learning AI, where the industry has gone wrong, and what to look forward to with AI.
audio links and subscribe here
AI Optimism or Pessimism
I sat down to think about what I’m optimistic about, and what I’m pessimistic about, with the Software Development industry’s use of AI.
I see a lot of benefits in using AI in the Software Development Process.
There are many risks involved in adding AI to products. We should rightly be concerned about those and work out how to test AI-enabled applications properly.
Many of the statements about how Software Testing has to evolve concern me, e.g. more technical skills, more focus on risk, pointing out flaws in the process, identifying architecture risk. I thought Software Testing already did this. So perhaps one good thing from AI is a re-evaluation of what part Software Testing plays in the Development Process, and of the fundamental skills and knowledge we need to test software effectively.
AI Productivity
I’m incredibly optimistic about what I can personally do with AI. It’s massively increased what I can get done, especially in programming, content creation, and marketing.
But companies are cutting jobs and blaming AI. Companies have always looked for scapegoats when trimming staff; AI is just the latest excuse.
If leadership truly saw their people as the reason for their growth, they’d use AI to amplify productivity and expand the team’s capabilities, not use it as a reason to shrink down.
“If companies viewed the staff as a core engine for growth, they wouldn’t want to get rid of people when AI comes in. They’d want to use AI to make the staff more effective and grow the company. Clearly they don’t.”
Many management teams see staff as a cost center, not as an engine for growth. Sales gets viewed as the engine of growth and is often paid on commission, so it seems cheap. The product is a ’thing to be sold’. And management… are always essential.
Some organizations don’t think “how do we help our people do more?” They take the position: “how can we do the same (or less) with fewer people?”
AI is the scapegoat. Not the cause.
Short-Term Pessimism
AI is overhyped. Leadership decisions are being made based on hope, not the technology’s actual capabilities.
In the short term, I’m a bit pessimistic, not about AI itself, but about how people (management) are choosing to react to it. Too many seem to want to use AI to restrict and constrain, instead of to grow.
But our work is already changing. New tools exist. We can’t afford to be complacent. We need to experiment and figure out where AI makes sense.
How Should We Use AI?
Not every product needs AI enablement and features. It adds too much risk, particularly if you do not know how to test the results.
- I do NOT use AI for direct human-to-human communication.
- If you get an email from me, I wrote it. If I get one from you, I read it, not an AI.
- I would not use AI for company procedures or performance reviews.
- Adding AI to these human, recognition-based workflows is a mistake.
- I won’t use AI as a buffer between people.
- Don’t use AI to hide. Engage.
I do use AI to summarize public podcasts, blog posts, etc.: stuff I choose to consume, but not personal, direct communication.
Management is about people, and putting an AI buffer between you and your team leads to bad communication and alienation.
“Your boss should already know what you’re doing. They should be helping you explain the value that you’ve added. This is a human process of recognizing value. It’s not something that we add AI into the middle of.”
How I learned AI
When I started playing with AI, I created a podcast summarizer. My process for learning anything new is to find a project and use the new technology to build something.
I wanted to consume more podcasts but couldn’t keep up. So, I wrote a tool using:
- Hugging Face libraries
- Whisper AI for transcription
- Different LLM models for summarization
- Prompt engineering
- Ollama for running models locally
- OpenRouter for connecting to cloud-based models
I did everything locally as much as possible, partly to learn the limits before relying on cloud or paid models. Only after learning the limits did I see real value in some of the paid services.
Don’t believe in ‘magic’ from big models until you know what smaller, free models can actually do.
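To make that concrete, here’s a minimal sketch of the kind of local-first pipeline I’m describing, in Python. It assumes the open-source `openai-whisper` package for transcription and a local Ollama server for the summarization step; the file name, model choices, and prompt are placeholders rather than the exact setup from my tool.

```python
# Minimal local podcast summarizer sketch.
# Assumes: `pip install openai-whisper requests` and an Ollama server
# running locally on its default port (11434) with a model already pulled.
import whisper
import requests

# 1. Transcribe the episode locally with Whisper.
model = whisper.load_model("base")                   # small model, runs on CPU
transcript = model.transcribe("episode.mp3")["text"]

# 2. Summarize the transcript with a local LLM via Ollama's REST API.
#    Long transcripts may need chunking to fit the model's context window.
prompt = (
    "Summarize this podcast transcript as bullet points, "
    "then list any tools or books mentioned:\n\n" + transcript
)
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=600,
)
print(response.json()["response"])
```

Swapping the local model for a paid cloud one is then mostly a matter of changing the endpoint and adding an API key, which is why learning the limits of the local setup first was so useful.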
Step By Step Adoption
Step 1: Chat Interfaces
At first, I used AI the same way most people do: from a chat interface, like ChatGPT or Claude. It worked but had limits. I wanted something embedded into my IDE.
Step 2: Embedding into the IDE
Now, I use a plugin called Continue, which works in both IntelliJ and Visual Studio Code.
- Initially connected to Ollama (local code models)
- Later I switched to OpenRouter for bigger cloud models
So now, I can:
- Jump into a chat interface via my IDE
- Still use chatbots like Claude/ChatGPT for generic things
- Never look at Stack Overflow anymore; everything’s in the IDE chat
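As an aside, the reason switching from local models to bigger cloud models is so painless is that OpenRouter exposes an OpenAI-compatible API that sits in front of many providers. Continue does this wiring for you through its configuration, but here is a hedged sketch of what that connection looks like underneath (the model id is just an example, not a recommendation):

```python
# Sketch: calling OpenRouter directly through its OpenAI-compatible API.
# Assumes `pip install openai` and an OPENROUTER_API_KEY environment variable.
# The model id below is illustrative; use any model OpenRouter lists.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

reply = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user",
               "content": "Explain the Page Object pattern in two sentences."}],
)
print(reply.choices[0].message.content)
```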
Step 3: Agentic Coding
I started using agentic tools, like OpenCode, on the command line. They can scan my whole codebase, not just what’s open in the IDE.
I use it to:
- Create a page object model for a URL (see the sketch at the end of this step)
- Write a test for that model
- Generate k6 scripts
- Create Swagger files
- Write scripts
- Write API abstractions
- … so much
Now instead of:
- Writing the test
- Building abstractions and models manually
- Iteratively improving
I let the AI generate code, then I come in and change or adapt what I need.
“I’m using it as a tool to help me create things faster. Then I come in and use my knowledge and experience to review the code or expand it.”
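To make the “page object model for a URL” example concrete, here is a hedged sketch of the kind of starting point I’d expect an agent to produce, written here with Selenium in Python; the page, locators, and class names are invented for illustration, not taken from a real project.

```python
# Sketch: the kind of page object and test an agent might generate as a
# first draft. The URL, locators, and names are illustrative only.
# Assumes `pip install selenium` and a matching browser driver on the PATH.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Page object for a hypothetical login page."""

    URL = "https://example.com/login"

    def __init__(self, driver):
        self.driver = driver

    def open(self) -> "LoginPage":
        self.driver.get(self.URL)
        return self

    def login_as(self, username: str, password: str) -> None:
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()


def test_login_shows_dashboard():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().login_as("admin", "secret")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```

The value isn’t that the generated draft is perfect; it’s that this kind of boilerplate appears in seconds, and I then apply my own knowledge and experience to review, refactor, and extend it.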
Step 4: OpenSpec for Better Requirements
OpenSpec helps generate and maintain evolving, up-to-date requirements documentation.
- Specs are continuously updated as requirements change
- OpenCode has documentation to work from, in addition to my prompts
When I use OpenSpec, I’m letting the agentic coding assistant write even more code than when I use it iteratively at the CLI.
This has helped me:
- Write a new application faster
- Keep requirements and documentation in sync
- Build automated test coverage incrementally
I’m not sure how well it would scale to a full team, but it works well for a solo dev.
But…
- AI sometimes makes strange design choices
- If you don’t review and refactor, your codebase can fall apart fast
- Human review is still necessary
AI saves time, but you still need fundamental core skills to evaluate its output and fix things when they go wrong.
I use AI for programming all the time, but not for actual testing.
Agentic AI in Testing
I’m beginning to look into Agentic AI tools specifically designed for testing.
I’m currently experimenting with AQE Fleet from Dragan Spiridonov.
Pros and Cons
Right now, I am optimistic about AI empowering individual development team members. It makes us faster, and the tools are getting better.
But I am pessimistic about the way companies choose to adapt to AI: focusing on headcount reduction instead of increased effectiveness and growth.
What is Testing?
“I keep seeing posts that testers need to evolve into quality engineers and risk analysts and customer experience advocates because we’ll be reviewing the systems for risk more than we’re testing them… but I thought we already did those things.”
People keep saying testers now need to be risk analysts, customer experience advocates, technical experts, etc. But that’s what good testers have always done. It almost feels like AI is forcing companies to rediscover what software testing really is, and always has been.
Join our Patreon from as little as $1 a month for early access to videos, ad-free videos, free e-books and courses, and lots of exclusive content.