It's been a while since my last post. One thing that has been occupying my mind lately is AI. I think this post won't have much structure — just some ramblings about AI.
In 2017 — almost ten years ago, how time flies — I did some further education in AI. At that time, deep learning was the big thing. Hardware advances had finally made deep neural networks with multiple hidden layers practical, realizing an idea that goes back to the 50s and 60s, starting with the perceptron. I was fascinated — by the possibilities, but also by the relative mathematical simplicity. Specialized, trained networks could, for example, find cancer in medical imaging. A neural network doesn't get tired, isn't distracted by private issues, and doctors work too many hours as it is.
Now, LLMs have entered the world. I still think they're not smart, and I'm questioning the future of software development — and maybe even the future of the world and humanity. I use LLMs mainly for learning. My favorite use case is using them as a sparring partner: discussing ideas and approaches. One thing I recently started doing is setting up "think tanks". For example, I tell ChatGPT to assemble a group of software architects and experts with different views and then let them dissect an idea of mine, or come up with a solution of their own. I learn a lot that way.
Not having to write boilerplate code sounds good on the surface. But I think I sometimes need the "downtime" of doing something relatively "brainless" for a few minutes — to recharge, or to let my mind wander and come up with new ideas. And if it's more than a few minutes of repetitive work, I automate it — the old-school way — as many of us did before AI was a big thing.
I'm ambivalent about AI when it comes to juniors. On the one hand, it allows them to punch way above their weight in many ways — speed, sometimes even quality. On the other hand, junior positions seem to be shrinking. I've heard tech leaders say we'll only need senior developers to check and fix what the AI produces. That raises questions: Is that work senior developers actually want to do? How do developers reach a senior level if we stop hiring and training juniors? Or, is the idea that AI won't need checking and fixing by the time the current generation of senior developers retires?
It feels empty. I'm pondering whether my problem is that I always loved the process, the craft, and never the product. Should I just find a way to use AI to make a lot of money, jump off the train, and follow the craft without any pressure? And if I did, would I still want to?
I've been using ChatGPT for a while now, and I've started dabbling with Cursor and agentic AI in other IDEs as well. I see the benefits — the speed, the productivity — but I'm not sure whether the joy of developing can persist in the long run.
Maybe that's the real question AI forces us to ask.
What are your thoughts?