
James Matson for AWS Community Builders


Dear developer, are you one AI model away from being replaced?

From nerdy excitement to existential dread, LLMs evoke so many emotions in developers. Is one of them fear? Should you be worried about your job?

Once a nerd

My day job is as a software engineer.

I design and build software and lead a team of people who do just the same. I don’t work at a FAANG or one of “the big” banks, so I certainly don’t think of myself as any kind of subject matter expert on anything in particular. I’m just a guy who really, _really_ loves software.

As a child of the 80s, I grew up around what could be thought of as the first wave of home computers. My childhood best friend was — for better or worse — a Commodore 64 and then an Amiga 500 (followed by the awesomely powerful Amiga 2000HD — 52MB of hard drive storage! Pwhoar.)

I started experimenting with code when I was around 10 years old. At first it was C64 BASIC, following arcane instruction sets laid out in cassette-tape-adorned magazines like Zzap!64, and later on the Amiga using long-since-dead languages like AMOS.


Yep. I really was that young once upon a time, never far from a computer.

My early teenage nights were spent leaving my Amiga on overnight to render frame after slow frame of seemingly impossible landscapes using Vista fractal generators. Better than any alarm clock was the lure of waking first thing in the morning to find the meagre 30 frames of animation had finished calculating, my reward a 10 second ‘flythrough’ of a mountain landscape.


Today? A blurry, low-resolution mess. In 1992? Magic. Source: http://www.complang.tuwien.ac.at/

I think I was fated to go into a career in technology. From the moment a Commodore 64 appeared — with all the mystery of childhood Christmas — under the tree, I was hooked.

I enjoy my job, a lot. I’m one of those people who feels blessed because whatever it is I’m doing in my day job, I’d get a great deal of enjoyment out of doing it “just because”. That’s a rare gift, and one I’m thankful for. It’s part of the reason that I write here. I don’t just love software and tech, I love talking about it, writing about it, just — being immersed in it. Naturally, as soon as the background hum started up about large language models and ChatGPT, I couldn’t wait to get my hands dirty checking out this new technology.

A conversation with the future

“You never change things by fighting the existing reality.
To change something, build a new model that makes the existing model obsolete.” — Buckminster Fuller

The first time I loaded up ChatGPT (circa 3.5) and asked it to create a method in C#, I was astounded. Within seconds it spat out a perfectly coherent, well-structured method to do something (I can’t remember what) and I sat there giddy.

I’d tinkered with all kinds of AI/ML services and models in the past but this — this was sorcery.

Over the next few minutes I engaged in an escalating game of ‘can you?’ with ChatGPT and, apart from the hiccups and hallucinations (libraries or properties that don’t exist, for the most part), it pretty much delivered every bit of C#, Python, CloudFormation, or anything else I threw at it.

I didn’t need to be a futurist to know a paradigm shift was being delivered to me straight through the unassuming chat interface of https://chat.openai.com/ (a website — for what it’s worth — that still ranks in the top 30 most visited websites in the world, ahead of stalwarts like eBay and Twitch).

After I got over the electric jolt of excitement and possibility, another feeling began to settle over me like a heavy blanket, slightly, but not completely, snuffing out those other feelings.

It was a sort of... anxiety, maybe? Was that it? Anxiety? Or fear, maybe.

Yes. That’s it. Fear.

It wasn’t particularly strong. It wasn’t as though I ran from the room screaming obscenities and setting stuff on fire in a mad panic, but the feeling was definitely real and it co-exists — even now — with those other feelings of excitement and possibility.

It dawned on me in those moments, and many since, that what I was looking at — LLMs trained on a corpus of knowledge I could never hope to hold with abilities of recall beyond any human — was a seismic shift in my chosen career. A shift the magnitude of which I may not fully grasp until I’m already in the middle of it.


A future of code generating code generating code generating code. Where is the human in all this? Source: https://creator.nightcafe.studio/

This isn’t IntelliSense or a glorified spellcheck. It’s not helping me find the right property or method from a list. It seems — at least on the surface — to be reasoning. To be creating. To be ‘doing’ of its own accord. It’s able to take simple, non-technical instruction and produce completely feasible code, the natural extension of which would be what? Codebases? Services? Frameworks? Entire solutions?

I’d read enough to know what’s at the heart of a large language model, but the illusion was still strong enough to unsettle me.

So there I was — and still am — a guy who loves to write software, who’s built his entire professional identity around an affinity for, and a talent with, technology, watching this large language model do with effortless precision a chunk of what I do.

Not everything I do — not by a long shot — and we’ll talk more about that in this piece, but to simply ignore the coming impact of large language models on the world of software engineering feels like a disservice to myself, to my future, and indeed the future of everyone that chose a career path like mine.

Even as I write this, I haven’t found a way to reconcile how I feel about what the future holds, or even what credence I give to that future being an entirely positive one. What I have done over the past few months is listen to and read a lot of people far smarter than myself, on both sides of the argument. So I’m here to give you what little wisdom I have on the topic, in the hope that it helps you clarify where you fall on the spectrum of a software engineering future that will have AI embedded in the very day-to-day fiber of how we work.

Attention is all you need

Before we settle into a discussion of just how fearful of current-day large language models (LLMs) the average software engineer should be (if at all?), it’s worth taking a quick trip down memory lane.

It might _feel_ to the casual observer as though LLMs (à la GPT, LaMDA, Falcon, BERT, etc.) just barreled into our lives like a token-predicting avalanche from nowhere, but the truth is that what we’re seeing today is simply the tipping point into the mainstream of a long history of improvements in machine learning, none quite so impactful to the LLM of today as the transformer.

In 2017, a group of researchers from Google Research and Google Brain (the organisation that parent company Alphabet has since merged with DeepMind to form Google DeepMind) published a technical paper titled ‘Attention Is All You Need’.

The paper (available here) has been talked about ad nauseam online so I’m not going to spend a lot of time on it, but it’s worth pointing out that it was this paper, and the revolution in deep learning that followed, which placed us exactly where we are today.

Until the advent of the modern transformer mechanism, deep learning had coalesced around two types of neural networks: recurrent neural networks (RNNs) and convolutional neural networks (CNNs). (Side note: you can check out another article of mine where I use an RNN to create music, with terrible results!)

With the development of the transformer model for natural language processing (NLP), the use of RNN and CNN mechanisms was dropped in favor of the concept of ‘self-attention’.


Recent timeline of deep-learning (source: Perspectives and Prospects on Transformer Architecture for Cross-Modal Tasks with Language and Vision)

This ‘self-attention’ mechanism in the transformer model doesn’t look at the relationship between words/tokens one part at a time, or care whether the words sit in the encoder or the decoder; instead it looks at all the tokens together, with the ‘self’ part focusing on the relationship between each token/word and every other token/word.
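To make that a little more concrete, here is a minimal sketch of single-head scaled dot-product self-attention in plain NumPy. This is my own toy illustration, not code from the paper or any real framework: every token’s query is compared against every token’s key, and the resulting weights decide how much of each token’s value flows into that token’s output.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X          : (seq_len, d_model) token embeddings
    Wq, Wk, Wv : learned projection matrices of shape (d_model, d_k)
    """
    Q = X @ Wq                                 # what each token is looking for
    K = X @ Wk                                 # what each token offers
    V = X @ Wv                                 # the information to be mixed

    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # every token scored against every other token

    # softmax over each row turns scores into attention weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)

    return weights @ V                         # each output is a weighted mix of all values

# Toy example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (4, 8)
```

Real transformers stack many of these heads, add masking and positional information, and learn the projection matrices during training, but the all-tokens-attending-to-all-tokens comparison above is the ‘self-attention’ the paper is named for.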

“The simple act of paying attention can take you a long way” — Keanu Reeves

The transformer concept has a lot more to it than that, but here are the important takeaways: it’s powerful, it’s scalable, and it’s largely responsible for the large language model driven world we’re all living through today. Grasping the enormity and complexity of some of these models is simply beyond any single person.

Hunyuan, the latest LLM developed by Chinese company Tencent, has roughly as many parameters as there are stars in the Milky Way galaxy: 100 billion.

Friend or foe?

So here we are. It’s 2023 and the battle lines of the AI debate are drawn, with experts on both sides and millions of ordinary people like you and me sitting in the middle wondering what’s going to happen.

As I mentioned earlier, the only way I’ve found to make some sense of the arguments amidst the daily deluge of AI/ML products and services and the absurd speed of advancements is to take careful stock of some of the arguments for — and against — the proposition that AI (generative AI to be precise) is a boon to humanity.

And there is no shortage of opinions being put forward and podcast conversations bubbling in every conceivable corner of the internet. Philosophers, technologists, politicians and engineers are all taking up the mantle, either championing the innumerable benefits of AI or warning us of the borderline apocalyptic nature of its avalanche across the social fabric of our world.


Google search trends for ‘generative ai’ over the past 5 years. Source: https://trends.google.com/

It’s with that in mind that I put down some of my thoughts here, as one lone developer trying to make sense of the wild west of generative AI and its impact on — selfishly — him.

Sunshine and Tokenisation

The argument that large language models are a boon to the developer is an easy one to make. At its core, a popular mainstay of supporters is to pitch AI as giving developers superpowers. Hey, as a developer myself, I don’t think there’s even a bit of hyperbole in that statement. As an avid user of GitHub Copilot, CodeWhisperer and other AI code ‘assistants’, I am here to tell you that having a widget bolted into your IDE that, with a little bit of prompting or the start of a method name, will spit out entire, reasonable functions is indeed powerful.

I treat ChatGPT and Copilot like a super-intelligent search engine tailored to my specific needs. Rather than having to scour Stack Overflow for answers to problems that are maybe 5% to 70% the same as mine, I get back a crafted response, code block or architecture suggestion that’s 99% exactly what I was looking for. Has it sped up my work? It depends entirely on the language or area I’m working in. For example, if I’m looking for some Python code or a PostgreSQL script to get me out of a jam, I can rely almost entirely on ChatGPT to give me what I’m looking for, with my role relegated to spotting any obvious errors and running it to check the outcome. It can be a bit more miss than hit when I’m working with Terraform, however.

Overall though, it’s easy to feel right now (and I stress, right now) as though being a developer has never been better. As I write code in Visual Studio Code, my AI Copilot is right there in the sidebar, ready to help me find bugs, suggest improvements, and help me look for ways to optimise functions I’ve written and think about technical problems in ways I’d never imagined.

But even when AI works this well, I can feel — mistakenly or not — that what I’m seeing is just a delightful seed that could bear terrible fruit in the future. What about when AI does much more than just play the part of my humble sidebar assistant? What about when it is the entire IDE? When it’s the idea, the development and the execution all in one? Is the role I want to play in this relationship one of just pasting stuff in and running it? Or doing validation of work already completed? Considering this possible future leaves me with all kinds of questions about what it really means to ‘create’ software. How much do I value a super power if it takes away the joy of making? Of building? Am I maybe the only developer who — like the painter at a canvas — is driven in part by the sheer joy of putting brush to canvas?

Supporters of code assistants like GitHub CEO Thomas Dohmke tend to see only the sunny side of this equation. I’ve been lucky enough to be in the room with Mr Dohmke as he’s espoused the coming AI-led revolution in ‘developer happiness’ and his positivity is infectious.

Just read his words in a recent interview from Madrona.com. “The generative AI-powered developer experiences gives developers a way to be more creative. And, I mentioned DevOps earlier. I think DevOps is great because it has created a lot of safeguards, and it has made a lot of managers happy because they can monitor the flow of the idea all the way to the cloud and they can track the cycle time….but it hasn’t actually made developers more happy. It hasn’t given them the space to be creative. And so, by bringing AI into the developer workflow by letting developers stay in the flow, we are bringing something back that got lost in the last 20 years, which is creativity, which is happiness, which is not bogging down developers with debugging and solving problems all day but letting them actually write what they want to write. I think that is the true power of AI for software developers.”

Well, that sounds OK, right? I like being happy. Usually it’s donuts that make me happy, but if AI can do it too then I’m here for it.

The statistics offered by Thomas and the team behind Copilot, one of the most popular AI code assistants, support the broad appeal of superpowered AI assistants for code generation. He has on many occasions talked about a productivity increase of up to 30% for developers who use Copilot, and believes that sooner rather than later, 80% of code will be written by the AI rather than by the developer.

And that’s just considering things from the ‘developer’ viewpoint. What about the barrier to entry for software creation itself? If that barrier was already weakened by the advent of tech like ‘low’ or ‘no’ code, then generative AI is likely to see the remaining barrier simply evaporate. It’s hard to argue against that being a ‘societal good’.


Acceptance rate of Copilot code recommendations over time. Source: https://github.blog/

If you remember, at the beginning of this piece I talked about how the developer’s role isn’t just writing code, and this is where the productivity argument comes to rest. The idea is that the writing of code is just a portion of what the developer does, and not even the best/most fun part. Outside of writing code for a couple of hours a day, the developer is involved in ideation meetings, design sessions, problem solving, collaborating with business users, wrestling with requirements and generally immersing themselves in all the stuff that comes “before” and “after” the code. To proponents of the “AI will complement” argument, these things are seen as the real job of the software engineer.

So let’s imagine that LLMs take away 50% of the coding task from engineers. On the surface — according to optimists like Thomas Dohmke — this resolves to simply being 50% more productive.

The ‘superpower’ effect.

More time for the collaboration, the human element — the ideation and connection. But is that really how things pan out? The reality of a 50% increase in automation could have two detrimental effects in terms of job stability and security. In one case, you get 50% more output, which means there are fewer new jobs for incoming software developers eager to get a break in the field. Alternatively, an organisation might realise it needs 50% fewer developers, so you’ll see those people shed and put out into a market where the organisations around them are also shedding engineers in favor of automation. At the moment, the cost to integrate AI assistants into your software development lifecycle isn’t cheap, but it’s also not exorbitantly expensive, and like all technology, it’ll just get cheaper and cheaper.

OpenAI CEO Sam Altman, in a recent podcast with Lex Fridman, responded to this potential problem of ‘10x productivity means 10x less developers’ by asserting that “I think we’ll find out that if the world can have 10 times the code at the same price, we’ll just find ways to use even more code”.

I like the sentiment, but he didn’t go on to provide his reasoning beyond believing we still have a ‘supply’ issue in software engineering (for how long?) and — in the same conversation — discussing the very real possibility that AI will obliterate jobs in some areas like customer service. Is it so hard to believe that it could do the same to your average software engineer? Even Sam concedes a little further into the interview that he wants to be clear he thinks these systems will make a “lot” of jobs simply “go away”.

We might very well simply evolve more use cases for code, but in a world already eaten by software — what’s left? A supercharged race to the bottom on cost as automation takes over in an end-to-end engineering landscape.

Right now — as of November 2023 — we’re not at a place where you could see any kind of seismic shift in the developer job market thanks to generative AI, simply because it’s still new, and the big tech companies are still in a flurry launching (and in some cases re-launching) products with AI built-in (I’m looking at you Microsoft and Google!). But remember, it’s taken from 2017 to now to get where we are. From that fateful paper on transformers, to a world where code just manifests on the screen thanks to a few choice conversational words.

Where will the technology be in another 6 years? Already we’re witnessing LLMs that can execute structured methods surfaced from natural language conversations, and hallucinations — perhaps the biggest problem facing the average large language model — are being reduced by using the models themselves in a verification/validation loop (LLMs validating their own outputs for truth).

The other day I saw a post on LinkedIn that proudly proclaimed that fear about AI taking jobs is utter nonsense. “The calculator did not replace mathematicians!” the author offers us with smug assurance. True, but I feel like this slightly misses the mark of what generative AI is — or at least has the capacity to be. AI isn’t the calculator, it’s the mathematician. Or it’s on a journey towards that.

If the engineering loop — from a technical standpoint — is synthesis, verification and repair, then large language models are already stitching together that loop with continually better results.
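As a rough sketch of what that loop can look like in practice, here’s a minimal illustration. The model call and verification step below are placeholders I’ve invented for the example, not any particular vendor’s API; the point is just the shape of the loop: synthesise a candidate, verify it, and feed the failures back in for repair.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Result:
    passed: bool
    feedback: str  # compiler output, failed test names, linter errors, etc.

def call_model(prompt: str) -> str:
    """Placeholder for a call to whichever LLM you're using (hypothetical)."""
    raise NotImplementedError

def run_verification(code: str) -> Result:
    """Placeholder verification step: compile, lint, run the tests (hypothetical)."""
    raise NotImplementedError

def synthesise_verify_repair(task: str, max_rounds: int = 3) -> Optional[str]:
    """Generate code, verify it, and feed failures back to the model for repair."""
    code = call_model(f"Write code for the following task:\n{task}")
    for _ in range(max_rounds):
        result = run_verification(code)   # verification
        if result.passed:
            return code                   # verified, hand it back
        # repair: give the model its own output plus the failure details
        code = call_model(
            f"Task:\n{task}\n\nYour previous attempt:\n{code}\n\n"
            f"It failed verification with:\n{result.feedback}\nPlease fix it."
        )
    return None  # still failing after max_rounds: escalate to a human
```

The interesting part isn’t the plumbing; it’s that the verification signal closes the loop without a human sitting in the middle of it.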

So where does that leave me? I find myself giving fairly low credence (trying — as physicist Sean Carroll might tell us — to be a good Bayesian) to a future where AI simply gives developers super-powers, we’re all stupendously productive, and that’s that. Maybe 20%? Equally, I can’t imagine that future iterations will simply do away with the millions of developers — myself included — in some sort of wholesale shift. That’s the view taken by Emad Mostaque, founder and CEO of Stability AI.

His take is ruthlessly simple; “There will be no programmers in five years.”

I don’t know that that’s likely either, so I’m giving that maybe a 30% credence. Maybe there’s a part of me that just hopes that isn’t likely? So you could call that a bias, but also — despite knowing humans are notoriously bad at predicting the future — I just can’t imagine things changing that fast.

In the future I give the highest credence (let’s say 70%), the role of the developer changes over the next 5 to 10 years, with code and ‘building’ things taking a backseat to caretaking the AI. In this world, those of us who are senior developers now may very well be the last of our kind. Children who are looking at the future of their education, or teenagers hoping to enter the software engineering workforce in the next decade? Those are the ones for whom I have the greatest concern. In my theoretical future, there may simply be no places for them to go in a world where today — as I write this piece — 40% or more of code on GitHub is AI generated.

This is not the end

So I find myself feeling somewhat pessimistic about where the long-term future of the software developer lies, but I also manage to pull myself back from the brink occupied by the true AI alarmists. If you’ve delved into that side of the argument, you’ll find people — far smarter than me — who are genuinely concerned about the advent and evolution of the AI age.

Thinkers of worldwide renown such as Sam Harris (neuroscientist, philosopher), Max Tegmark (physicist, AI researcher), Eliezer Yudkowsky (AI researcher) and more have sounded the alarm about what AI is already doing to our world, but more importantly what it will do when it evolves into what most people think it’s going to evolve into — artificial general intelligence, or ‘AGI’. It’s at this point that many believe the AI (or many AIs?) will achieve a level of sentience, of consciousness.

What will this mean? No-one knows, but that’s possibly the scariest part. You see, right now a large language model gives a pretty good impression of being ‘alive’ or ‘sentient’, but it’s not.

It’s a trick. An illusion. A facsimile. You can ask an LLM to write some code to take a person’s e-commerce order and calculate the discounts they should be given on certain products, and it’ll provide the code — but it has absolutely no knowledge of what a person is or what discounts are; it doesn’t understand anything. It doesn’t even understand that it is ‘something’, which fails the famous Thomas Nagel test of consciousness: to recognise something as conscious, you must be able to imagine that it is “like something, to be that thing” (for example, it is like something to be a bat, but it’s not like anything to “be” a rock).

(Sidenote: I’m fairly sure I’m conscious, but I have felt like a rock on some Monday mornings. Take that how you will….)
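To ground that e-commerce example: an LLM will happily produce something like the deliberately simple, hypothetical function below, and the code can be perfectly serviceable, yet nothing in the model knows what an order, a customer or a dollar actually is.

```python
def apply_discounts(order_lines, discount_rules):
    """Total an order after per-product percentage discounts.

    order_lines    : list of dicts like {"sku": "ABC", "price": 20.0, "qty": 2}
    discount_rules : dict mapping sku -> percentage off, e.g. {"ABC": 10}
    """
    total = 0.0
    for line in order_lines:
        discount = discount_rules.get(line["sku"], 0) / 100
        total += line["price"] * line["qty"] * (1 - discount)
    return round(total, 2)

print(apply_discounts(
    [{"sku": "ABC", "price": 20.0, "qty": 2}, {"sku": "XYZ", "price": 5.0, "qty": 1}],
    {"ABC": 10},
))  # 41.0
```

The function works, but all of the ‘understanding’ lives in us, the readers.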

The worry is that if (when?) these AI models gain an understanding of ontology and an awareness of themselves, they won’t exactly be aligned with what we’d consider human morals and ethics. The AI may have its own concept of ethics/morals that doesn’t align with ours, or perhaps something so abstract we wouldn’t even recognise it — and that could spell disaster for the human race.

It’s referred to as ‘the alignment problem’, and in Eliezer Yudkowsky’s words: “It’s (the AI) got something that it actually does care about, which makes no mention of you. And you are made of atoms that it can use for something else. That’s all there is to it in the end.

The reason you’re not in its utility function is that the programmers did not know how to do that. The people who built the AI, or the people who built the AI that built the AI that built the AI, did not have the technical knowledge that nobody on earth has at the moment as far as I know, whereby you can do that thing and you can control in detail what that thing ends up caring about.”

A future where people master machines, or machines master people? Source: https://creator.nightcafe.studio/

I don’t know if or when an AI will reach this level of sophistication, where it can actually be thought of as a sentient agent, but I give the likelihood of this happening soon (say, in the next 10 years) a low credence. Maybe 10%. Why, you say? Well, I tend to think that we — humans — don’t really understand consciousness as it is. If we don’t understand it right now, how can we hope that by creating larger and more powerful ‘next token’ predictors, consciousness will somehow manifest? I wish I could remember who to attribute the quote to, but I heard an AI researcher the other day say something that resonated with me:

“I could conceivably generate an atomically complete model of the human urinary system on a computer. Complete with an accurate model of its inner workings, its fibres, structure — the works. But doing so won’t result in my computer taking a piss on my desk.”

It’s a colorful way of saying that just because you model something in the abstract, it doesn’t mean you’re actually manifesting “the thing”. So we can keep building better and more complex large language models that do a great job of approximating intelligence and awareness, but that doesn’t mean we’re anywhere nearer to actually creating artificial ‘life’.

So all of this — this lone study of “what’s going on” in this landscape — is by no means over. The sands are constantly shifting, the growth absurd, but I think I’ve found a place to settle ‘my view’ as it relates to the dangers and fortunes of a world covered in generative AI, at least as it relates to the humble developer.

I think we’re in for a future of immense creative power, of productivity gains we’ve never seen before. An upheaval of what the traditional role of the developer is. Some of it will be amazing — miraculous even, but plenty of it will suck. There will be job losses, there will be a contraction in opportunities in the future. That’s my humble prediction. I’ll probably be okay, but I think plenty of people in the industry won’t be.

And perhaps most important of all — _selfishly_, to me at least — I’ll say a long, slow goodbye to one of the things I find most enjoyable about the act of being a software developer: writing that beautiful, beautiful code.

Top comments (2)

Rick Delpo • Edited

You briefly mention that we devs may be caretakers of AI in the future, but I did not hang on to every word in this article. I like the idea that we will be caretakers, as AI, in my opinion is only a tool. Remember the ole Garbage in, Garbage out principle? May I bring up a point that when doing a Google Search for a tutorial, MANY results are simply outdated code and not only are they outdated but also contain many omissions. So if Google is full of outdated code does this mean that AI also contains the Garbage in? My conclusion is that AI will need to be tightly supervised.

I wrote a Dev article about all the mis-information out there at
dev.to/rickdelpo1/13-reasons-why-d...

James Matson

Hey Rick, it's definitely an interesting topic to discuss, and there are a range of different views out there. Thanks for reading!