Peter is the former President of the New Zealand Open Source Society. He is currently working on Business Workflow Automation, and is the core maintainer for Gravity Workflow, a GPL workflow engine.
While the post was written by Claude, I admit it was opinion laundering, in that if Claude had taken a different view I probably would not have posted it. I also think people's short-term concerns in the face of AI are both valid and serious, so my apologies for appearing to discount them.
That said, there is every indication that we are not far away from machines outperforming humans in every way, and actually replacing software developers. Of course, that covers a broad range; OS developers are not the same as front-end devs, since they deal with different concerns and abstractions.
Let me (rather than Claude) address each point. That way you are at least responding to a human.
1) Humans are Fast Learners.
It has been obvious since I entered the industry that you learn and adapt or die. Programming in COBOL might still be a sweet gig even as its use declines, but if so you are living on a sinking raft. Developers need to be learning new tech constantly, whether AI exists or not. Machines, for their part, learn through loss-function optimization and backprop, which is energy-inefficient, slow, and very expensive; your average Joe isn't building their own models, except in narrow contexts. Once trained, however, models can be replicated. While Copilot and ChatGPT can't write whole systems yet, there are models and technologies on the way that will do exactly that, and help with the requirements and design too. Don't count on this limitation lasting, as work on learning algorithms continues.
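To illustrate what that mechanism actually looks like, here is a minimal PyTorch sketch of loss-function learning with backprop. It's a toy model on dummy data; every name in it is illustrative, not anyone's real system:

```python
import torch
import torch.nn as nn

# Toy model and fake data; the point is the shape of the loop, not the task.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(64, 10)  # 64 dummy samples, 10 features each
y = torch.randn(64, 1)   # 64 dummy targets

for step in range(100):
    optimizer.zero_grad()        # clear gradients from the last step
    loss = loss_fn(model(x), y)  # loss function: how wrong are we?
    loss.backward()              # backprop: gradients of the loss w.r.t. weights
    optimizer.step()             # nudge the weights to reduce the loss
```

Every pass of that loop costs compute, which is why training is so expensive, while running (or copying) a finished model is comparatively cheap.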
2) Critical Thinking
AI is getting pretty good at this already, although it's not yet at human level. Again, the assumption that we will remain better at critical thinking for any extended period is unrealistic. Why? We are in a positive feedback loop: the better AI gets, the more it can be leveraged to make itself better. Currently this applies to narrow aspects of model optimization, but it is bound to expand. The learning algorithms themselves might also improve, in which case critical thinking will improve along with them.
3) Quest for knowledge (curiosity)
It is true that this kind of innate motivation does not exist in machines, and I can't really say whether it will ever be developed, although given a particular objective there will be instrumental objectives that fall out of it. Humans, for example, have some basic innate drivers, and it is from those drivers that human behaviour arises. Yet it is the complexity of the human brain, built up through years of learning, that gives us complex behaviour such as language and abstract reasoning. It seems, however, that large language models are exhibiting the same kinds of critical reasoning skills even in the absence of core drivers.
In summary, I totally understand why software developers feel threatened by AI. While it is currently able to help developers, it is also moving toward making them redundant. Not all of them, and not all at once, but enough of them, soon enough, that we are starting to be afraid. This could create a glut of developers, which in turn would reduce the cost of labour and thus developers' incomes.
In my estimation, the probability of a good outcome like the one I presented in my article is actually quite low; it isn't the probable outcome. The probable outcome, as near as I can see, is corporate control of AI driving ever-greater business efficiency as people are fired. That means a crisis in which businesses are enriched while people are impoverished, which will lead to conflict and social disruption. The rate of change may simply be too fast for human adaptation in the classic sense.
Why am I even posting this? Because while I believe the concern is real, answering it with faith in the human spirit and human capacity fundamentally misunderstands the potential of AI for utter disruption. It is scary as all hell. The story I linked to was an attempt to find a future we can aim for that is sustainable and maintains a degree of human agency, but which accepts that machines will outperform us in every way we can imagine.
My intention is not personal attack; I'm totally on board with finding a positive way forward for humanity.
Here's what I think you're getting wrong: "we are not far away from machines outperforming humans in every way".
AI can already perform coding tasks better than humans, but humans know what to ask for and how to apply the result. That's what critical thinking is.
In conclusion, developers won't lose their jobs to AI. They'll lose their jobs to other developers using AI.
So if you don't want to lose your job, learn how to use AI and how to test its output; you'll be doing a lot of that pretty soon, if you don't already.
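As a sketch of what "test the output" can mean in practice: treat anything the assistant writes as untrusted until it passes tests you wrote yourself. The `slugify` helper below is hypothetical, standing in for AI-generated code:

```python
import re

def slugify(title: str) -> str:
    # Pretend this body was pasted in from an AI assistant.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Plain pytest-style tests: the human decides what "correct" means.
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_strips_punctuation():
    assert slugify("  --AI & Jobs--  ") == "ai-jobs"
```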
From my POV, what is more troublesome is an AI that learns how to learn, and therefore how to think. When that happens, our disadvantage will be being human, not being developers. And it's illogical to think it won't happen, IMHO.
To be clear: I thought your article was a very interesting intellectual exercise, and you yourself are clearly intelligent and thoughtful. Where I think you missed was in implying that Dragos’ article was somehow negative or fear-driven compared to your own. In actuality, there was little comparison to be made between the two articles: yours is a theoretical discussion, while Dragos’ is a practical suggestion for present times.
I've been working on AI since I was a boy of 11; I played with a very simple Eliza and wrote my own text networks, which were perhaps an early GPT (not really). Learn about AI by all means. But remember, when Covid hit there was the 'learn to code' meme, which only reinforced that not everyone can code, or frankly wants to. ML and AI technology is no easier. You can use PyTorch to train and run a model, or just call the OpenAI API, but the difference is like building a car versus driving one. So plenty of coders will be left behind.
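For contrast with the training loop earlier, the "driving" end is just an API call. A minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY in your environment; the model name is only an example:

```python
from openai import OpenAI

# Reads OPENAI_API_KEY from the environment.
client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Explain backprop in two sentences."}],
)
print(reply.choices[0].message.content)
```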
Me, being the monster I am, I'm currently building a No Code platform. The core technology is already written and has been in use for ten years; now I'm working on getting it ready for release as a SaaS solution.