Let me say this in advance: I love Copilot. Or, to be more precise, the Copilot plugin for IntelliJ. It lets me choose suitable models, and accepting or rejecting its changes is a seamless workflow. For me, the whole experience is just really joyful.
But the best thing about Copilot is not the technical aspects. It is the name. For what is a copilot? A copilot is someone who helps the pilot with basically everything the pilot does, but who is generally not there to direct the work during a flight. That's the pilot's job.
That's the sad thing about this whole craze to fully automate software development. Or to replace developers. It's like saying, "well, we have a copilot, and he can basically do the job as well, so why bother having a pilot?"
Because the copilot and the pilot complement each other, and together they create a safer and better flight than either could on their own. Also, what do you do when the pilot has a stroke? Or is about to make a wrong decision? That's when you need the copilot. And that's true the other way around, too.
Anyone with any understanding of how LLMs work, even a basic one like mine, knows that this type of AI is not intelligent the way you or I are intelligent. But here's the thing: the whole debate about "is it more or less intelligent than us?" really misses the point. That question would only make sense if we were talking about the same type of intelligence. But we aren't.
We're talking about a type of intelligence that complements ours very nicely and makes up for some of our weaknesses. Humans can't crunch large amounts of data for relations. AI can. But humans can do explicit reasoning. AI as it is constructed (the LLM type, that is; there are actually systems that can, in limited settings) can't do explicit reasoning. It has no mechanisms for it.
That is due to what LLMs do. At their core, they complete your input with the most likely continuation, one token at a time. You might be misled by the "thoughts" an AI tells you it has, but those are also just more completion. LLMs have no explicit mechanisms for reasoning.
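To make that concrete, here's a deliberately tiny sketch. It is not how any real model is implemented (a transformer is a vastly more sophisticated predictor trained on huge corpora), just the shape of the generation loop: a toy bigram "model" that completes a prompt by repeatedly appending the statistically most likely next token. The corpus and names are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy training data, invented for this example.
corpus = "the pilot flies the plane and the copilot assists the pilot".split()

# "Training": count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt: str, max_tokens: int = 5) -> str:
    """Greedily append the most likely next token, over and over.

    Note what is absent: no world model, no inspection of facts,
    no step that could be called explicit reasoning. Just statistics.
    """
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break  # never saw a continuation for this token
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(complete("the copilot"))
# -> "the copilot assists the pilot flies the"
```

The output even sounds vaguely plausible while meaning nothing, which is exactly the point: plausible continuation is the core operation, and everything else a real LLM does is a far more powerful version of it.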
Of course, we don't really know whether human brains have explicit reasoning mechanisms either. But I can take completely new knowledge that I have never seen before, integrate it into my internal world model, and reason based on that explicit knowledge. What's more, I can tell you in detail the reasoning steps I took and the mechanisms I used (or at least someone trained in doing so can).
And I think the reason it's so dumb to try to replace developers is that producing good software needs both: the big-data crunching for relations and the explicit reasoning. If you have just the big-data crunching, sure, in some cases that might be enough. But for critical software it isn't, and many types of software we wouldn't call critical are actually pretty critical for their users. Just think of what happens to an administration if its SAP system goes down. I once waited over an hour past my scheduled time for a doctor's appointment because they couldn't get the SAP system to work, and therefore couldn't register that I was actually there. Even with non-critical systems, a lot of people depend on them.
Thankfully, at my workplace we are not looking for ways to replace people. And I actually think the hype perceivable in the media is far more exaggerated than things really are in the industry. If there weren't a lot of people out there who know what they're doing, we wouldn't have the generally reliable software ecosystem we all depend on every day.
What do you think of the argument? Do you agree, disagree? And why?
Thanks for reading :)