Mathieu Ferment

One possible evolution of the software development industry thanks to LLMs

Since ChatGPT's launch in 2022, we have been bombarded with statements and opinions about how AI will alter the future of work, software development included.

It is very hard to sort the crazy from the likely in this tsunami of predictions. Today I would like to add my own to this tsunami 😄, based on previous evolutions of our industry.

ChatGPT revolutions

First of all I think it's important to clarify what text-based LLMs can do and what they cannot. I'll focus on text-based LLMs and ignore image models like DALL-E because we're interested in AI writing code. I'll use ChatGPT as my example, to simplify things, but you can replace ChatGPT in my blabbering with any text-based LLM.

In my opinion, ChatGPT and its friends have brought two amazing features into this world.

The first feature is ChatGPT being able to generate relevant text. It's the most obvious one because it is the one that made it famous. After all, we refer to these models as "generative AI", so it is natural to focus on their ability to generate text.

ChatGPT is capable of generating huge amounts of text, most of the time very relevant, from a given prompt. We've seen it write books, speeches, discussions, and obviously programming code, code that can actually be run. In case you don't know, the way ChatGPT generates text is by predicting what the next word should be. So if you ask ChatGPT to complete the sentence "Happy new ..." it'll probably complete it with "year" because, out of context, the most likely word after "Happy new ..." is "year". ChatGPT learned this by reading and consuming billions and billions of documents scattered all over the Internet. When it generates text, ChatGPT is "simply" (I know, I know, there's nothing simple in that, but you get what I mean) using probabilities: it outputs the most probable word, one after another. If someone asks it to solve "2+2 = ?" it will write 4, not because it has computed the result, but because, having read millions of documents about mathematics, it knows the most likely answer is 4. That is how ChatGPT works: it writes the text that has the highest probability of being true / relevant / what you wanted. And it does this in a very smart way.
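To make this concrete, here is a deliberately naive PHP sketch of the "most probable next word" idea. A real LLM uses a neural network with billions of parameters, not a frequency table; this toy bigram counter only illustrates the principle:

<?php
// Toy illustration of "predict the most probable next word".
// A real LLM uses a neural network, not a frequency table.

$corpus = "happy new year happy new year happy new start happy birthday";

// Count how often each word follows another (a "bigram" table)
$words = explode(' ', $corpus);
$bigrams = [];
for ($i = 0; $i < count($words) - 1; $i++) {
    $next = $words[$i + 1];
    $bigrams[$words[$i]][$next] = ($bigrams[$words[$i]][$next] ?? 0) + 1;
}

// Predict: output the most frequent follower of the given word
function predictNext(array $bigrams, string $word): ?string {
    if (!isset($bigrams[$word])) {
        return null;
    }
    arsort($bigrams[$word]);

    return array_key_first($bigrams[$word]);
}

echo predictNext($bigrams, 'new'); // "year" (seen twice, versus "start" once)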

That's for the first feature. But for me the second feature is 10 times more impressive.

The second feature brought by ChatGPT is its ability to infer meaning from text. For example, if I give ChatGPT the prompt "I'm in Paris and I want to visit it, what should I do?", ChatGPT is able to "understand" what I said even though I was very brief. It can "understand" (the word is not accurate because ChatGPT does not actually perform understanding) that my current location is the city of Paris, France, that I want to tour the city, and that I am looking for places and things to see. To achieve this it broke my sentence down into small tokens, then matched them against its gigantic body of learned knowledge to find the relationships between them and finally work out what I was trying to say. It was able to link "I do" with "visit it" because it has read the entire Internet and has already seen these tokens, or equivalent ones, used together. ChatGPT learns by imitation: it knows "yes" and "no" are opposites because it has read documents where the two were used in opposite ways.
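Here is a crude caricature of that token-matching idea in PHP. The association table is hand-written for illustration; in a real LLM these relationships are learned statistically from billions of documents, not looked up in a fixed map:

<?php
// Crude caricature of inferring intent from tokens. A real LLM learns these
// associations from billions of documents; here they are hand-written.

$associations = [
    'paris'  => 'location: Paris, France',
    'visit'  => 'intent: tourism',
    'should' => 'request: recommendations',
];

$prompt = "I'm in Paris and I want to visit it, what should I do?";

// Break the prompt down into lowercase word tokens
preg_match_all('/\w+/u', strtolower($prompt), $matches);

// Match each token against the "learned" associations
foreach ($matches[0] as $token) {
    if (isset($associations[$token])) {
        echo $associations[$token] . "\n";
    }
}
// Prints the location, the intent, and the request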

In my opinion this is the real game-changer. Generating relevant text using probabilities is something we've been doing for years; ChatGPT simply does it at a level never seen before. But its ability to infer meaning from sentences that are not even written in a specific manner is a lot more important. We had text recognition systems before, but they were limited. ChatGPT has outmatched all of them. It can even understand sentences with grammar issues in them, something a "standard" text recognition system cannot do, because it was built to recognize well-written text, not gibberish.

I will come back to how this ability is a game-changer later.

ChatGPT writing code

Now let's compare two scenarios:

  • in the first scenario, a skilled PHP developer is tasked with writing a PHP script that fetches the top ten contributors from a GitHub repository
  • in the second scenario, ChatGPT is tasked with the same job

In the first scenario, let's assume this developer has never used the GitHub API before. So he's going to look at the API documentation, read it, find the endpoints he can use, understand the expected payload and response formats, and then finally write the code to parse the answer from GitHub and extract the data he is looking for. It will take him between 10 and 30 minutes to write this:

<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Exception\RequestException;

function getTopContributors($owner, $repo, $token = null) {
    $headers = [
        'User-Agent' => 'PHP Guzzle Client',
        'Accept'     => 'application/vnd.github.v3+json',
    ];

    // An access token is optional but raises the API rate limit
    if ($token) {
        $headers['Authorization'] = 'token ' . $token;
    }

    $client = new Client([
        'base_uri' => 'https://api.github.com/',
        'headers'  => $headers,
    ]);

    try {
        $response = $client->request('GET', "repos/$owner/$repo/contributors", [
            'query' => ['per_page' => 10]
        ]);

        $body = $response->getBody();
        $contributors = json_decode($body, true);

        return $contributors;

    } catch (RequestException $e) {
        if ($e->hasResponse()) {
            $errorResponse = $e->getResponse();
            $errorStatusCode = $errorResponse->getStatusCode();
            $errorBody = $errorResponse->getBody();
            $errorMessage = json_decode($errorBody, true);
            return [
                'error' => true,
                'status' => $errorStatusCode,
                'message' => $errorMessage['message'] ?? 'An error occurred'
            ];
        } else {
            return [
                'error' => true,
                'message' => $e->getMessage()
            ];
        }
    }
}

// Example usage
$owner = 'octocat';
$repo = 'Hello-World';
$token = 'your_github_personal_access_token';

$contributors = getTopContributors($owner, $repo, $token);

if (isset($contributors['error'])) {
    echo "Error: " . $contributors['message'] . "\n";
} else {
    echo "Top 10 Contributors:\n";
    foreach ($contributors as $contributor) {
        echo $contributor['login'] . " (" . $contributor['contributions'] . " contributions)\n";
    }
}

In the second scenario, ChatGPT is given the same assignment and outputs the exact same solution in 10 seconds.

Is it code or...?

Sounds impressive, right? 10 seconds instead of 30 minutes? But let's not pretend otherwise: we all know here 😄 on dev.to that this simple task is not representative of a developer's work. Most developers don't get to write code from scratch every day, and most developer missions come with a lot more context and constraints. It was a very simple scenario, and ChatGPT being 60 times faster than a human in this case means nothing regarding ChatGPT's ability to replace developers.

That is not what interests me here. Let's look at the code that was written.

In this scenario, some PHP code has been written. It uses Guzzle, the popular PHP HTTP client, and it also uses PHP's built-in JSON parsing function json_decode() to parse the output before writing the desired answer.

In the first scenario, did the developer really write some "PHP code"... or has he rather been using multiple things and gluing them together to obtain what he needed?

What I mean is that in the first scenario, the developer did not write code to send raw HTTP requests. He also did not write code a computer can directly understand: he wrote PHP code that is compiled down to opcodes before being interpreted by the Zend Engine. And he did not write the logic to transform a JSON string into PHP-usable data: he used PHP's built-in JSON parsing capabilities. In fact the code he wrote mostly acts as a carrier: it uses one piece, fetches the output, transforms it a little, then passes it to another piece that returns another output, which is then given to yet another piece for another operation before finally outputting something. This is what I call using multiple things and gluing them together to obtain what I need and... that is what most developers do today.

Unless you're working in a specific industry, chances are high that the code you write every day is glue. Glue that uses code and systems (frameworks, libraries, applications...) written by other developers, which you can leverage to build something else. And your code mostly passes things in and out of these multiple systems to obtain the behavior you want for the end-user. This is typical of a modern code project.

So ChatGPT has not written code either. What it did was glue multiple things together to obtain what I asked for, exactly like the human developer.

The evolution of code

Alan Turing helped design some of the very first computers in the late 1940s, and machines of that era ran on punch cards. Punch cards were used as data storage for those computers. Can you imagine how tedious it must have been to write a program using punch cards? You had to translate what you wanted to achieve into operations encoded as the right holes in a card. A very different method than the ones we use today.

Punch cards were later replaced by vacuum-tube memory and magnetic tape, which were much more efficient. A lot better, but I guess "writing code" at that time was still tedious and very time-consuming. Still no keyboards or screens, still no IDEs or cloud.

Fast-forward a bit and we get the first functioning programming languages in the early 1950s, with assembly languages and Autocode. Then came FORTRAN and many others, until we reached modern languages such as PHP in 1995.

Each of these programming languages is an abstraction. At the end of the day, the computer can only run machine code. We have created plenty of tools (compilers, interpreters...) that transform our fancy programming languages into machine code, because fancy programming languages are much easier and faster to write.

This is why we create programming languages: because writing

foreach($data as $item) { echo $item->id; }

is so much easier and faster than writing the equivalent Assembly code

section .data
    data_items dq item1, item2, item3, 0 ; Array of pointers to items, terminated by 0
    format db "%d", 10, 0                ; Format string for printf: "%d\n"

section .bss
    item1 resq 2 ; Reserve space for item1 (id, next)
    item2 resq 2 ; Reserve space for item2 (id, next)
    item3 resq 2 ; Reserve space for item3 (id, next)

section .text
    extern printf
    global _start

_start:
    ; Initialize items
    mov rax, 1
    mov [item1], rax
    mov rax, 2
    mov [item2], rax
    mov rax, 3
    mov [item3], rax

    ; Point to the start of the data_items array.
    ; rbx is callee-saved, so printf will not clobber it inside the loop.
    mov rbx, data_items

.loop:
    ; Load the current item pointer
    mov rdi, [rbx]
    ; Check if we've reached the end (NULL pointer)
    test rdi, rdi
    jz .done

    ; Print the id: printf(format, id)
    mov rsi, [rdi]  ; Second argument: the id
    mov rdi, format ; First argument: the format string
    xor rax, rax    ; No vector registers used in this variadic call
    call printf

    ; Move to the next item
    add rbx, 8
    jmp .loop

.done:
    ; Exit the program
    mov rax, 60
    xor rdi, rdi
    syscall

(I have no idea how to write Assembly, I asked ChatGPT to write this one)

Prompts are an abstraction

If I write the PHP code

foreach($data as $item) { echo $item->id; }

then it will be compiled / interpreted / transformed into opcodes resembling the Assembly code above. I wrote the behavior I wanted to happen (the input), and PHP and the Zend Engine convert it into something the machine can run (the output).
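If you are curious, you can actually look at those opcodes yourself. One way, assuming the OPcache extension is installed (opt_debug_level is a debugging setting, and loop.php stands for whatever file contains the foreach above):

php -d opcache.enable_cli=1 -d opcache.opt_debug_level=0x10000 loop.php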

If we take a step back, we might notice that this is somehow similar to asking ChatGPT "Please write me some PHP code that fetches the top ten contributors from a GitHub repository", and having ChatGPT transform this input into PHP code that can be run: an output.

Just like my knowledge of PHP lets me write the same thing in 1 line of PHP instead of 50 lines of Assembly, I can likewise obtain the same PHP code by writing it myself in 30 minutes or by prompting ChatGPT for 10 seconds. And the reason ChatGPT can do this is because:

  1. It has this amazing capability to infer meaning from my prompt
  2. It can use the knowledge learnt from billions of documents to generate the right PHP code, implementing the behavior I want to happen
  3. ChatGPT is not writing code; it's doing a developer's work, which is using multiple things and gluing them together to obtain what it needs.

I like to imagine ChatGPT as the next iteration of the compiler. But instead of converting programming code into machine code, it converts text into code. Or better than text: behavior. Intent. The general principle remains the same: I am responsible for describing what I want to happen using a specific syntax (human speech for ChatGPT, PHP for the Zend Engine) and it is able to translate it into code that can, in the end, be run by a computer. ChatGPT is an abstraction layer.
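To stretch the analogy, here is a sketch of what "compiling a prompt" could look like with today's tooling: a thin PHP wrapper around OpenAI's chat completions API. The endpoint and payload shape follow the public API; the promptToCode() helper and the model name are my own illustrative choices:

<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

// Sketch of a "prompt compiler": natural language in, PHP code out.
// promptToCode() and the model name are illustrative, not a standard.
function promptToCode(string $behavior, string $apiKey): string {
    $client = new Client(['base_uri' => 'https://api.openai.com/']);

    $response = $client->request('POST', 'v1/chat/completions', [
        'headers' => ['Authorization' => 'Bearer ' . $apiKey],
        'json' => [
            'model' => 'gpt-4o',
            'messages' => [
                ['role' => 'user', 'content' => "Write PHP code that does the following: $behavior"],
            ],
        ],
    ]);

    $data = json_decode($response->getBody(), true);

    // The generated code is the "machine code" of this new abstraction layer
    return $data['choices'][0]['message']['content'];
}

echo promptToCode('fetch the top ten contributors from a GitHub repository', 'your_api_key');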

A possible future, what happens to developers

ChatGPT is impressive if you ask it to write some code from scratch, because there is no context. But give it a medium-sized codebase and it is (as of today) completely useless, because there are way too many things to analyze. ChatGPT does well with small prompts, and codebases are not small prompts. So for now developers can only rely on ChatGPT to write small, well-isolated chunks of code. It's better than nothing, but we're far from being able to stop writing the code ourselves.

Also remember that ChatGPT is using the knowledge it gained from billions of documents and code examples from the Internet (including GitHub). Meaning: the code it writes is sometimes... garbage. Not because it's stupid, or smart, it's neither: it's simply replicating what it has learnt. So ChatGPT's developer skills are probably those of the "average" programmer, meaning it will (just like all of us) make mistakes that become bugs.

But let's imagine that AI companies manage to solve or mitigate these two issues. What would become of us developers, who are being paid for our ability to read and write code?

When FORTRAN appeared in the mid-1950s, I can imagine that at the beginning there were two groups of developers. Some decided to stick with what they knew: Assembly. They were skilled at it, they were able to write very optimized code because it was very low level, and they did not see the benefits of using FORTRAN. Early versions of FORTRAN also had issues when compiling the code into machine code, making Assembly developers dubious of it. But the abstraction capability of FORTRAN became a catalyst that allowed the developers who adopted it to write programs much faster and more complex than before... and history teaches us that FORTRAN emerged victorious. From then on, I guess, most developers stopped writing Assembly code and started using only FORTRAN. They only used the abstraction, not the layer underneath.

I think we might be in a similar situation. Our "modern" programming languages, which are still fancy line-by-line instructions for the computer, are the equivalent of Assembly. To write a program, we split its behavior into small operations that become instructions that we write, one by one. Just like the first developers used to create punch cards, one by one. ChatGPT prompts could be the equivalent of FORTRAN. With a programming language, we have the ability to dictate exactly what we want to be run. With the abstraction of ChatGPT, we can obtain the same result: less controlled, less accurate, but so much faster and at a higher level. Today we use ChatGPT to help us write code; I think at some point we'll simply stop writing the code and let ChatGPT write all of it, taking care of the implementation details.

In the future we might have conversations like this:

"Hello John, I'm back from a meeting with the customer. I wrote down two new features he'd like us to build next week. How are you doing on this week mission?

  • Well, I finished designing the API component. I had to adjust a few prompts I did for the new usecase, but the API component is now ready. Testing is finished and complete. I will then design the UI component - I think 20 prompt iterations will be enough, it should be ready before tomorrow.
  • Did you see that ChatGPT 23.3 has been published? They introduced something like prompt-based cloud hosting. Prompts are now fit with a specific architecture to make it easier to deploy with prompts. They call it infrastructure as prompts. Also they fixed a few exotic usecases where prompts were being corrupted when dealing with databases too large.
  • Cool, I will check that this afternoon. We should really migrate our current applications to ChatGPTCode 23 as soon as we can, the version 22 is EOL at the end of the year and then it'll become harder to re-run past prompts and obtain the same code."

In this imaginary scenario, the developer is no longer writing code directly. Instead, when given a new feature, he transforms the feature into a list of changes to be made in the codebase, and each change is implemented by providing the right instructions to ChatGPT. The right prompts.

Even if one day ChatGPT is able to "digest" a codebase of millions of lines, I think it would be inefficient to ask it to maintain it. My feeling is that it will be much more efficient to ask ChatGPT to write small, configurable and reusable components that can be connected together. A database component, a UI component, a session component... small components that ChatGPT can easily create and maintain (we need ChatGPT to be able to alter them later for new needs) and that can be coupled together to build an end-user application. Maybe a first candidate for ChatGPT-generated components would be applications built on AWS Lambda functions? They might be small enough to be entirely digested by ChatGPT.
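In PHP terms, I imagine the result looking something like this sketch: small components hidden behind interfaces, where each implementation is generated (and regenerated) from prompts, and the human developer only writes the glue. Every name here is invented for illustration:

<?php
// Imaginary shape of an application assembled from small, prompt-generated
// components. Every interface and class name here is invented.

interface SessionComponent {
    public function currentUser(): ?string;
}

interface DatabaseComponent {
    public function findOrders(string $user): array;
}

// Each implementation would be produced from a prompt, small enough
// for the model to fully digest and alter later for new needs.
class PromptGeneratedSessions implements SessionComponent {
    public function currentUser(): ?string { return 'alice'; }
}

class PromptGeneratedDatabase implements DatabaseComponent {
    public function findOrders(string $user): array {
        return [['id' => 1, 'user' => $user]];
    }
}

// The human developer's remaining job: glue the components together
$session = new PromptGeneratedSessions();
$database = new PromptGeneratedDatabase();

if ($user = $session->currentUser()) {
    print_r($database->findOrders($user));
}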

Conclusion

The first developers "wrote programs" using punch cards. Later developers "wrote programs" in machine code, describing computing operations one by one. The developers after them "wrote programs" using programming languages. Each time, we have added an abstraction level, pushing us further away from byte operations, memory swaps and buffers, letting us focus more on the end result and less on the implementation details. These abstractions have been tools allowing developers to build more and more awesome things. ChatGPT might be the next addition to these abstractions.

If you're worried about what will happen to your job as a developer, don't be. The code written by ChatGPT will still run on a computer. It will need CPU and memory, it will access the network, it will read from and write to a disk. People who understand how code runs on a computer will always be needed to design it well. Asking ChatGPT to write code sounds easy; asking ChatGPT to write enterprise-level code that runs efficiently and securely in the long term is a job. It will require computing skills and logic, just like it does today.
