Christian Heilmann

Originally published at christianheilmann.com

AI Code completion is like cruise control - and that's great news for bigger teams

When machine learning assisted code completion came around in the form of GitHub Copilot and the fast-follow Amazon CodeWhisperer, it was impressive to see just how quickly a new idea moves through the Gartner Hype Cycle.

The Gartner Hype Cycle applied to how machine learning aided code completion is perceived: we went from "this makes developers obsolete" to "it is really broken", then "this is convenient" and finally "this can help us"

  • The first impression was that this would be the end of development as we know it, and that it was yet another attempt by big corporations to benefit from free software and data on the internet.
  • The next step was to bash the tool and counteract the hype. People found its flaws and generally jumped on the "computers do dumb things without humans" bandwagon.
  • Once you use a tool like this for a while, though, you start to realise that the original criticisms are largely based on not giving the system enough to work with.
  • And the more you use it, the more you realise that the "write a comment, get the code" or "start a function name, get the best of StackOverflow" scenario is a great way to show what the tool does, but in reality the least powerful part of it.

I've been using GitHub Copilot from very early on and lately also got access to Amazon CodeWhisperer, and I have come to the conclusion that this is indeed the future of incredibly effective development. Not only for individual developers, but even more importantly for larger teams.

Getting used to cruise control

Recently our car gave up and we had to get a new one. We replaced the 14 year old mechanical monster with a fancy new one with all the bells and whistles. I know I am very late to the game, but for the first time I now have a car with cruise control. It felt weird to use it. I turned it on, the car accelerated without me doing anything, and not having to step on the gas at all felt like the "driving" had been taken away from me. However, it also means I get less tired, I can concentrate on the driving itself rather than on keeping to the legal speed, and it turns out I use a lot less petrol.

This is what I found ML assisted code completion to be like when you use it for a longer time. At first it feels annoying, noisy and pushy to get code recommendations for every word you write, but you soon get used to it. What is even more interesting is that while the first recommendations did not feel at all like my code, the system soon learned how I like to write things.

Much like your mobile phone gets used to your texting mannerisms when offering autocompletion options, Copilot realised that, for example, I favour event delegation over adding event handlers to everything. Even more interesting is that it recognises context.

Context recognition

I had to create some tables of demo data for a project. I wrote an HTML table and defined Name, Type, Width, Height and File Size as table header cells. I then created a JavaScript array with demo JPG files, added a reference to the table body and defined an empty string called out.

This prompted Copilot to offer me a loop over all the array items that put the right dummy data into each data cell:

Context recognition example where Copilot recognised the table structure
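
The suggestion looked roughly like this – a sketch reconstructed from the description above, not the original demo code. The array name files and the dummy values are my assumptions; out and the table body reference come from the text:

```javascript
// Reconstruction of the setup described above – the array name `files`
// and the dummy values are assumptions, `out` and the table body
// reference follow the description.
const files = [
  { name: 'beach.jpg', type: 'JPG', width: 1024, height: 768, size: '240 KB' },
  { name: 'forest.jpg', type: 'JPG', width: 800, height: 600, size: '180 KB' }
];
const tbody = document.querySelector('table tbody');
let out = '';

// The kind of completion Copilot offered: loop over the array and map
// each property to the matching header cell (Name, Type, Width, Height,
// File Size).
files.forEach(file => {
  out += `<tr>
    <td>${file.name}</td>
    <td>${file.type}</td>
    <td>${file.width}</td>
    <td>${file.height}</td>
    <td>${file.size}</td>
  </tr>`;
});
tbody.innerHTML = out;
```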

In another example I wanted to create a "copy this" button and wrote a button element with a data attribute of snippet inside a list. When I added an event handler to the parent element, Copilot realised that I wanted an event delegation solution that checks for a button and reads the snippet data. It also inferred from the "copy" text of the button that I wanted to push this data to the clipboard:

Context recognition example where Copilot created a "copy this" JavaScript function from an HTML button with a data attribute and the text copy
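
To make the pattern concrete, here is a sketch of the kind of delegated handler this arrives at – the list id and snippet value are made up, the data attribute and the "copy" button follow the description:

```javascript
// Hypothetical markup, following the description above:
// <ul id="snippets">
//   <li><button data-snippet="npm install mypackage">copy this</button></li>
// </ul>
const list = document.querySelector('#snippets');

// One handler on the parent instead of one per button (event delegation).
list.addEventListener('click', event => {
  const button = event.target.closest('button[data-snippet]');
  if (!button) { return; }
  // The "copy" label is the hint: push the snippet data to the clipboard.
  navigator.clipboard.writeText(button.dataset.snippet);
});
```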

Framing ML assisted autocompletion as "give me things from the web for my task" misses what it really does. Try it inside a bigger project and you will realise that the autocompletion recognises the workings of your product and offers to call existing local functionality instead of generic, third party methods.
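
As a hypothetical illustration (none of these file or function names come from a real project): once a project contains its own data access helper, completions start pointing at that helper rather than at a fresh fetch snippet.

```javascript
// lib/api.js – an existing project helper (made up for illustration)
export async function getJSON(endpoint) {
  const response = await fetch(`/api/${endpoint}`);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.json();
}
```

```javascript
// settings.js – typing a function name like `loadUserSettings` tends to
// complete against the local helper instead of a generic fetch or
// third party HTTP call.
import { getJSON } from './lib/api.js';

export function loadUserSettings(userId) {
  return getJSON(`users/${userId}/settings`);
}
```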

Detection of reusable code can lead to an automated code standard

Now, imagine a whole team of people using a system like that and feeding it older, established products that meet the quality standards your company expects. Accepting completions that feel correct and accelerate the coding process, and downvoting wrong offers so they disappear, will have an amazing effect. The system can automatically collate what makes your team most effective and thus establish a code standard and reusable code library that is applied automatically. I spent months of my life trying to detect both of these in my teams' code, and even more frustrating months trying to get people to embrace code standards and best practices. If your editor automatically offers you things that look and work great, there isn't even a discussion about this.

This is the cruise control idea. As developers we keep writing the same code over and over. Most of what we do isn't creating something from scratch, but using already existing libraries or writing the same solution in slightly different ways. This should be automated, and with ML assisted autocompletion it can be.

Code explanation and documentation

One of the most arrogant and false things I keep hearing from people is that their code explains itself. It would be gorgeous if that were the case. It is true that clean, structured code is easier to grasp, and yet you keep finding yourself, months after you wrote a certain solution, looking at it and not understanding why it actually works. Writing good code comments is an art, and one that not many people practise.

One of the big "wow" features of ML assisted code autocompletion is that it can generate code from a comment or a schema. This is great, but I am much more excited about the inverse use case. Copilot Labs and GhostWriter AI Mode have a feature that allows you to highlight a piece of code and have it explain in plain language what it does. This is experimental, and we need to do a lot of "nah, that's not it" for it to become really useful, but it is great to see that, for example, some CSS that is obvious to the people who like to write it gets a human readable explanation of why there is a weird percentage in there.

Detailed explanation of what some CSS code does

Let's cruise along…

There is a lot more to this, and I will keep writing about my findings here, but I for one am super excited about what machine learning can do for us as developers. I think the sweet spot really is in bigger teams rather than in making the individual developer more effective at copying random bits from StackOverflow, as I outlined seven years ago in The full StackOverflow developer.
