
G.L Solaria

Augmented Programming with GitHub's Copilot

You may or may not have heard of the new GitHub project called Copilot (https://copilot.github.com/).

[Image: Copilot helping to retrieve tweets in JavaScript]

It aims to use AI to augment the abilities of a human coder, hence the catchphrase "Your AI pair programmer". It is currently in technical preview, and you need to sign up to participate.

I haven't actually tried it yet, but I can definitely see the promise in augmenting the abilities of coders. The potential for speeding up coding time would be extremely valuable. However, I have a few concerns about this technology.

One problem I have is the ability to inject code that may not even compile! If you are lucky enough to be working with a compiled language, that may not seem like such a big deal. Now consider the problem when applied to dynamic languages such as JavaScript. Code can be injected that the human coder may not even understand (which I call the Ignorant-StackOverflow-Coder-Use-Case). To ensure the correctness of the code, the human coder needs to perform extensive testing (hopefully in automated tests). But how can a human coder write tests for code they may not even understand?
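To make this concrete, here is a hypothetical sketch (my own invention, not actual Copilot output) of how a plausible-looking JavaScript suggestion can hide a bug that only surfaces at runtime, and how a happy-path test written by someone who doesn't really understand the code never catches it:

    // Hypothetical machine-suggested helper: reads fine and runs without
    // syntax errors, but silently returns NaN for an empty array because
    // JavaScript is happy to compute 0 / 0 at runtime.
    function averageTweetLength(tweets) {
      let total = 0;
      for (const tweet of tweets) {
        total += tweet.text.length;
      }
      return total / tweets.length;
    }

    // A test written without really understanding the code only exercises
    // the happy path, so the empty-array case goes unnoticed.
    console.assert(
      averageTweetLength([{ text: "hi" }, { text: "there" }]) === 3.5,
      "average of two tweets"
    );
    // averageTweetLength([]) quietly evaluates to NaN, and nothing ever asks.

A compiler would not have caught this particular bug either, but in a dynamic language there is not even a compile step standing between the suggestion and production.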

Another problem I have is traceability. How can a code reviewer see the extent of Copilot-written code versus human code? This could be overcome to some degree with tight use of a version control system (e.g. special commits of internal libraries encapsulating injected code), but this would create annoying overhead. Perhaps Copilot can provide a way for a code reviewer to highlight the augmented code but, like I said, I haven't previewed the technology yet.
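For what it's worth, a low-tech version of this could be done by convention today. The commit trailer and file name below are things I have made up, not a Copilot feature, and this is exactly the kind of annoying manual overhead I mean:

    # Hypothetical convention: commit machine-suggested code separately and
    # mark it with a trailer so it can be found again later.
    git add src/tweets.js
    git commit -m "Add tweet retrieval helper" -m "Assisted-by: Copilot"

    # A reviewer (or a CI job) can then list every commit that carried
    # injected code and review those diffs with extra care.
    git log --grep="Assisted-by: Copilot" --patch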

Related to traceability, one of my concerns is the ability for Copilot to systematically inject bugs. If a bug is found in code the machine learning algorithm has been trained with, how can the bug be traced? From my limited experience of machine learning, identifying the offending training fragment is probably very difficult and may involve many code fragments, so I don't know how feasible it would be to solve this problem.

The traceability problem actually works both ways: how can a bug found in injected code at testing time be flagged back to the machine learning algorithm? This feedback loop could expose the training and testing of the machine learning pipeline to adversarial attacks (https://en.wikipedia.org/wiki/Adversarial_machine_learning), so again I am not sure how feasible it would be to solve this problem.

Even worse, and related to my concern about bug injection, is the ability for it to pervasively inject security vulnerabilities into code. It would be relatively easy for a bad actor who knows about a zero-day to search for matching code and exploit other code bases.
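Again, a made-up illustration rather than anything I have seen Copilot produce (the db.query API here is a generic stand-in, not a particular library): if the model has learned a widespread but unsafe idiom, such as building SQL by string interpolation, anyone who knows the idiom is widespread can go hunting for it in public code bases.

    // A common but unsafe pattern a model could plausibly have learned:
    // interpolating user input straight into SQL invites injection.
    function findUser(db, username) {
      return db.query(`SELECT * FROM users WHERE name = '${username}'`);
    }

    // The safer form passes the value as a bound parameter, so a suggestion
    // learned from code like this would not carry the vulnerability along.
    function findUserSafely(db, username) {
      return db.query("SELECT * FROM users WHERE name = ?", [username]);
    }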

Ideally I would like to see features that allow a code reviewer to trace all Copilot-injected code and diff it against the most significant training source fragments. I would also want to see the bug history of the code related to the injected code, and any existing automated tests for it. Broadly speaking, I think we need to apply the same criteria to injected code that we use for third-party libraries. We carefully evaluate a third-party library for license compatibility, the reputation of the contributors, the depth of community support, and the code coverage of its automated tests, and I think the same should apply to augmented code.

So I guess you are getting the feeling that I have my reservations about this technology, but I will reserve my judgement until I actually use it! It could be the new StackOverflow. Now don't get me wrong - StackOverflow revolutionised the way we code, but it is a double-edged sword that only reaps benefits when it is used in an informed manner.

Top comments (2)

Thorsten Hirsch

I don't understand your concerns. The programmer is responsible for all of his code, no matter whether he wrote it himself, copy-pasted it from SO, or injected it with Copilot:

  • he needs to understand it
  • he needs to test it
  • he needs to debug it

Good programmers might improve their speed in the early stages of projects, when a lot of simple code can be written. But at later stages Copilot won't be of much use. And bad programmers will likely produce more bad code in a shorter time, but Copilot won't make them good programmers.

I see a bigger issue in the copyright of the injected code, because it lets developers skip the (legally mandatory) step of checking the license.

G.L Solaria • Edited

I think my criticisms come down to how much code it actually injects. If it is just an advanced code completion technology, then I agree with you. If, however, it is injecting more than, say, 4 to 5 lines of code, then I stand by my concerns.

Also, if it is targeted at training new coders, then I would like to know what code is injected, to make sure they actually understand the code and have tested it appropriately.