Bruno Pinheiro

GitHub Copilot: great assistant, terrible architect

When the AI hype train arrived, I just watched from the sidelines. I was skeptical about its actual capabilities and didn’t know exactly what to expect.

A while later, AI assistants began to work their way into our IDEs, aiming to ease our work. Again, I was skeptical, but eventually I started using some tools, especially GitHub Copilot.

It’s been almost one year since I started using it daily for my development tasks. Although I didn’t use it for everything (and I still don’t), I tried using it anytime I needed help.

After all this time, here are some of my thoughts on Copilot and similar AI assistants.


Treat it as an assistant, not a code generator

Don't ask it to write something too big, like an entire class, or worse, a whole application. I like to think of it as an assistant developer who has all the theoretical knowledge available, including documentation, syntax, and code examples. You name it, Copilot has it. This is great, don’t get me wrong, but it’s not enough. Especially when it comes to architectural decisions, it doesn’t help much.

Copilot only knows what it was trained on. It doesn’t understand the particularities of your application. By particularities, you know what I mean, right? I'm talking about that piece of code or architectural decision that is fundamentally wrong but has to be that way in your application. For example, the system I work on has some endpoints that are only used to fetch information. They should be GET endpoints, right? But they are POSTs, and they were made that way on purpose.
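
To make that concrete, here’s a hypothetical sketch (not my actual code; the route and payload names are made up): a read operation deliberately exposed as a POST because its filters are too large and nested to fit comfortably in a query string.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// In theory this should be a GET, since it only fetches data.
// In practice the filter payload is a large, nested object,
// so the team deliberately exposed it as a POST with a JSON body.
app.post("/reports/search", (req, res) => {
  const { dateRange, departments, metrics } = req.body; // hypothetical filter shape
  // ...query the data using the filters...
  res.json({ results: [], appliedFilters: { dateRange, departments, metrics } });
});

app.listen(3000);
```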

Anyone who sees the code knows that it’s wrong (in theory), but they understand why it was done like this.

Copilot doesn’t understand things like that. Not yet, at least.

Another key factor in getting Copilot to understand you is context. When you give it an instruction, the assistant only knows the context you pass along. If you don’t provide the right context, it won’t know what you’re talking about, and the output will suffer. You know the saying, garbage in, garbage out? It applies very well here: the better your input (your prompt), the better the output.

In summary, use it for smaller tasks: a single method or a SQL query, for example. Something where you know exactly what you want, so you can check whether the output is correct.
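
As an illustration of what I mean by a small, verifiable task (a made-up helper, purely for the example): a single function whose expected behavior you can state up front and check at a glance or with a quick test.

```typescript
// A small, self-contained helper: easy to describe in a prompt,
// and easy to verify by reading it or running a couple of cases.
interface LineItem {
  unitPrice: number;
  quantity: number;
}

function orderTotal(items: LineItem[]): number {
  return items.reduce((sum, item) => sum + item.unitPrice * item.quantity, 0);
}

console.log(orderTotal([{ unitPrice: 10, quantity: 2 }, { unitPrice: 4.5, quantity: 1 }])); // 24.5
```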

Do NOT blindly trust the output.


Copilot teaches you about your own coding style

Just like other AI tools, Copilot is predictive: the more of your code it sees, the better its predictions. I’ve noticed that after a while the predictions got pretty accurate, which I took as a good sign. It means my code is consistent in style, and Copilot started making genuinely useful suggestions.

Part of software development is a creative process, but another part is not. It’s the boilerplate, the repetitive code, the same old recipes. That’s where Copilot truly shines, not on the creative part.

Given its predictive nature, it thrives on repetitive tasks: renaming a variable and updating it throughout the class, or typing a variable's type and having it suggest the name and initialization. After a while, Copilot picks up your style, and the suggestions get very close to what you would have written yourself.
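
To illustrate the kind of boilerplate I mean (a made-up class, just for the example): once the fields are declared, the constructor and its assignments are exactly the sort of code Copilot tends to fill in for you.

```typescript
// The field list is the part you actually think about;
// the constructor below is the repetitive part Copilot usually predicts.
class CustomerDto {
  readonly id: string;
  readonly name: string;
  readonly email: string;

  constructor(id: string, name: string, email: string) {
    this.id = id;
    this.name = name;
    this.email = email;
  }
}

const customer = new CustomerDto("c-1", "Ada", "ada@example.com");
console.log(customer.name); // "Ada"
```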

Being able to write less of this repetitive code and just check and accept suggestions is a huge win for me. I know it might seem small, but believe me, these small things add up.

Just like any other tool, Copilot is here to make our lives easier, and it does it very well when helping with those repetitive tasks.


Trust, but verify

Remember what I said about not blindly trusting the output?

Think of a junior dev, for example. You can't just assume their code will be correct; you have to verify it thoroughly. Treat Copilot's output the same way: expect it to be correct, but always verify.

Even though it has all the information about syntax, best practices, and so on, that doesn't guarantee a good output. A badly written prompt will get you a bad response, or the model might simply hallucinate. I'm sure you've heard about hallucinations. Copilot once suggested a piece of code that called a method that simply didn't exist. Where did it get that method from? Absolutely no idea.

It's also a good idea to check the response for vulnerabilities. After making sure the output is correct, ask Copilot itself to look for security issues. Make sure the code is the best it can be before adding it to your repository.


The shift from writing code to reading/validating code

Even though Copilot and other LLM-based tools aim to make us more productive, I've noticed that I'm not really writing more code; quite the opposite, actually. I'm writing less code, but I'm reading a lot more.

If we shouldn't simply trust the output, that means we should read it carefully. So the more we use it, the more code we have to read.

I’m spending less time writing raw code and more time validating AI output. Does the output make sense? Is it correct? Is it optimized when it comes to performance? Could the variables be named better? These are all valid questions to ask when deciding if an output deserves to be put in your application or not.


Final Thoughts

After almost one year of using Copilot daily, I can say it has definitely earned a place in my toolbox. But as an assistant, not as a replacement.

It shines the most when helping with small, repetitive tasks, but it still needs guidance, context, and careful review.

Using it has shifted how I work. I now spend less time typing code and more time reading, validating, and improving it.

In the end, Copilot makes me a faster developer, not because it writes the code for me, but because it helps me focus on writing better code myself.
