I've been using GitHub Copilot with our production codebase for the last 4 months, and here are some of my thoughts:
The Good:
Explains Complex Code: It's been great at breaking down tricky code snippets or business logic and explaining them clearly.
Unit Tests: Really good at writing unit tests and quickly generating multiple scenario-based test cases.
Code Snippets: It can easily generate useful code snippets for general-purpose use cases.
Error Fixes: Copilot is good at explaining errors in code and providing suggestions to fix them.
The Not-So-Good:
Context Understanding: It's hard to convey context to a GenAI tool, especially when our code is spread across multiple files and repos. Copilot struggles with larger projects where a single change touches several files at once.
Inaccurate Suggestions: Sometimes it suggests installing npm libraries, or using methods from npm packages, that don't exist. This is called hallucination: AI-generated code that looks convincing but is completely wrong.
Complex Code: Occasionally, the code it generates is confusing and complex, making debugging harder. In those moments, I wish I had written the logic myself and let Copilot check for errors or bugs.
Overall, GitHub Copilot has been a useful tool, but it has its quirks. When using large language models, the responsibility always stays with the programmer.