BekahHW

Posted on • Edited on • Originally published at bekahhw.com

Which Code Assistant Actually Helps Developers Grow?

#ai

Over the past year, we’ve had a ton of conversations at Virtual Coffee about AI. If you’ve opened X or LinkedIn, you know people have very strong opinions about it. Virtual Coffee is a pretty close-knit community, so there are a lot of concerns about the impacts of AI: how junior developers grow (or stay stagnant) with it, whether or not to adopt it as a team, and whether to use it without telling your boss. At the heart of many of these conversations is the feeling that you’re somehow “cheating” if you use AI, and that you won’t learn or grow if you do. I think that sentiment comes from the right place, caring about people, but there are plenty of approaches you can take to keep learning while using these tools. When we consider the evolving landscape in tech, we also need to think about the changing landscape of tech education. Most of us will end up using AI in our workflow, either out of necessity or because our team mandates it. That’s why I believe AI coding assistants actually have the ability to help everyone grow in ways they couldn’t before.

ykdogo tweet

The Learning vs. Speed Trap

Your approach to learning with AI assistants definitely matters, and not all AI coding assistants will help developers improve their coding skills. Learning about the code you’re writing, how your team approaches problems, and how to make AI part of your workflow is necessary to grow in tech. Teams don’t need someone who can prompt their way to a working solution but can’t debug it when it breaks. Most AI coding assistants are optimized for speed, not for learning: they're designed to get you from problem to solution as quickly as possible. If your goal is skill development, think of AI adoption more like onboarding a mentor than hiring a replacement.

Context-Aware Guidance

The most effective learning-focused assistants should understand what you're trying to do and what you should learn from doing it. They highlight patterns, point out potential issues, and suggest alternative approaches that might teach you something new.

Today, I’m testing a few AI coding assistants on a new feature I’m adding to my writing site. (I’m interested in doing a follow-up post that tests them on an existing file. Let me know if that’s something you’d like to see!) I gave each coding assistant this simple prompt:

I want to create a game for this site, where people (not logged in) can add a word to a story. Once the story hits 300 words, it locks the submission. No one should be able to submit more than 3 words in a row.
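To make the requirements concrete, here’s a minimal sketch of the rules that prompt describes: one word per submission, a lock at 300 words, and no more than 3 consecutive words from the same contributor. This is my own illustration (the types and function names are hypothetical), not output from any of the assistants:

```typescript
const MAX_WORDS = 300;
const MAX_CONSECUTIVE = 3;

interface Story {
  words: string[];
  // Contributor id per word, parallel to `words`, so we can check streaks.
  contributors: string[];
}

function canSubmit(story: Story, contributorId: string): boolean {
  // Lock submissions once the story reaches the word limit.
  if (story.words.length >= MAX_WORDS) return false;

  // Count how many of the most recent words came from this contributor.
  let streak = 0;
  for (let i = story.contributors.length - 1; i >= 0; i--) {
    if (story.contributors[i] !== contributorId) break;
    streak++;
  }
  return streak < MAX_CONSECUTIVE;
}

function submitWord(story: Story, contributorId: string, word: string): boolean {
  if (!canSubmit(story, contributorId)) return false;
  story.words.push(word.trim());
  story.contributors.push(contributorId);
  return true;
}
```

Tracking a contributor id per word makes the 3-in-a-row check a simple scan backward from the end of the story; for anonymous visitors, the id could be a session or cookie value.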

Here are my takeaways for Continue, GitHub Copilot, and Cursor, each given the same initial prompt.

Continue

Continue stands out here because their philosophy explicitly addresses the learning problem. Their documentation talks about "amplifying intent" rather than replacing it, and they specifically warn against becoming a "human clipboard." You can explore ideas through “vibe coding” during creative phases, but when it comes to production work, they emphasize that developers need to stay in control. It’s open source, model-flexible (bring your own LLM), and encourages creating custom assistants that reinforce your team’s coding standards. For teams focused on growth and increasing their developers’ coding skills, Continue offers both transparency and control.

Before giving me any code, it provided an initial planning response outlining a step-by-step approach to the user story. Seeing the plan helps the user walk through the task with Continue.

Overview of the approach Continue takes to implement feature

After this, it provided commented code along with a reusable template. The inline comments break down some of the logic and help the user understand the process it went through to generate the code. After the code, it summarized all the changes it had made.
You might notice it mentions Flask, but this is actually an Astro app. That was my mistake. When I initially set up the assistant, I had configured it with Python-focused instructions and directed it to Python documentation, since my rules were originally written for a Python assistant. Once I updated those settings and specifically shared my repository with the assistant, Continue was able to properly follow my project's styling conventions and leverage the existing components.

Lastly, it gave both an overview of what was implemented and ideas for improvement. I appreciate that it provided more context about its approach, commented code throughout the new file, and offered inspiration for my next iteration.

Overview of the approach it took and the improvements I could make

Even without my asking further questions about the decisions it made and the approach it took, it provided a good amount of context for the developer to understand the process.

Copilot

GitHub Copilot is known for excelling at code completion and includes agent-based features. It can speed up repetitive tasks, but the learning tends to happen through osmosis: you need to recognize the patterns in its suggestions and might pick them up over time. To learn more actively, you have to ask Copilot about the decisions it made.

Copilot’s approach was a lot different from Continue’s. It jumped straight into code generation without context-setting or explaining its approach.

It did provide a “wrap up” after the code, but it wasn’t as thorough or complete as what I got from Continue.

Wrap up of the implementation

For what it’s worth, Copilot also told me that creating a Svelte component was the best option, and then, when I questioned it, said that Astro was actually the better path. It was flexible with the approach once I pushed back, but it definitely required me to go down the rabbit hole with it. Learning is definitely passive with Copilot.

Cursor

Cursor offers an AI-first editor experience with agent modes, but their emphasis on "AI that builds with you" raises questions about how much actual building the developer is doing during complex tasks. Although I’m just doing a basic test for this post with a brand new feature, I did experiment a bit with its interactive, AI-native IDE experience by highlighting some of the code it generated and asking, “What does this do?” I plan on doing more of that in a follow-up post for comparison.

After being given the same prompt as Continue and Copilot, Cursor walked me through the approach it was taking and included what files it looked at to get there. However, it automatically created a Svelte file for the game (I did have Svelte installed in the project), and I had to do a lot of back and forth with it to understand the decisions it made and why. I’ve actually never used Svelte, so this was something I had to dig into deeper to understand.

initial overview of the approach

One of the things I appreciated about Cursor’s experience was that it explained the code piece by piece and required me to accept changes, which forces the user to think about what’s being implemented. It also auto-corrected some of its own errors, which was a good opportunity to see how it debugs.

explaining where the issues lies in the error

I wish it had automatically added code comments throughout, but the explanations were valuable. The takeaways at the end walked through the functionality and what the user could do next. It was more thorough than Copilot’s wrap-up, but I preferred the improvement suggestions Continue offered.

wrap up of features and how it works

Building a Learning-First AI Strategy

I think this exploration is important for new folks coming into tech, and for teams serious about using AI to help their developers grow, not just ship faster. A learning-first path should:

  1. Start with Intent: Before adding any AI tool to your workflow or your team’s, clearly define your goals. Are you trying to help yourself or junior developers on the team understand architectural patterns? Learn a new framework? Improve code review skills? Different learning goals might call for different AI approaches.
  2. Choose Tools with Teaching DNA: Look for AI assistants that were designed with education in mind, not just productivity. Continue's emphasis on amplifying developer intent rather than replacing it is a good example of this thinking.
  3. Implement Learning Safeguards: Whatever tool you choose, build processes that encourage learning: require explanations for AI-generated solutions, hold regular code reviews focused on understanding (not just correctness), add pair programming sessions where AI suggestions become discussion points, and document decisions and trade-offs.
  4. Measure Learning, Not Just Output: Track whether your developers are asking better questions over time, not just whether they're closing tickets faster. Are they suggesting alternatives during code review? Can they debug issues in AI-generated code? Are they learning patterns they can apply without AI assistance?

We want developers who are using AI to understand problems. That's the difference between an AI assistant and an AI mentor.

Wrapping Up

I think the best AI coding assistant for individuals and teams focuses on developer skill growth. Based on philosophy, approach, and explicit focus on learning, Continue seems to understand this distinction better than most. But the tool is only part of the equation. The bigger part is approaching AI adoption with learning as the primary goal, not just productivity.
The most productive teams and developers understand what they're building, whether with or without AI help.

Top comments (12)

Samuel (skamansam)

I'm curious to see how Cascade (Windsurf) stacks up against these. I would also like to see how different backends work as well (Claude, SWE, etc.). I have yet to see a really comprehensive comparison, and I've only ever seen Cascade in a comparison once. (I am a huge fan of Cascade, having tried a bunch of others, and Cascade is still the only one that works with me instead of against me.)

BekahHW

Definitely planning on adding Windsurf in a future post!

Dotallio

Totally agree that AI assistants shouldn't just give you code, but help you really understand the 'why' behind it - seeing an assistant act like a mentor instead of a shortcut makes all the difference for growth.

Anyone else found specific prompts or habits that make AI tools more helpful for genuine learning?

BekahHW

I think building the conversation into your work with it is super helpful. For example, just asking "Why did you take that approach?" or "Is there a better approach to take?" A while back, I used ChatGPT to help me with a feature for a personal project that I wanted to get out really quickly. It got the job done, but it used an outdated method, and I had to know it was outdated to make the fix. That's one of the reasons I like Continue so far: I have the latest docs linked to my assistant.
Docs linked to astro

Otherwise, it's super important to cross-check the approach with the docs.

Nevo David (nevodavid)

pretty cool seeing someone dig deeper into what actually helps folks learn - you think it’s more about the tools or how we use them over time?

BekahHW

I think it's both, for sure. I was on an airplane once where they tried to use a wrench to pound in something that was loose on the door...Thankfully, they deplaned us bc that was definitely the wrong tool for the job.

You have to have the right tool for the problem and you have to know how to use it.

david duymelinck (xwero)

If you had to build a similar feature, how much would you remember from this time?
I'm asking because of all the noise about AI use making people less knowledgeable. Are you really learning, or are you always going to need AI as a crutch?

Because of all the fuss, I looked at my own patterns and found that I relied on CLI autocomplete and memorized search terms to find the solutions I needed.
I really wonder why I don't know those things after doing them many times before.

Maybe that is one of the reasons I haven't committed to an AI workflow when coding. I'm afraid of unlearning too much.
I use AI when I'm stuck, and then I do learn something, because the decision to add it is more mine, if that makes sense.

BekahHW

I think we see this all the time. We have tools that allow us to move faster, and I think any engineer who's done an interview where they make you code in Notepad realizes how much autocomplete, Prettier, and extensions that do things for you let them move with confidence.

But they still need to know how to ask the right questions and to debug, bc that's not going to change. This is why you can't rely on AI. You still have to know what you're doing or you're not going to be able to fix it when it breaks. I think if you're deliberate about making sure you're still learning and that you're still questioning the AI implementation, then you'll have the speed of AI and the depth of someone who took the time to learn.

david duymelinck (xwero)

That is all well and good for me, but I was thinking about people who start in IT now and their path to knowing enough about what they are doing.

In the times of Stack Overflow, you had to search for code and copy-paste it. Now faulty code is written for you at breakneck speed.

As an experiment, I built a CLI code generator to bootstrap classes. I was amazed by all the questions I had to make it ask to cover eighty percent of the use cases. With AI, you don't need to ask a lot of those questions because it assumes the answers for you.
So people with little experience don't know there are decisions being made for them.

I feel AI is like an experienced developer that gives you directions but doesn't explain the whole reasoning, because AI can't, as far as I know.
For people who are better at visual comparisons, I think of the "how to draw an owl" meme, where the first drawing is some circles and the second drawing is a detailed owl.

BekahHW

One of the most important parts of learning to be a developer is learning how to ask questions. But, tbh, that's probably true for a lot of professions. I taught college English for 10 years before coming into tech, and my main goal was to teach my students to think critically, listen, and be able to ask good questions, questions that challenged their own beliefs.

I think there are going to be plenty of people that move super fast and look like they know what they're doing bc of AI, but I think at some point, they're going to hit the wall and won't be able to progress or might not even be able to maintain a job.

The example in this post is a new feature, and not a complex one. Using AI coding assistants on an existing codebase will help expose those flaws. When I graduated from bootcamp, I had never worked on a large codebase; I had only worked on my own projects with a handful of files. When I got my first job, I was so overwhelmed by the huge codebase and trying to figure out what was happening where. I do think having a coding assistant would have helped me navigate the complexity. But I also think that if I had depended on AI to do the work for me, I would never have been successful in that role.

0x2e73

Up until a few months ago, I was fully in favor of using AI for development. I thought I had a healthy approach: I wasn’t using prompts to generate full code, I always reviewed the suggestions, and I had Copilot running quietly in the background just to help speed things up a little.

But after a few months, I realized that we almost always end up overusing it—even without noticing.

This really hit me when I took an entrance exam for a computer science program. The test was on paper, no computer allowed. And that’s when I got a wake-up call. It took me way longer than expected to write out my algorithm. It was like the code didn’t come naturally anymore, like I had unlearned how to think through it on my own.

That’s when I decided to completely turn off Copilot. Now, I also try to use ChatGPT as little as possible for tasks I can handle myself. I mainly rely on it to review my code or suggest improvements—but I want to stay sharp, keep my brain active, and continue growing through my own efforts.

Meligy

I wonder how much of the effect is the agentic tool itself and how much is the model. For example, would Copilot get you different results when using a Claude model? Sure, but how different?
