Maxim Saplin
Continue.dev: The Swiss Army Knife That Sometimes Fails to Cut

First Impressions

I had low expectations of Continue.dev when I first installed it in May 2024. The VSCode extension marketplace was full of subpar coding assistants; most of the ones I tried were pet projects: buggy and unappealing.

This one looked promising! Onboarding was easy and there was a selection of various models and providers, although I had my first suspicion that it was not going to be perfect when I had to edit a JSON config to add an Azure OpenAI model. Still, Continue offered all the major features I would expect from an AI coding assistant such as GitHub Copilot:

  • Tab autocompletions in the editor
  • Inline instructions
  • Chat

The chat area, which I instantly moved to the right side of the screen, was actually quite good! I was surprised to see it supported codebase context along with a few other options to augment the chat (web crawl of a URL, specific files, etc.), and it actually worked!

And all of that completely free and open-source!

The Story Unrolls

Yet as time passed, sharp edges started to become apparent, and there were many of those. More on that later.

After using and observing the development of Continue, I see it as a mixed bag full of contrasts and contradictions:

  • Some features are great, some are subpar; the inconsistency in quality kept me puzzled. Despite its nice and polished look, Continue turned out to be half-baked.
  • The problems are with both stability (there are bugs here and there) and UX (e.g., the inline chat is awkward).
  • The core feature, coding, is mediocre. You will not be particularly satisfied with the edits suggested even with state-of-the-art models.

Unfortunately, there has not been significant progress since May.

The Essence

While the stability and UI/UX issues are something users can potentially adapt to (and the small team behind Continue.dev can eventually fix), the coding part is not that simple. The way Continue is built is both its strength and its weakness.

Below is a screenshot of VSCode where you can see how I requested an inline change. At the bottom there's the 'Output' pane with 'Continue' selected as the log source; there you can see the exact prompt that was sent to the LLM. Try inspecting the prompts and LLM outputs in GitHub Copilot or Cursor!

[Screenshot: VSCode with an inline edit request and the Continue prompt visible in the Output pane]

Yet you can see how direct and "blunt" this approach is. Continue just throws at the LLM whatever surrounding code fits into the context and asks it nicely to insert a piece of code according to the user's instruction. And guess what: it does not work well.
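
To make the pattern concrete, here is a minimal sketch of what such a "blunt" edit prompt might look like. This is not Continue's actual template (the real one is visible in the Output pane); the function and the wording are purely illustrative.

```python
# Illustrative only: a naive "wrap the surrounding code and ask for an insertion"
# prompt, in the spirit of what shows up in the logs. Not Continue's real template.
def build_inline_edit_prompt(code_before: str, code_after: str, instruction: str) -> str:
    return (
        "Here is the code before the cursor:\n"
        f"{code_before}\n"
        "Here is the code after the cursor:\n"
        f"{code_after}\n"
        f"Insert code at the cursor that does the following: {instruction}\n"
        "Respond with the inserted code only, no explanations."
    )

print(build_inline_edit_prompt(
    code_before="def circle_area(r):",
    code_after="",
    instruction="return the area of a circle with radius r",
))
```

The entire burden of producing something that plugs back into the file cleanly is left to the model.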

As demonstrated in Aider's blog posts (such as this one), making an LLM update code effectively is a challenging task. LLMs often get confused, change the wrong pieces, go off track if you overload them with text, etc.

There were many instances when I requested inline edits and received non-pluggable code - something I have never seen in Cursor!

And this goes further. Nowadays, we see progress in AI coding assistants that offer clever ways of utilizing LLMs.

Aider has introduced a whole bunch of tricks that made it a better coder than many others:

  • Tree-sitter integration and an LLM-friendly source map sent as global context with every LLM call.
  • A lot of experimentation with the LLM "communication protocol" - there was extensive testing of how to properly ask an LLM to produce a code patch. Eventually, the SEARCH/REPLACE protocol was developed: a convention for pointing at a piece of code and offering an edit (see the sketch after this list).
  • Multi-file edits.
  • Running lint checks and unit tests, feeding the results back into Aider so it can autocorrect, and more...
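
To illustrate the SEARCH/REPLACE idea mentioned above (a simplified rendering, not Aider's exact grammar): the model quotes a snippet that must already exist in the file and the replacement for it, and applying the edit is a literal substitution that can be validated before touching the file. A minimal sketch:

```python
# Minimal sketch of applying one SEARCH/REPLACE-style edit block.
# Markers and parsing are simplified; Aider's real protocol is richer.
def apply_search_replace(source: str, search: str, replace: str) -> str:
    if search not in source:
        # The model quoted code that isn't in the file: reject instead of guessing.
        raise ValueError("SEARCH block not found; edit rejected")
    return source.replace(search, replace, 1)

original = "def greet(name):\n    print('Hello ' + name)\n"
patched = apply_search_replace(
    original,
    search="    print('Hello ' + name)\n",
    replace="    print(f'Hello {name}!')\n",
)
print(patched)
```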

Cursor has been very strong on both ergonomics and concepts:

  • They are doing some magic on the back-end to ensure the best pluggability of the generated code.
  • They think conceptually about autocomplete and multi-line edits, fine-tuning their own models and introducing features like Copilot++. I appreciate how, in one of their blog posts, they formulated the essence of autocomplete as "saving time on low-entropy keystrokes".
  • They are experimenting with shadow workspaces in an attempt to build an AI coding agent with short feedback loops, giving the LLM the perspective of a real developer: showing it what a developer sees, letting it build and run the project, and letting it get the warnings and errors a developer gets!
  • And eventually they delivered a beta feature called "Composer", targeted at multi-file edits.

Cursor and Aider look like surgical instruments that have made great progress in sophistication and effectiveness; next to them, Continue looks like a decent hammer.

Ranking

Here's my subjective judgment, an overall ranking of various coding assistants based on ergonomics and performance (how good they are at coding), ignoring the open/closed aspects:

  • Continue 3/5
  • Sourcegraph Cody 3.5/5*
  • GitHub Copilot 3.5/5*
  • Continue + Aider 4/5
  • Cursor 4.5/5

*Note: I have not used Cody and GitHub Copilot for quite a while due to exploring new tools and options available in the market. This ranking is based on my past experience and impressions from information I came across.

My Way with Continue

VSCode+Continue+Aider is a decent tool set; here's my approach:

  • Use Aider in the terminal to do complex edits
  • Use Continue for super simple inline edits - e.g., you have written something with broken syntax and ask Continue to fix it, which works well
    • Nothing complex though; Cursor's inline chat is way more capable and handles harder tasks
  • Use Continue chat to discuss a piece of code selected in the editor, sometimes augmenting it with the full file by adding it explicitly: asking questions, requesting reviews, etc.

Overall, Continue serves well for minor in-context tasks within the IDE, while Aider is better suited for more complex and detailed coding requirements.
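
For the Aider half of this workflow, a session usually amounts to launching it inside the repository and naming the files it should edit. The file names below are hypothetical, and the model flag is optional:

```bash
# Start Aider in the repo and let it edit these two (hypothetical) files;
# --model selects the LLM (an API key such as OPENAI_API_KEY must be set).
aider src/app.py tests/test_app.py --model gpt-4o
```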

Where Continue Shines / The Strongest Cases for It

  • 100% local - Cody is free and supports your own models, but requires registration. Cursor is free, supports your own models, and does not require registration, but even if you bring your own model, Cursor will still make the LLM calls from its back-end. Continue, on the other hand, does not need a third party in the middle and can keep your LLM calls between the IDE and the LLM API endpoint. This means that with Continue you can use locally deployed models (e.g., via LM Studio) OR use a model hosted in your secure environment, ensuring no data travels outside the predefined perimeter (see the config sketch after this list).
    • Continue + Aider is still 100% local, and both are Apache 2.0 licensed, making them free for commercial use.
  • Tinkering with Coding Assistants - if you want to create your own coding assistant, employ sophisticated prompting techniques, add your own context providers, or experiment with other features, Continue might be the best starting point, with 99% of the work done and the interesting 1% left to you. It took me 10-20 minutes to clone the repo and get it running.
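
As an illustration of the "100% local" point above, this is roughly what pointing Continue at a model served by LM Studio looked like in config.json at the time. Treat the provider name and field values as assumptions from memory (the model name is a placeholder) and check the current docs; LM Studio's OpenAI-compatible local server listens on localhost:1234 by default.

```json
{
  "models": [
    {
      "title": "Local model via LM Studio",
      "provider": "lmstudio",
      "model": "llama-3-8b-instruct",
      "apiBase": "http://localhost:1234/v1"
    }
  ]
}
```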

MORE

Screenshots, notes, observations supporting the above conclusions...

What I Liked

Surprisingly, the assistant turned out to be feature-rich. I expected this completely open-source product to be more of a PoC/hobby project, as many VSCode assistants are, yet it has the most important features a decent alternative to GitHub Copilot needs: a chat window, edit integrations, codebase as context (indexing via embeddings), etc.

  • There are plenty of providers... yet half of them are now disabled on my setup because they didn't work :)

[Screenshot: the list of available providers]

  • Many, many, many options for models: local, remote, open-source, commercial, OpenAI and non-OpenAI, large-slow for chat, small-fast for in-editor autocompletion (pressing TAB)

[Screenshot: model selection options]

  • Not just VSCode - JetBrains IDEs are also supported (although I didn't test that); there seem to be very few options out there for JetBrains

What I Didn't Like

Some issues might already be fixed by the time you read this!

  • The app is sometimes unstable and quirky...

    • Indexing broke for me on macOS,
    • CMD+I doesn't show the inline prompt if the Continue sidebar is hidden,
    • clicking "+ Add more context providers" in the pop-up does nothing.
  • Inline editor (CMD+I/Ctrl+I) is horrible:

    • It can take 5-10 seconds to show up on first use -> it's frustrating to call the inline editor and get nothing. There seems to be a bug with Continue's lazy initialisation: if you open the Continue chat first, the inline editor shows up quickly.
    • The UI/UX is clunky: it uses VSCode's prompt at the top of the window:

[Screenshots: Continue's inline prompt shown at the top of the VSCode window]

  • Compare it to Cursor:

[Screenshots: Cursor's inline edit UI]

  • The inline editor sometimes (1 in 20 times) fails to produce pluggable code - e.g., the model returns several blocks of code with text around them. That is not the case for Cursor; I never saw its inline edits fail like that

[Screenshot: an inline edit that returned non-pluggable code]

  • The editor is also kind of ugly, with tiny, hard-to-see "accept"/"reject" controls -> Cursor is a clear winner here

  • The generated diff tends to replace 100% of the selected code instead of isolating the actual changes

[Screenshot: a diff replacing the entire selection]

  • There are no shortcuts to files in the chat window: if a file is mentioned in the output, you can't click it to open the file in the IDE

  • The internet retrieval provider is broken, i.e., you can't ask the assistant to read a given URL and use that knowledge in the task (e.g., read a HuggingFace dataset description page and filter out certain fields) -> the provider is mentioned in the docs, yet it doesn't work

  • I don't get why, when bringing up the chat window, I always have to manually add the currently open file to the context. I select a code snippet, press the CMD+L keyboard shortcut to bring the snippet into the chat window, and the whole file does not get added to the context. Why not add it automatically, just as other assistants do? It would save time...

  • Minor, yet Azure OpenAI configuration is not done via the UI (unlike other options); I had to search the docs to discover it lives in the JSON config file -> already fixed

  • The overall concept of Models and Providers (2 separate tabs in the UI) is somewhat confusing; when adding a model I was not sure what the difference between the two tabs was, which added even more confusion.

Quick Test of Codebase Comprehension

  • List all files - mostly failed; it didn't list the most important files

[Screenshot: the file-listing result]

  • Summarise them and get to the purpose of the solution - mostly succeeded:

[Screenshot: the codebase summary result]

While the files listed were not the most descriptive or important ones (the codebase indexer missed the core .py files), those identified were enough to draw the right conclusion about the solution. In this exercise I would place it at the level of Cody AI by Sourcegraph and below Cursor.sh.

To be clear, Continue used a local embedding model, which:

  • Is presumably inferior to the hosted solutions used by closed-source AI assistants
  • Can be configured and swapped for something more capable, though I haven't tried that yet (a config sketch follows)
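
If I remember correctly, swapping the embedding model is also a config.json change. The exact key and provider names below are assumptions to verify against the docs, with Ollama serving nomic-embed-text as one possible local option:

```json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```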
