<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anita Ihuman  🌼</title>
    <description>The latest articles on DEV Community by Anita Ihuman  🌼 (@anita_ihuman).</description>
    <link>https://dev.to/anita_ihuman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F392765%2Fd9cd2893-d880-4e85-8271-470eee5e4a06.png</url>
      <title>DEV Community: Anita Ihuman  🌼</title>
      <link>https://dev.to/anita_ihuman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anita_ihuman"/>
    <language>en</language>
    <item>
      <title>How Developers Can Sync Notes and Tasks Between VS Code and Notion</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Wed, 04 Feb 2026 16:10:08 +0000</pubDate>
      <link>https://dev.to/anita_ihuman/how-developers-can-sync-notes-and-tasks-between-vs-code-and-notion-1cdp</link>
      <guid>https://dev.to/anita_ihuman/how-developers-can-sync-notes-and-tasks-between-vs-code-and-notion-1cdp</guid>
      <description>&lt;p&gt;I used to lose tasks in the most embarrassing way.&lt;br&gt;
Not because I didn’t write them down, but because I wrote them everywhere. A quick TODO in a code comment. Short summary in Google Docs. A meeting note in Notion, a follow-up buried in a terminal history. By sprint review, something always slipped through.&lt;/p&gt;

&lt;p&gt;The problem wasn’t discipline. It was fragmentation.&lt;br&gt;
As a developer advocate, I spend a lot of time in VS Code. But my team lives in Notion. Every task that started in my editor had to make a second trip into a Notion board before it became real. That extra step rarely happened immediately, and when it didn’t, context was lost.&lt;/p&gt;

&lt;p&gt;I wanted one thing: to capture notes and tasks where I work but store them in the place where my team collaborates.&lt;/p&gt;

&lt;p&gt;I recently discovered that I can also integrate Notion with &lt;a href="https://www.continue.dev/" rel="noopener noreferrer"&gt;Continuous AI&lt;/a&gt;, and in this article I will show you, step by step, how to set that up.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Cost of Context Switching
&lt;/h2&gt;

&lt;p&gt;Switching between tools feels harmless. But it adds up.&lt;br&gt;
During a typical workday, I would:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write a meeting summary in VS Code&lt;/li&gt;
&lt;li&gt;Switch to Notion&lt;/li&gt;
&lt;li&gt;Recreate tasks manually&lt;/li&gt;
&lt;li&gt;Forget half the nuance I had while writing the note&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sometimes I skipped the Notion step entirely and told myself I’d “add it later.” Later never came.&lt;br&gt;
The result was predictable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Missed follow-ups&lt;/li&gt;
&lt;li&gt;PR reviews delayed by “I thought someone else was handling that.”&lt;/li&gt;
&lt;li&gt;Notes that never turned into action&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I didn’t need another task app. I needed less friction between thinking and tracking.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Continue and Notion Work
&lt;/h2&gt;

&lt;p&gt;As a developer, Continue already lives in my VS Code workflow. I used it to reason through problems and draft documentation. What I didn’t realize at first was that Continue could also talk directly to external tools like Notion.&lt;/p&gt;

&lt;p&gt;With a simple Notion integration and an API key stored in my terminal session, Continue can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create Notion pages&lt;/li&gt;
&lt;li&gt;Insert tasks into databases&lt;/li&gt;
&lt;li&gt;Update statuses, assignees, and due dates&lt;/li&gt;
&lt;li&gt;Do all of it using plain English prompts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No copy-paste. No tab switching. Just streamlined focus.&lt;/p&gt;
&lt;h3&gt;
  
  
  Setting it up
&lt;/h3&gt;

&lt;p&gt;The beautiful thing is that setting it up is straightforward and only needs to happen once.&lt;/p&gt;
&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;p&gt;To integrate Continue and Notion, you will need the following setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.continue.dev/cli/overview" rel="noopener noreferrer"&gt;Continue CLI&lt;/a&gt; installed (&lt;code&gt;npm i -g @continuedev/cli&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://notion.so/" rel="noopener noreferrer"&gt;Notion workspace&lt;/a&gt; with Editor (or higher) access&lt;/li&gt;
&lt;li&gt;Node.js 18+ installed locally&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://hub.continue.dev/" rel="noopener noreferrer"&gt;Continue account&lt;/a&gt; with &lt;a href="https://hub.continue.dev/hub?type=configs" rel="noopener noreferrer"&gt;Hub access&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: Agent usage in Continue requires credits. To use agents, create a &lt;a href="https://hub.continue.dev/settings/api-keys"&gt;Continue API key&lt;/a&gt; and store it as a secret.&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 1: Create a Notion Integration
&lt;/h4&gt;

&lt;p&gt;In your browser, sign in to Notion and go to the Integrations page (Settings → Connections → Develop or Manage Integrations).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd7b29gmoki57p2zluin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd7b29gmoki57p2zluin.png" alt=" " width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click New Integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqr1g5pu5y13hvlklaguk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqr1g5pu5y13hvlklaguk.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give your integration a name (like "Continue Integration"), select your workspace, then click Save. When prompted, open the integration’s configuration page.&lt;br&gt;
Enable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read, Update, and Insert under Content Capabilities&lt;/li&gt;
&lt;li&gt;Read comments under Comment Capabilities.&lt;/li&gt;
&lt;li&gt;Read user information, including email addresses, under User Capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tmyqaa3mzt3rz16xgdr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tmyqaa3mzt3rz16xgdr.png" alt=" " width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure to copy the Internal Integration Secret and store it somewhere safe.&lt;br&gt;
Next, from your Notion homepage, open any page or database you want your integration to access.&lt;/p&gt;

&lt;p&gt;Navigate to the Database page and select the three-dot icon in the top-right corner. Under Connections, search for your specific connection, select it, and click Confirm to complete the integration. This lets your integration read and write to those pages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpwasm2zab7adnv7ci81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpwasm2zab7adnv7ci81.png" alt=" " width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 2: Configure the Notion Key in Your VS Code Environment
&lt;/h4&gt;

&lt;p&gt;Open the terminal in your VS Code and navigate to your project directory. Enter this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
export NOTION_API_KEY="secret_xxx"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace “secret_xxx” with the Internal Integration Secret you copied. This makes the key available in the current terminal session. You can save the key in a &lt;code&gt;.env&lt;/code&gt; file for easy access, but make sure to add &lt;code&gt;.env&lt;/code&gt; to your &lt;code&gt;.gitignore&lt;/code&gt; to keep it private.&lt;/p&gt;
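&lt;p&gt;For reference, the &lt;code&gt;.env&lt;/code&gt; route might look like the sketch below (the file layout and placeholder values are illustrative assumptions, not part of the official setup):&lt;/p&gt;

```shell
# Keep the Notion credentials in a .env file instead of re-exporting each session
cat > .env <<'EOF'
NOTION_API_KEY=secret_xxx
NOTION_DATABASE_ID=your_database_id
EOF

# Make sure the secret never reaches version control
echo ".env" >> .gitignore

# Load the variables into the current shell session
set -a; . ./.env; set +a
```

&lt;p&gt;After sourcing the file, &lt;code&gt;echo $NOTION_API_KEY&lt;/code&gt; should print the placeholder, confirming the session can see it.&lt;/p&gt;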

&lt;p&gt;Next, if you have a specific database (like a “Tasks” list), find its ID by opening the database in Notion and looking at the URL (it’s the long string after the last / and before ?). Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
export NOTION_DATABASE_ID="your_database_id"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells Continue which database to use if a prompt refers to “my database.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Continue will automatically use the NOTION_API_KEY you set when you run it. You don’t need to manually call the Notion API – just reference “the API key stored in this session” in your prompts.&lt;/p&gt;
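&lt;p&gt;If eyeballing the URL for the database ID feels error-prone, a tiny helper can pull it out for you. This is an illustrative sketch (the &lt;code&gt;extractDatabaseId&lt;/code&gt; function and the sample URL are hypothetical); Notion database IDs are 32 hex characters, sometimes written with hyphens:&lt;/p&gt;

```javascript
// Hypothetical helper: pull the 32-hex-character database ID out of a Notion URL.
// Example URL shape: https://www.notion.so/workspace/Tasks-<32 hex chars>?v=<view id>
function extractDatabaseId(url) {
  // Strip hyphens first so dashed IDs match too, then look for a 32-hex-char run
  const match = url.replace(/-/g, "").match(/[0-9a-f]{32}/i);
  return match ? match[0] : null;
}

const url =
  "https://www.notion.so/acme/Tasks-0123456789abcdef0123456789abcdef?v=44f680";
console.log(extractDatabaseId(url)); // → 0123456789abcdef0123456789abcdef
```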

&lt;h4&gt;
  
  
  Step 3: Link and Test Notion in Continue CLI
&lt;/h4&gt;

&lt;p&gt;In the terminal (in your project directory), type &lt;code&gt;cn&lt;/code&gt; and press Enter. This launches Continue in interactive (TUI) mode. You should see a prompt where you can type instructions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwr46i9ffcyqsfh3807e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwr46i9ffcyqsfh3807e.png" alt=" " width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try a prompt, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Fetch my Tasks database using the Notion API key stored in this terminal.
2. Create a new task named "Buy groceries" with status "To Do".
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Press Enter. Continue will list the actions it plans to take (e.g., creating a new database entry) before doing them. Review and approve each action. This will create a new item in your Notion database. &lt;/p&gt;

&lt;p&gt;Also, if you trust your prompt and want to skip confirmations, you can run a one-liner like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cn -p --auto "1. Fetch my Tasks database. 2. Create a new task 'Write report' with status 'In Progress'."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-p&lt;/code&gt; flag skips the interactive UI, and &lt;code&gt;--auto&lt;/code&gt; auto-confirms actions.&lt;br&gt;
After running your prompt, switch to Notion to confirm the changes. The new task(s) should appear in your database. You can update or query entries by running new prompts, e.g., “Find tasks due tomorrow” or “Mark task X as done.” Continue handles the details via the Notion API.&lt;/p&gt;

&lt;p&gt;Congratulations, you can now seamlessly manage your project tasks in Notion through Continue’s CLI, all from within VS Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;At the end of the day, the value of this integration isn’t novelty; it’s alignment. Your thoughts start in the editor, your work lives in Notion, and Continue quietly connects the two without demanding extra effort. By capturing notes and tasks as they’re created, you remove the most significant source of workflow failure: delay. The result is fewer missed follow-ups, clearer ownership, and a system that works with how developers already think and write.&lt;/p&gt;

&lt;p&gt;If you want to go further, Continue supports a wide range of example prompts and workflows you can adapt to your own setup, from task automation to documentation and planning flows. I highly recommend exploring those examples to see what’s possible and to refine prompts that fit your day-to-day work. You can find them in the &lt;a href="https://docs.continue.dev" rel="noopener noreferrer"&gt;Continue documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>documentation</category>
      <category>ai</category>
    </item>
    <item>
      <title>Automating the Boring Parts of Development Using Continuous AI</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Thu, 29 Jan 2026 16:16:47 +0000</pubDate>
      <link>https://dev.to/anita_ihuman/automating-the-boring-parts-of-development-using-continuous-ai-3aep</link>
      <guid>https://dev.to/anita_ihuman/automating-the-boring-parts-of-development-using-continuous-ai-3aep</guid>
      <description>&lt;p&gt;Every team has that one small, annoying chore no one talks about. It doesn’t break anything. It doesn’t threaten deadlines. It just sits there, quietly stealing minutes, breaking flow, and reminding developers that even simple work can feel messy.&lt;/p&gt;

&lt;p&gt;For Maya’s team, that chore was cleaning up log files.&lt;br&gt;
Every morning, someone had to delete yesterday’s debug logs, archive the important ones, and reset the folder so their tooling didn’t choke on old data. It took three to five minutes. Not a catastrophe, but when multiplied across a week and across a team, it became a tiny tax on everyone’s routine.&lt;/p&gt;

&lt;p&gt;One Wednesday morning, after repeating the same set of clicks for the third day in a row, Maya muttered, “There has to be a better way to automate this process and save me time.” &lt;/p&gt;
&lt;h3&gt;
  
  
  The Idea that Snowballed
&lt;/h3&gt;

&lt;p&gt;Later that day, Maya opened her editor, determined to automate the cleanup. She was going to build a helper script that the team could drop into &lt;code&gt;/scripts&lt;/code&gt; and forget about.&lt;br&gt;
She knew exactly what she wanted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete all &lt;code&gt;*.log&lt;/code&gt; files older than 24 hours&lt;/li&gt;
&lt;li&gt;Keep files tagged as important&lt;/li&gt;
&lt;li&gt;Compress the rest into a simple archive&lt;/li&gt;
&lt;li&gt;Run silently unless something fails&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This was the kind of task everyone always says they’ll automate someday but never do. This time, however, Maya decided to use Continuous AI to accomplish it.&lt;/p&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To follow along practically with Maya’s approach, ensure your environment is set up as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.continue.dev/" rel="noopener noreferrer"&gt;Continue extension&lt;/a&gt; installed in your VS Code&lt;/li&gt;
&lt;li&gt;A local or cloud LLM that can handle &lt;a href="https://docs.continue.dev/agents/intro" rel="noopener noreferrer"&gt;agent mode&lt;/a&gt; loaded in your Continue Extension. Check out how to load an &lt;a href="https://dev.to/anita_ihuman/best-offline-ai-coding-assistant-how-to-run-llms-locally-without-internet-2bah"&gt;LLM locally&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Knowledge of JavaScript and NodeJS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don’t have these tools configured or the necessary knowledge yet, that’s fine; you can still follow Maya’s story, which focuses more on the underlying concepts of automation and collaboration than on setup details.&lt;/p&gt;
&lt;h2&gt;
  
  
  Implementing the Solution with Continue.dev
&lt;/h2&gt;
&lt;h4&gt;
  
  
  Step 1: Initializing the Environment
&lt;/h4&gt;

&lt;p&gt;Maya creates a repository and initializes a node project in it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm init --y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then she creates a file named &lt;code&gt;cleanup.js&lt;/code&gt; inside her new repository. This is where Maya’s script will be written.&lt;/p&gt;
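&lt;p&gt;In the terminal, that setup is just two commands (the file name comes from the story above):&lt;/p&gt;

```shell
# Initialize a new Node project; -y accepts all defaults
npm init -y

# Create the file the cleanup script will live in
touch cleanup.js
```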

&lt;h4&gt;
  
  
  Step 2: Utilizing Continue Agent Mode
&lt;/h4&gt;

&lt;p&gt;She hit the Continue.dev extension, picked a model, selected agent mode, and typed:&lt;br&gt;
&lt;code&gt;“Generate a small Node.js script in cleanup.js that cleans the logs, tmp, and .cache folders safely. Add dry-run support and meaningful console output.”&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkcsvb6yzsgqy6o90gw3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkcsvb6yzsgqy6o90gw3.png" alt=" " width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 3: The Generated Logic
&lt;/h4&gt;

&lt;p&gt;Continue responded instantly with a scaffold Maya would normally spend ten minutes writing. Then it added safeguards she hadn’t thought of, such as checking whether the directory exists, handling unreadable files gracefully, and confirming write permissions before attempting deletion.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Script Logic: How it works
&lt;/h2&gt;

&lt;p&gt;The script generated by Continue is small, but it packs a lot of convenience into a few lines. This tiny cleanup tool does one job: remove common clutter folders from your project, things like temporary files, logs, and caches. It runs safely, gives clear feedback, and supports a dry-run mode so you can preview what would happen before deleting anything. Here is a quick explanation of the code:&lt;/p&gt;

&lt;p&gt;a. &lt;strong&gt;It defines what clutter to remove:&lt;/strong&gt; The script starts by listing the folders that usually accumulate junk:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const TARGETS = ["logs", "tmp", ".cache"];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can add or remove directories at any time—this is the “cleanup list.”&lt;/p&gt;

&lt;p&gt;b. &lt;strong&gt;It checks if you want a dry run:&lt;/strong&gt; It looks for the --dry-run flag when you run the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const isDryRun = process.argv.includes("--dry-run");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;If &lt;code&gt;--dry-run&lt;/code&gt; is present, nothing gets deleted.&lt;/li&gt;
&lt;li&gt;Instead, it prints what would have been removed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes the script safer and beginner-friendly.&lt;/p&gt;

&lt;p&gt;c. &lt;strong&gt;It tries to remove each directory safely:&lt;/strong&gt; All deletion logic lives in one helper function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function removeDirectory(dirPath) {
  if (!fs.existsSync(dirPath)) {
    console.log(`⏭️  Skipped: ${dirPath} (not found)`);
    return;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a folder doesn’t exist, the script skips it gracefully without throwing any errors or crashes.&lt;/p&gt;

&lt;p&gt;d. &lt;strong&gt;It respects dry-run mode:&lt;/strong&gt; Before deleting anything, the script checks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (isDryRun) {
  console.log(`💡 DRY RUN: Would remove ${dirPath}`);
  return;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures you're always aware of what will happen.&lt;/p&gt;

&lt;p&gt;e. &lt;strong&gt;It deletes the directory (only when safe)&lt;/strong&gt;: deletion uses Node’s modern removal function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fs.rmSync(dirPath, { recursive: true, force: true });
console.log(`🧹 Removed: ${dirPath}`);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;{ recursive: true }&lt;/code&gt; allows removing folders with content&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;{ force: true }&lt;/code&gt; avoids errors if files are locked or missing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything is handled cleanly with a single command.&lt;/p&gt;

&lt;p&gt;f. &lt;strong&gt;Processes each target folder&lt;/strong&gt;: The main logic loops over every folder in &lt;code&gt;TARGETS&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TARGETS.forEach((target) =&amp;gt; {
  const fullPath = path.join(process.cwd(), target);
  removeDirectory(fullPath);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;process.cwd()&lt;/code&gt; ensures it cleans your current project&lt;/li&gt;
&lt;li&gt;Each folder is resolved, then passed to the cleanup function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This keeps the script small but effective.&lt;/p&gt;

&lt;p&gt;g. &lt;strong&gt;Prints friendly, meaningful feedback:&lt;/strong&gt; Each run starts and ends with simple messages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;console.log(isDryRun ? "🔍 Running in dry-run mode..." : "🧼 Running cleanup...");
console.log("✨ Done!");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Documenting the tool
&lt;/h3&gt;

&lt;p&gt;The script worked perfectly, but Maya wanted the whole team to use it easily. She leveraged Continue's context-awareness to generate a README.md instantly:&lt;br&gt;
&lt;code&gt;“Generate a short README explaining how to run this script.”&lt;/code&gt;&lt;br&gt;
Again, the agent produced a clean, copy-ready document with usage examples, requirements, warnings, and optional flags she could add later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfgv0gg6fpaivvkfkixy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfgv0gg6fpaivvkfkixy.png" alt=" " width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Maya pushed the update and shared the repository link in the team chat.&lt;br&gt;
The script was a small win: it didn’t change the project architecture or optimize performance, but it removed friction and benefited everyone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why These Tiny Wins Matter
&lt;/h3&gt;

&lt;p&gt;In reality, not every improvement has to be a grand rewrite or a flashy migration. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developers spend far less time coding than assumed:&lt;/strong&gt; Multiple industry studies show that only &lt;a href="https://stripe.com/newsroom/stories/developer-coefficient" rel="noopener noreferrer"&gt;20–30% of a developer’s day&lt;/a&gt; is spent writing new code; the majority goes to reviews, tooling friction, debugging, documentation, and coordination.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interruptions are disproportionately expensive:&lt;/strong&gt; Research summarized by Atlassian and Microsoft shows that interruptions &lt;a href="https://www.atlassian.com/blog/productivity/context-switching" rel="noopener noreferrer"&gt;can double the time&lt;/a&gt; it takes developers to complete tasks and increase defect rates. Even the smallest recurring tasks can impose mental overhead, context switching, and re-orienting, which directly &lt;a href="https://eu-opensci.org/index.php/ejedu/article/view/30799" rel="noopener noreferrer"&gt;reduces productivity&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Small time savings compound quickly:&lt;/strong&gt; A 5-minute daily task equals ~2 hours per developer per month. Hence, removing short, recurring tasks &lt;a href="https://blog.continue.dev/continue-cloud-agents-automate-dev-tasks/" rel="noopener noreferrer"&gt;quietly recovers hours of focus&lt;/a&gt; time per engineer over a sprint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lowering the automation barrier changes team culture:&lt;/strong&gt; When automation is easy, developers stop tolerating friction and start fixing it. &lt;a href="https://docs.continue.dev" rel="noopener noreferrer"&gt;Continuous AI&lt;/a&gt; significantly reduces developer effort, enabling teams to automate by default.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
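&lt;p&gt;The compounding claim above is easy to sanity-check (21 workdays per month is an assumed average):&lt;/p&gt;

```javascript
// Back-of-the-envelope cost of a small recurring chore
const minutesPerDay = 5;
const workdaysPerMonth = 21; // assumed average
const hoursPerMonth = (minutesPerDay * workdaysPerMonth) / 60;
console.log(hoursPerMonth); // → 1.75 hours per developer per month
```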

&lt;h2&gt;
  
  
  In Summary
&lt;/h2&gt;

&lt;p&gt;While this guide is written using a developer persona to illustrate the workflow, every step described reflects how I personally use Continue to automate repetitive development tasks and improve productivity.&lt;/p&gt;

&lt;p&gt;A script that takes a couple of minutes to put together can save hours over time. That’s often how automation starts: small, practical helpers that quietly reduce friction. The more you practice noticing these moments and turning them into scripts, the sharper that skill becomes. Tools like &lt;a href="https://www.continue.dev/" rel="noopener noreferrer"&gt;Continue&lt;/a&gt; fit naturally into that workflow, helping you create and share as you go.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Better Docs, Less Effort: Using The Continue MCP Cookbook</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Mon, 12 Jan 2026 11:19:27 +0000</pubDate>
      <link>https://dev.to/anita_ihuman/better-docs-less-effort-using-the-continue-mcp-cookbook-2e60</link>
      <guid>https://dev.to/anita_ihuman/better-docs-less-effort-using-the-continue-mcp-cookbook-2e60</guid>
      <description>&lt;p&gt;Smarter Docs, Less Effort: The Continue MCP Cookbook Approach&lt;br&gt;
Documentation work rarely fails because teams don’t know what to write. It slows down because of how it’s written and updated.&lt;/p&gt;

&lt;p&gt;For technical writers, documentation leads, and developers alike, documentation often becomes an exercise in repetition: copying old guides, aligning structures, fixing formatting drift, and reconciling style inconsistencies across multiple docs. None of this work improves clarity for users, yet it consumes a disproportionate amount of time.&lt;/p&gt;

&lt;p&gt;Similar to writing code, documentation often feels like a friction point among teams, especially when different developers work on the project. Common mistakes like slightly different structures, missing steps, and formatting that never quite match are bound to occur. &lt;/p&gt;

&lt;p&gt;The Continue Docs MCP (Model Context Protocol) Cookbook changes this dynamic. By leveraging AI that is natively aware of your documentation’s structure and style, you can shift from manually rebuilding docs to generating them from your existing patterns.&lt;/p&gt;
&lt;h1&gt;
  
  
  The Problem: A Manual, Disconnected Workflow
&lt;/h1&gt;

&lt;p&gt;Up close, most traditional documentation processes look predictably frustrating, with writers having to deal with:&lt;/p&gt;
&lt;h4&gt;
  
  
  Rebuilding the same structure each time:
&lt;/h4&gt;

&lt;p&gt;Most technical guides follow predictable patterns (What you’ll build → prerequisites →  setup → step-by-step workflow → examples → troubleshooting). Yet each new guide often starts from a blank page. Writers recreate the same headings, rewrite similar explanations, and manually ensure alignment with previous documents. Over time, even the tiniest inconsistencies in these guides compound into a fragmented documentation experience.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yjkr0922yhymbk31jwm.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yjkr0922yhymbk31jwm.jpeg" alt=" " width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Switching between too many tools.
&lt;/h4&gt;

&lt;p&gt;Documentation work spans multiple tools: a code editor for examples, a browser tab for the existing docs, another for CLI references, and another still for Markdown conventions. The challenge is not writing itself but maintaining context across tools that don’t communicate with each other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1t6l6gnavd8szu4uiyz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1t6l6gnavd8szu4uiyz.jpeg" alt=" " width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  No shared style or prompt.
&lt;/h4&gt;

&lt;p&gt;Every contributor brought their own tone, their own interpretation of what a good guide should look like. Reviews became less about improving content and more about reconciling mismatched voices. Editing felt like piecing together a story written by several different authors who’d never met.&lt;/p&gt;

&lt;p&gt;Traditional product documentation is often bogged down by rigid workflows, forcing teams to spend extra hours manually correcting the structural and style inconsistencies that inevitably creep in.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48i416m1gy0gamaoqi1u.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48i416m1gy0gamaoqi1u.jpeg" alt=" " width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Using The Continue Docs MCP Cookbook
&lt;/h2&gt;

&lt;p&gt;The Continue Docs MCP Cookbook introduces a different flow, an AI-powered Model Context Protocol (MCP) server based on Mintlify. It allows your AI assistant (via the Continue IDE extension) to read your existing documentation and use it as a live reference for everything it generates.&lt;/p&gt;

&lt;p&gt;Instead of a generic LLM guessing what your docs should look like, the MCP server provides the assistant with the exact patterns, Markdown formats, and organizational structures used in your specific repository.&lt;br&gt;
The promise is to help developers and technical writers easily update their project documentation. Instead of rebuilding the same frameworks repeatedly, they can ask the system to apply known patterns automatically.&lt;/p&gt;
&lt;h2&gt;
  
  
  Integration: Bringing It Into the Workflow
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Prerequisites Setup
&lt;/h3&gt;

&lt;p&gt;To practically follow this guide, you would need the &lt;a href="https://docs.continue.dev/guides/continue-docs-mcp-cookbook#%E2%9A%A1-quick-start-recommended:~:text=Before%20starting%2C%20ensure,Continue%20CLI%20installed" rel="noopener noreferrer"&gt;following setup&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continue account with Hub access&lt;/li&gt;
&lt;li&gt;Read &lt;a href="https://docs.continue.dev/CONTRIBUTING" rel="noopener noreferrer"&gt;Contributing to Continue Documentation&lt;/a&gt; for setup instructions (the rest of this article assumes you have, so start there first)&lt;/li&gt;
&lt;li&gt;Forked the &lt;a href="https://github.com/continuedev/continue" rel="noopener noreferrer"&gt;continuedev/continue&lt;/a&gt; repository&lt;/li&gt;
&lt;li&gt;Node.js 20+ and &lt;a href="https://docs.continue.dev/cli/overview" rel="noopener noreferrer"&gt;Continue CLI&lt;/a&gt; installed&lt;/li&gt;
&lt;/ul&gt;
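&lt;p&gt;Before going further, it can help to confirm the toolchain is actually in place. Here is a minimal sanity check, assuming only that &lt;code&gt;node&lt;/code&gt; and the Continue CLI (&lt;code&gt;cn&lt;/code&gt;) should be on your PATH; the helper names are my own, not part of the cookbook:&lt;br&gt;&lt;/p&gt;

```python
import re
import shutil
import subprocess

def node_major_version(version_output: str) -> int:
    """Parse the major version from `node --version` output like 'v20.11.1'."""
    match = re.match(r"v?(\d+)", version_output.strip())
    if not match:
        raise ValueError(f"unexpected version string: {version_output!r}")
    return int(match.group(1))

def check_prerequisites() -> None:
    # The cookbook workflow assumes Node.js 20+ and the Continue CLI (`cn`).
    if shutil.which("node") is None:
        raise SystemExit("Node.js not found; install Node.js 20+ first")
    out = subprocess.run(["node", "--version"], capture_output=True, text=True).stdout
    if node_major_version(out) < 20:
        raise SystemExit(f"Node.js 20+ required, found {out.strip()}")
    if shutil.which("cn") is None:
        raise SystemExit("Continue CLI not found; run `npm i -g @continuedev/cli`")
```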
&lt;h3&gt;
  
  
  Creating a new Cookbook
&lt;/h3&gt;

&lt;p&gt;Every cookbook in the Continue repository starts as a plain Markdown or MDX file in &lt;code&gt;docs/guides/&lt;/code&gt;. To add a new cookbook, open that folder and create a new Markdown file inside (use lowercase and hyphens).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
deployment-cookbook.mdx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the required YAML frontmatter. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
title: "Your Cookbook Title"
description: "A short description of what the guide covers."
---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is how &lt;a href="https://www.mintlify.com/" rel="noopener noreferrer"&gt;Mintlify renders&lt;/a&gt; titles, previews, and metadata.&lt;/p&gt;

&lt;p&gt;You can write your guide using the standard cookbook pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* What You’ll Build
* Prerequisites
* Setup
* Steps or Workflow
* Example Commands / Code
* Troubleshooting
* Summary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
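&lt;p&gt;Put together, a new cookbook file following that pattern might start out like this (the title, description, and section order here are placeholders, not the official template):&lt;br&gt;&lt;/p&gt;

```mdx
---
title: "Deployment Cookbook"
description: "Deploy a sample service step by step with Continue."
---

## What You’ll Build

## Prerequisites

## Setup

## Steps or Workflow

## Example Commands / Code

## Troubleshooting

## Summary
```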



&lt;p&gt;The moment you save that file, Continue’s MCP agent can read it.&lt;br&gt;
Now you can let the MCP agent automatically use your cookbook. Here is where the magic happens. Once a cookbook exists, Continue can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pull its structure on demand&lt;/li&gt;
&lt;li&gt;reuse its headings&lt;/li&gt;
&lt;li&gt;mirror its tone&lt;/li&gt;
&lt;li&gt;auto-fill its boilerplate&lt;/li&gt;
&lt;li&gt;extract code patterns or examples&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So instead of rebuilding a template, you start the agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# From your Continue docs directory

cn --config continuedev/docs-mintlify
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Type in the prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;“Generate a new guide using the same structure as deployment-cookbook.mdx.”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instantly, you get a fully structured skeleton: consistent, clean, and aligned with your style guide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr3tajnlni89rxm0umka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr3tajnlni89rxm0umka.png" alt=" " width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check out the &lt;a href="https://docs.continue.dev/guides/continue-docs-mcp-cookbook#documentation-workflows-with-prompts" rel="noopener noreferrer"&gt;Continue guides documentation&lt;/a&gt; for other &lt;a href="https://docs.continue.dev/guides/continue-docs-mcp-cookbook#documentation-workflows-with-prompts" rel="noopener noreferrer"&gt;documentation workflows&lt;/a&gt; with prompts, such as researching and comparing existing docs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Results: What Changes in Practice
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Faster, consistent documentation:&lt;/strong&gt; This cookbook enforces structure automatically. The time that once went to rebuilding templates can go into writing and improving the actual content. The Continue blog captured it best: “&lt;a href="https://blog.continue.dev/from-clever-solutions-to-user-centered-ai-a-documentation-journey/#:~:text=The%20AI%20approach%20is%20technically%20simpler%20but%20dramatically%20more%20effective%20because%20it%20works%20with%20how%20humans%20actually%20think%20and%20create." rel="noopener noreferrer"&gt;technically simpler but dramatically more effective&lt;/a&gt;.”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lighter reviews:&lt;/strong&gt; Because the AI handles the boilerplate and formatting, Doc Leads can focus their reviews on technical accuracy and high-level clarity rather than fixing broken Markdown links.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No more context switching:&lt;/strong&gt; The MCP agent consolidates document retrieval, code sourcing, and pattern insertion into a single conversational workflow, eliminating the friction of switching between disconnected tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lighter onboarding:&lt;/strong&gt; New contributors can bypass dense style guides by focusing on technical intent. The agent automatically applies the correct structural and stylistic standards based on simple prompts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Higher-quality assurance:&lt;/strong&gt; The system enforces consistent sectioning and updates references automatically, ensuring all output aligns with project standards without manual policing during review.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real time saved:&lt;/strong&gt; The initial prompt already delivered half the content skeleton. After that, the work feels more like shaping than assembling.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Takeaways for Any Developer
&lt;/h2&gt;

&lt;p&gt;The Continue Docs MCP Cookbook isn't a replacement for the technical writing aspect of documentation. Instead, it replaces the manual labor and repetitive overhead that slows documentation down while ensuring consistency.&lt;br&gt;
Your technical expertise that answers the "why" behind a feature remains the most important element. Think of the MCP as the "customizable rules" and you, the writer, as the "navigator." The system supplies the structural guardrails, while you provide the creative spark and technical nuance.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Next?
&lt;/h2&gt;

&lt;p&gt;The Continue Docs MCP Cookbook streamlines the documentation process and helps maintain consistency across your documents. By handling repetitive structure and pulling in consistent examples, the MCP shifts the work from rebuilding templates to focusing on clear, meaningful explanations.&lt;/p&gt;

&lt;p&gt;Also, if you’ve ever wanted to &lt;a href="https://docs.continue.dev/CONTRIBUTING" rel="noopener noreferrer"&gt;contribute to Continue’s docs&lt;/a&gt;, now’s the easiest time to start. With the MCP Cookbook doing the heavy lifting, all you need is a prompt and your perspective. Your examples and improvements can directly shape the ecosystem, so contribute, experiment, and help make the Continue docs better for everyone.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>documentation</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Using GitHub MCP With Continue to Review PRs and Issues 5x Faster</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Fri, 28 Nov 2025 08:08:40 +0000</pubDate>
      <link>https://dev.to/anita_ihuman/using-github-mcp-with-continue-to-review-prs-and-issues-5x-faster-1p8a</link>
      <guid>https://dev.to/anita_ihuman/using-github-mcp-with-continue-to-review-prs-and-issues-5x-faster-1p8a</guid>
      <description>&lt;p&gt;As developers embrace sophisticated AI assistants like Continuous AI, the barrier between context, collaboration, and code generation is dissolving. But how do you bridge the gap between an intelligent AI in your IDE and the complex, living ecosystem of your codebase, which often resides on GitHub?&lt;/p&gt;

&lt;p&gt;In private repositories, you might ship a new feature in minutes, but you still have to write a detailed PR description, link related issues, and hope a busy maintainer gets to it soon. In open source, the challenge is even heavier. Maintainers are flooded with pull requests, each requiring mental context switching, security scrutiny, and style-guide checks—all of which &lt;a href="https://www.cnbc.com/2025/09/20/working-smarter-not-harder-how-ai-could-help-fight-burnout-.html" rel="noopener noreferrer"&gt;contribute to the burnout&lt;/a&gt; that many in the community report. However, what if you could use your AI assistant to understand the entire conversation happening on your GitHub repository from your IDE? &lt;/p&gt;

&lt;p&gt;In this article, we are going to integrate Continuous AI directly with GitHub using the Model Context Protocol (MCP) to intelligently triage issues, suggest PR fixes, and understand your project's history from VS Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the GitHub MCP Does (and Why It Matters)
&lt;/h2&gt;

&lt;p&gt;The Model Context Protocol (MCP) is an open standard that allows AI tools to interact securely and contextually with external systems. Since GitHub introduced its &lt;a href="https://github.blog/ai-and-ml/generative-ai/a-practical-guide-on-how-to-use-the-github-mcp-server/" rel="noopener noreferrer"&gt;MCP server&lt;/a&gt;, the way AI tools interact with repositories has quietly changed. It’s like giving your AI assistant the ability to understand your repo’s context, read issues and pull requests, and even help you generate responses or code suggestions with full context instead of generic guesses.&lt;/p&gt;

&lt;p&gt;When you &lt;a href="https://docs.continue.dev/guides/github-mcp-continue-cookbook#%E2%9A%A1-quick-start-recommended" rel="noopener noreferrer"&gt;integrate this MCP&lt;/a&gt; with an assistant coding tool like Continuous AI, your IDE gains awareness of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The current branch, commits, and recent pull requests&lt;/li&gt;
&lt;li&gt;Open issues with labels, authors, and comments&lt;/li&gt;
&lt;li&gt;Code diffs, line-by-line changes, and linked discussions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisite:
&lt;/h3&gt;

&lt;p&gt;Before you dive in, make sure you have the &lt;a href="https://docs.continue.dev/guides/github-mcp-continue-cookbook#" rel="noopener noreferrer"&gt;following setup&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the extension. Ensure you have the &lt;a href="https://docs.continue.dev/" rel="noopener noreferrer"&gt;Continue extension&lt;/a&gt; installed in your preferred IDE (VS Code or JetBrains).&lt;/li&gt;
&lt;li&gt;Install Tools &amp;amp; Authenticate: Install the Continue CLI (&lt;code&gt;npm i -g @continuedev/cli&lt;/code&gt;) and the GitHub CLI (gh). Authenticate the GitHub CLI by running &lt;code&gt;gh auth login&lt;/code&gt; in your terminal.&lt;/li&gt;
&lt;li&gt;A GitHub account and a repository to work with&lt;/li&gt;
&lt;li&gt;Generate a GitHub token: Create a GitHub Personal Access Token (PAT) with the necessary scopes (at least read access to your repositories).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Connecting Continue.dev to GitHub
&lt;/h2&gt;

&lt;p&gt;To make your Continue assistant talk to your GitHub repository, you need to register the GitHub MCP server. This is a one-time setup.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install GitHub MCP via Continue Mission Control:&lt;/strong&gt; You can install the pre-built GitHub agent by going to the &lt;a href="https://hub.continue.dev/continuedev/github-project-manager-agent" rel="noopener noreferrer"&gt;GitHub Project Manager&lt;/a&gt; Agent page within the Continue Mission Control interface and clicking the Install button. The listing adds a pre-configured MCP block to your agent.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set the Secret:&lt;/strong&gt; Head over to the settings in the Continue Hub and&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide your GITHUB_TOKEN to Continue Mission Control, or set it in your shell environment.&lt;/li&gt;
&lt;li&gt;You will also need to provide an Anthropic API key. So navigate to the &lt;a href="https://console.anthropic.com/settings/billing" rel="noopener noreferrer"&gt;console and create&lt;/a&gt; one (subscription is required). 
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ja2f5w892on83tp23d3.png" alt=" " width="800" height="281"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Select Your Config&lt;/strong&gt;: Open the Continue panel in your IDE and click the config selector. You can create a new configuration file in your project's directory.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hkftbk6ahdklvqf0hmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hkftbk6ahdklvqf0hmw.png" alt=" " width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual Config (e.g., in &lt;code&gt;.continue/config.yaml&lt;/code&gt;): You need to define a custom agent that includes the GitHub MCP as a tool that will handle the underlying connection. Use the following sample code and update it with your API keys and token.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This is an example configuration file
# To learn more, see the full config.yaml reference: https://docs.continue.dev/reference
name: Example Config
version: 1.0.0
schema: v1
# Define which models can be used
# https://docs.continue.dev/customization/models
models:
 - name: my gpt-5
   provider: openai
   model: gpt-5
   apiKey: YOUR_OPENAI_API_KEY_HERE
 - uses: ollama/qwen2.5-coder-7b
 - uses: anthropic/claude-4-sonnet
   with:
     ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
# MCP Servers that Continue can access
# https://docs.continue.dev/customization/mcp-tools
mcpServers:
 - uses: anthropic/memory-mcp
   with:
     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
     # OPTIONAL: You may need a host if using GitHub Enterprise
     # GITHUB_HOST: ${{ secrets.GITHUB_HOST }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. &lt;strong&gt;Sync and Code:&lt;/strong&gt; Click “&lt;strong&gt;Reload config&lt;/strong&gt;” to pull your latest settings.&lt;br&gt;
5. &lt;strong&gt;Launch the Agent&lt;/strong&gt;: From your repo root, run the following command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cn --agent continuedev/github-project-manager-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, when you enter a prompt like this: “List my open issues labeled bug and summarize priorities,” you should be able to see the list of available issues in the repository, along with a summary of the issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjn10ietkj1fmr387og1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjn10ietkj1fmr387og1.png" alt=" " width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: The Developer Flow - Creating Issues and Summaries
&lt;/h2&gt;

&lt;p&gt;As a developer, you can use the integrated agent to handle post-coding administrative tasks without leaving your code editor.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case 1: Creating a Post-Development Issue
&lt;/h3&gt;

&lt;p&gt;You finished a feature, but you noticed a low-priority item that needs to be addressed later (e.g., "The API response mapping needs type enforcement").&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt in Continue Chat:&lt;/strong&gt;
"Create a new GitHub issue titled &lt;em&gt;'Refactor: Add type enforcement to /api/v1/user response mapping.'&lt;/em&gt; Assign it the label 'enhancement' and add a brief description explaining that the current implementation is brittle."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2enny9fa8yopqdd8nxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2enny9fa8yopqdd8nxy.png" alt=" " width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agent Action:&lt;/strong&gt; The agent uses the GitHub MCP's create_issue tool. It prompts you for approval, then executes the action, returning the link to the new issue: 
“Successfully created Issue #3: Refactor: Add type enforcement... (&lt;a href="https://github.com/Anita-ihuman/Woo-Landing-Template-/issues/3" rel="noopener noreferrer"&gt;https://github.com/Anita-ihuman/Woo-Landing-Template-/issues/3&lt;/a&gt; )”&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use Case 2: Summarizing Code for the Commit Message/PR Description
&lt;/h3&gt;

&lt;p&gt;You’ve made a large refactor. You don't want to manually scan the diff for the commit message.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt in Continue Chat (after staging changes)&lt;/strong&gt;:
"Analyze all staged files. Summarize the changes in a concise, bulleted list that could serve as the body of my Pull Request description. Focus on the 'why' and the new architecture."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko4y75trm7qkm4zgg5uj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko4y75trm7qkm4zgg5uj.png" alt=" " width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agent Action:&lt;/strong&gt; I didn’t have a PR or recent updates in this repo, but ideally the agent uses the diff resource (from the local workspace context) and the GitHub MCP's file context for comparison. It then returns a clean summary ready to copy-paste into your git commit or PR template. This is a massive time-saver for detailed documentation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 3: The Maintainer Flow - Reviewing PRs and Triaging Backlogs
&lt;/h2&gt;

&lt;p&gt;As a maintainer of an open source project, you can automate the initial screening and feedback loop for pull requests submitted to the project repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case 3: Automated Initial PR Review
&lt;/h3&gt;

&lt;p&gt;For example, let's say a new contributor submits a PR. Before you read a single line of code, you want a quick technical assessment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt in Continue Chat:&lt;/strong&gt;
"Review Pull Request #5 on the main branch. Check for obvious security flaws (e.g., using eval() or un-sanitized input) and summarize if it adheres to our project's max-line-length rule. Post the summary as a single comment on the PR."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent Action:&lt;/strong&gt;

&lt;ol&gt;
&lt;li&gt;The agent uses the GitHub MCP's get_pull_request_diff tool to pull the content.&lt;/li&gt;
&lt;li&gt;It applies specialized Rules (custom functions defined in Continue) to scan the code for the requested patterns.&lt;/li&gt;
&lt;li&gt;It uses the GitHub MCP's add_comment tool to post the findings, and you should get something like this: “AI Review: No immediate security flaws detected. However, three files exceed the 80-character line limit. Please address these before human review.”&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use Case 4: Mass Backlog Triage
&lt;/h3&gt;

&lt;p&gt;If you need to clean up a stale issue backlog by prioritizing or closing old items, you can also use the GitHub MCP we configured to handle this.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt in Continue Chat&lt;/strong&gt;:
"List all open issues older than 90 days with no activity in the last 30 days. For each, draft a comment asking, 'Is this still a relevant issue? If no response in 7 days, we will close it.' Only draft the comments, do not post them automatically."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent Action:&lt;/strong&gt; The agent will query GitHub MCP's get_issues tool with the specified filters (age, activity). It compiles a list and provides the draft comments for you to review and approve one by one before they are posted to GitHub.&lt;/li&gt;
&lt;/ul&gt;
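&lt;p&gt;The staleness filter in that prompt reduces to two date comparisons. Here is a minimal sketch of the logic, using hypothetical issue dictionaries rather than the MCP's actual data model:&lt;br&gt;&lt;/p&gt;

```python
from datetime import datetime, timedelta

def is_stale(issue: dict, now: datetime,
             min_age_days: int = 90, idle_days: int = 30) -> bool:
    """An issue is stale if it was opened more than `min_age_days` ago
    and has had no activity in the last `idle_days`."""
    created = datetime.fromisoformat(issue["created_at"])
    updated = datetime.fromisoformat(issue["updated_at"])
    return (now - created > timedelta(days=min_age_days)
            and now - updated > timedelta(days=idle_days))

def draft_stale_comments(issues: list, now: datetime) -> dict:
    # Draft only: a human reviews and approves each comment before posting,
    # exactly as the prompt requests.
    message = "Is this still a relevant issue? If no response in 7 days, we will close it."
    return {i["number"]: message for i in issues if is_stale(i, now)}
```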

&lt;h2&gt;
  
  
  Why Does This Matter?
&lt;/h2&gt;

&lt;p&gt;According to &lt;a href="https://github.blog/news-insights/research/survey-ai-wave-grows/" rel="noopener noreferrer"&gt;GitHub’s 2024 Octoverse Report&lt;/a&gt;, developers spend nearly 40% of their coding time navigating, reviewing, or refactoring existing code. Tools like Continue use these MCPs to reduce that friction by merging context, intent, and automation.&lt;/p&gt;

&lt;p&gt;For open source maintainers, this means fewer cognitive jumps between contexts, and for developers, it means faster reviews and higher-quality contributions. And for teams, it’s a subtle but meaningful shift toward flow-based collaboration, where context lives where you work, not in the dozen browser tabs you left open.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Best Offline AI Coding Assistant: How to Run LLMs Locally Without Internet</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Tue, 25 Nov 2025 13:31:37 +0000</pubDate>
      <link>https://dev.to/anita_ihuman/best-offline-ai-coding-assistant-how-to-run-llms-locally-without-internet-2bah</link>
      <guid>https://dev.to/anita_ihuman/best-offline-ai-coding-assistant-how-to-run-llms-locally-without-internet-2bah</guid>
      <description>&lt;p&gt;As developers, we’ve all felt that shift, AI coding assistants quietly becoming part of our workflow, helping us write cleaner code, understand new codebases faster, and move from idea to implementation with less friction. They’ve gone from “nice to have” to tools we rely on daily. But the moment the internet drops, everything stops. &lt;/p&gt;

&lt;p&gt;This led me to explore options that work whether I have internet access or not. It turns out that the &lt;a href="https://docs.continue.dev/customize/model-roles/chat#local-offline-experience" rel="noopener noreferrer"&gt;Continuous AI extension&lt;/a&gt; also supports this capability. Instead of depending on the cloud, it lets me load local models downloaded with tools like Ollama, LM Studio, or Hugging Face and use them directly inside VS Code. Now I can access my coding assistant offline, with features like chat, autocomplete, and agents available without interruptions.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;Offline coding assistants like Continue.dev let you run large language models directly on your machine, giving you cloud-level intelligence without an internet connection. Instead of sending your code and prompts to a remote API, they pull the models to your device and handle all inference locally. Here’s how the process works under the hood:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Download the model locally&lt;/strong&gt; — The assistant connects to a model source such as Hugging Face or Ollama, then downloads a compatible model (often quantized) to your machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load the model into a local runtime&lt;/strong&gt; — It uses an inference engine such as GGML, llama.cpp, or GPU-accelerated backends to load the model into your RAM or VRAM.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run all inference on-device&lt;/strong&gt; — When you type a prompt, Continue.dev sends it to the local runtime instead of the cloud. Your CPU or GPU handles all the computation needed to generate the next tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stream results back to your editor&lt;/strong&gt; — The model’s responses are streamed directly to Continue.dev inside your IDE, with no network delay or external data transfer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stay fully offline&lt;/strong&gt; — Because everything happens on your system, from prompt to output, you can keep coding even when offline, with full privacy and consistent performance.&lt;/li&gt;
&lt;/ol&gt;
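&lt;p&gt;In practice, runtimes like Ollama expose this local loop over an HTTP API on &lt;code&gt;localhost:11434&lt;/code&gt;. Here is a small sketch of what a single non-streaming request looks like, assuming Ollama is installed and a model has already been pulled:&lt;br&gt;&lt;/p&gt;

```python
import json
from urllib import error, request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    # The prompt never leaves your machine: it goes to the local runtime,
    # which loads the model into RAM/VRAM and generates tokens on-device.
    return {"model": model, "prompt": prompt, "stream": False}

def generate_locally(model: str, prompt: str) -> str:
    payload = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(OLLAMA_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    try:
        with request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]
    except error.URLError:
        return "Ollama is not running locally; start it and pull a model first."
```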

&lt;h2&gt;
  
  
  Downloading LLMs with Ollama
&lt;/h2&gt;

&lt;p&gt;While Continue.dev lets you use local LLMs directly inside your editor, tools like Ollama are where you actually download and run the models themselves. Ollama is an open-source platform for running large language models locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Ollama
&lt;/h3&gt;

&lt;p&gt;For Windows and Mac, go to the &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;Ollama download page&lt;/a&gt; and click Download. Once the download finishes, open the file and click Install.&lt;br&gt;
For Linux, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://ollama.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installation, the Ollama GUI should open.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtv303elg59v3llbv4bm.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtv303elg59v3llbv4bm.jpeg" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Running an LLM
&lt;/h3&gt;

&lt;p&gt;You can download and run a wide variety of LLMs with Ollama. To run an LLM, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama run "model name"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama run deepseek-r1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the download is complete, your LLM should be running, and you can interact with it directly on your terminal.&lt;br&gt;
You can also open the GUI to confirm your downloaded model in the dropdown and interact with it directly without the need for an internet connection.&lt;/p&gt;

&lt;p&gt;Check out the &lt;a href="https://github.com/ollama/ollama" rel="noopener noreferrer"&gt;Ollama repository&lt;/a&gt; for more models, information, and commands.&lt;/p&gt;
&lt;h3&gt;
  
  
  Use your LLM in VS Code with Continue.dev
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Install the Continue extension&lt;/strong&gt;&lt;br&gt;
Open VS Code, go to the Extensions tab, and type “Continue” in the search bar. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcw5mcwi6u0tmegzorx7.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcw5mcwi6u0tmegzorx7.jpeg" alt=" " width="695" height="1020"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Install. After installing, the Continue extension icon should appear. Click it to open the extension window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flynngyonqq6mbzhvold1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flynngyonqq6mbzhvold1.gif" alt=" " width="574" height="304"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Pull/Download a Model
&lt;/h3&gt;

&lt;p&gt;For this article, we are using two lightweight models recommended by Continue.dev that support chat and autocomplete modes. Run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama run qwen2.5-coder:3b
ollama run qwen2.5-coder:1.5b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run Configs
&lt;/h3&gt;

&lt;p&gt;In your Continue extension window, click the settings icon, navigate to Configs, and click Local Config.&lt;/p&gt;

&lt;p&gt;A config.yaml file will show up. Populate it with this configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Local Assistant
version: 1.0.0
schema: v1

models:
 - name: Qwen2.5-Coder 3B
   provider: ollama
   model: qwen2.5-coder:3b
   roles:
     - chat
     - edit
     - apply

 - name: Qwen2.5-Coder 1.5B (Autocomplete)
   provider: ollama
   model: qwen2.5-coder:1.5b
   roles:
     - autocomplete

 - name: Autodetect
   provider: ollama
   model: AUTODETECT

context:
 - provider: code
 - provider: docs
 - provider: diff
 - provider: terminal
 - provider: problems
 - provider: folder
 - provider: codebase
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These configurations define which models can be used and how. Check out the &lt;a href="https://docs.continue.dev/reference" rel="noopener noreferrer"&gt;Continue reference docs&lt;/a&gt; to learn more about configurations.&lt;/p&gt;
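&lt;p&gt;To confirm the models named in &lt;code&gt;config.yaml&lt;/code&gt; are actually available, you can query Ollama's local &lt;code&gt;/api/tags&lt;/code&gt; endpoint, which lists every model it has pulled. A small sketch (the helper names are my own):&lt;br&gt;&lt;/p&gt;

```python
import json
from urllib import error, request

def parse_installed_models(tags_json: str) -> list:
    """Extract model names from Ollama's /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def check_config_models(required):
    # /api/tags lists every model Ollama has pulled locally.
    try:
        with request.urlopen("http://localhost:11434/api/tags") as resp:
            installed = parse_installed_models(resp.read().decode())
    except error.URLError:
        print("Ollama is not running; start it before using Continue offline.")
        return
    for name in required:
        status = "ok" if name in installed else "missing (pull it with `ollama run`)"
        print(f"{name}: {status}")
```

&lt;p&gt;Running &lt;code&gt;check_config_models(["qwen2.5-coder:3b", "qwen2.5-coder:1.5b"])&lt;/code&gt; flags anything the config references but Ollama hasn't pulled yet.&lt;/p&gt;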

&lt;p&gt;Voila! You are done. You can now interact with your local LLM directly in your code editor without the need for the Internet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frl4jq7t7q9sews7axxtd.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frl4jq7t7q9sews7axxtd.jpeg" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of using Local LLMs
&lt;/h2&gt;

&lt;p&gt;Aside from running without an internet connection, tools like the Continue extension offer developers several other advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Privacy and data control&lt;/strong&gt; — With Continue.dev running models locally, your code, logs, and prompts never leave your machine. It communicates directly with your on-device model runtime, so nothing is sent to external servers. This gives you full control over sensitive repositories, internal APIs, and proprietary logic without cloud-related security risks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customizability&lt;/strong&gt; — Continue.dev lets you choose exactly which local model you want to run: Llama, Mistral, Codellama, Qwen, and more. You can swap models per project, adjust context lengths, tweak inference settings, or even fine-tune your own model and point Continue.dev to it. It adapts to your hardware and workflow instead of locking you into a single provider.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictable performance&lt;/strong&gt; — Because Continue.dev operates through a local inference engine, your response time depends only on your machine, not on server load or network stability. There’s no throttling, rate-limiting, or unpredictable latency. Whether you’re refactoring a file or generating new components, the speed stays consistent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost efficiency&lt;/strong&gt; — Once you download the model and set it up with Continue.dev, you avoid ongoing API charges entirely. There are no per-token costs, monthly bills, or usage caps. You get unlimited inference, unlimited tokens, and full autonomy without burning through credits or subscriptions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In essence, local LLMs paired with the Continue extension bring control, reliability, and security directly into your code editor. Whether you’re dealing with unstable or no internet, sensitive code, or the need for faster, more consistent performance, running models on your own machine gives you an edge.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;Offline coding assistance is here, and it continues to evolve: tools like Continue and Ollama let developers run large language models locally in their environments, without relying on the internet.&lt;br&gt;
With tools like Continue, setting up local LLM support in your code editor is easier than ever. So play around with it, run more models, sharpen your skills, and enjoy AI coding assistance even without an internet connection.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to Prompt AI Coding Assistants for 90% Accurate Code: The Ultimate Developer Guide</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Fri, 14 Nov 2025 13:50:25 +0000</pubDate>
      <link>https://dev.to/anita_ihuman/the-ideal-way-to-prompt-your-ai-coding-assistant-for-90-accuracy-1f09</link>
      <guid>https://dev.to/anita_ihuman/the-ideal-way-to-prompt-your-ai-coding-assistant-for-90-accuracy-1f09</guid>
      <description>&lt;p&gt;I just needed to complete a simple task. Create a basic Express.js app, nothing fancy, just a small REST API scaffold for a project demo. So I did what any modern developer would:&lt;br&gt;
I opened &lt;a href="https://www.continue.dev/" rel="noopener noreferrer"&gt;Continue&lt;/a&gt;, my AI coding assistant, inside VS Code, and typed:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create an Express app.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Seconds later, a few lines of boilerplate code appeared like magic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');

const app = express();


// Start server
app.listen(4000, () =&amp;gt; {
  console.log('Server running on http://localhost:4000');
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwqkxi2q3l383ypfegh4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwqkxi2q3l383ypfegh4.png" alt=" " width="800" height="657"&gt;&lt;/a&gt;&lt;br&gt;
Everything looked familiar until I ran it.&lt;br&gt;
Then Boom, Error!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firegjirxfxhtm84mvr3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firegjirxfxhtm84mvr3v.png" alt=" " width="800" height="666"&gt;&lt;/a&gt;&lt;br&gt;
I’d forgotten that I was using ES modules, not CommonJS. The AI didn’t know that.&lt;br&gt;
I hadn’t told it.&lt;/p&gt;

&lt;p&gt;Although it was a subtle oversight, in that moment, I realized something important: the accuracy of my coding assistant had nothing to do with its intelligence.&lt;br&gt;
It had everything to do with how I talked to it.&lt;/p&gt;
&lt;h1&gt;
  
  
  When AI “Understands” without Understanding
&lt;/h1&gt;

&lt;p&gt;Every developer who uses AI coding assistants has seen this happen: a technically incorrect yet perfectly confident answer.&lt;br&gt;
That is &lt;strong&gt;&lt;a href="https://www.ibm.com/think/topics/ai-hallucinations" rel="noopener noreferrer"&gt;AI hallucination:&lt;/a&gt;&lt;/strong&gt; when an AI model produces incorrect, nonsensical, or fabricated information confidently, as if it were true. This is because language models predict the most &lt;a href="https://www.sciencedirect.com/science/article/pii/S1364661323002024?utm_source=chatgpt.com" rel="noopener noreferrer"&gt;statistically likely next word&lt;/a&gt; based on data &lt;a href="https://www.ibm.com/think/topics/ai-reasoning" rel="noopener noreferrer"&gt;it has been fed&lt;/a&gt;, rather than truly understanding the information.&lt;/p&gt;

&lt;p&gt;So when you tell your assistant, “create an Express app”, it gives you what’s most common — not what fits your setup.&lt;br&gt;
No wonder it missed the ES module part. It didn't misunderstand me. It just filled in the blanks I left empty.&lt;/p&gt;
&lt;h1&gt;
  
  
  Turning Point - Writing Better Prompts
&lt;/h1&gt;

&lt;p&gt;After a few more “why won’t this run?” moments, I decided to treat Continue like a junior developer, not a search bar.&lt;br&gt;
Instead of one vague sentence, I gave it real instructions:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create a simple Express.js app using ES modules. &lt;br&gt;
Use &lt;code&gt;import&lt;/code&gt; instead of &lt;code&gt;require&lt;/code&gt;. &lt;br&gt;
Add routes for &lt;code&gt;/&lt;/code&gt; and &lt;code&gt;/health&lt;/code&gt;, and include a middleware that logs the request method and URL. &lt;br&gt;
Use port 4000.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Continue generated this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import express from 'express';

const app = express();

// Middleware
app.use((req, res, next) =&amp;gt; {
  console.log(`${req.method} ${req.url}`);
  next();
});

// Routes
app.get('/', (req, res) =&amp;gt; res.send('Hello from Continue.dev!'));
app.get('/health', (req, res) =&amp;gt; res.json({ status: 'ok' }));

// Start server
app.listen(4000, () =&amp;gt; {
  console.log('Server running on http://localhost:4000');
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this time, it ran perfectly.&lt;br&gt;
No rewrites. No fixes. Just clean, working code.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why This Works
&lt;/h1&gt;

&lt;p&gt;Large language models like the ones &lt;a href="https://docs.continue.dev/customize/model-providers/overview" rel="noopener noreferrer"&gt;Continue supports&lt;/a&gt; don’t think, &lt;a href="https://medium.com/@paulosalem/its-just-predicting-the-next-token-c05b8cbe4eea" rel="noopener noreferrer"&gt;they predict&lt;/a&gt;. The clearer your prompt, the narrower the prediction range, and the higher your accuracy.&lt;br&gt;
When you give Continue &lt;a href="https://blog.continue.dev/how-to-use-an-llm-while-coding/" rel="noopener noreferrer"&gt;context, constraints, and intent&lt;/a&gt;, you help it eliminate wrong guesses.&lt;br&gt;
In practice, it looks like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Prompt Element&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;th&gt;Why it matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Context&lt;/td&gt;
&lt;td&gt;“Using ES modules in Node.js…”&lt;/td&gt;
&lt;td&gt;Avoids incompatible syntax&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Constraints&lt;/td&gt;
&lt;td&gt;“Only include routes / and /health”&lt;/td&gt;
&lt;td&gt;Keeps code focused&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Intent&lt;/td&gt;
&lt;td&gt;“I just want a lightweight scaffold for testing APIs.”&lt;/td&gt;
&lt;td&gt;Prevents over-engineering&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Coding assistants like Continue also gain an edge by reading your actual codebase in VS Code, so they don't need to guess your environment if you let them see the files. That means your next prompt becomes part of a conversation rather than an isolated question.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices For High Accuracy
&lt;/h2&gt;

&lt;p&gt;Generally, you should follow these guidelines to get the most accurate responses from your coding assistant:&lt;/p&gt;

&lt;h4&gt;
  
  
  Write Clear, Goal-oriented Prompts
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Start with the problem statement (“Write a function that validates requests”)&lt;/li&gt;
&lt;li&gt;Specify the language and framework (“in JavaScript using Joi”)&lt;/li&gt;
&lt;li&gt;Describe constraints and expected behaviour (“must validate all requests, authorized or unauthorized”)&lt;/li&gt;
&lt;/ul&gt;
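&lt;p&gt;Put together, a prompt built from those three parts might yield something like the sketch below. To keep it dependency-free, this version uses plain TypeScript rather than Joi; the names and shape are illustrative:&lt;/p&gt;

```typescript
// Hypothetical output of a goal-oriented prompt: validate an incoming request,
// rejecting anything without an authorization header or a body.
interface IncomingRequest {
  headers: { [name: string]: string };
  body?: unknown;
}

function validateRequest(req: IncomingRequest): { ok: boolean; error?: string } {
  if (!req.headers["authorization"]) {
    return { ok: false, error: "Missing authorization header" };
  }
  if (req.body === undefined) {
    return { ok: false, error: "Missing request body" };
  }
  return { ok: true };
}

console.log(validateRequest({ headers: {}, body: {} }).ok); // false
console.log(validateRequest({ headers: { authorization: "Bearer token" }, body: {} }).ok); // true
```

&lt;p&gt;Notice how each bullet above maps to a part of the result: the problem statement gave the function, the language choice gave the types, and the constraint gave the rejection branches.&lt;/p&gt;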

&lt;h4&gt;
  
  
  Provide Adequate Context
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Reference related functions if needed (“use the existing database schema to check authorized or unauthorized requests”)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Break Tasks Down
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Ask for smaller, testable parts instead of entire modules.
Example: ask for “the SQL query first” before “the full Express route handler.”&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Refine And Improve
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;If the output is off, clarify what went wrong in the following prompt (“It doesn’t handle empty input; fix that”).&lt;/li&gt;
&lt;li&gt;Each feedback loop improves accuracy significantly.&lt;/li&gt;
&lt;/ul&gt;
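&lt;p&gt;As a concrete (hypothetical) example of that feedback loop: suppose the assistant first produced a parser that misbehaved on empty input. After the follow-up prompt “It doesn’t handle empty input; fix that”, the revised version adds a guard:&lt;/p&gt;

```typescript
// Revised after feedback: the early return for empty input is exactly the
// line the follow-up prompt asked for.
function parseIds(input: string): number[] {
  if (input.trim() === "") {
    return []; // added after feedback: handle empty input
  }
  return input.split(",").map((part) => Number(part.trim()));
}

console.log(parseIds(""));        // []
console.log(parseIds("1, 2, 3")); // [ 1, 2, 3 ]
```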

&lt;p&gt;With a coding assistant like Continue, you can access models configured for various roles in the &lt;a href="https://marketplace.visualstudio.com/items?itemName=Continue.continue" rel="noopener noreferrer"&gt;Continue extension&lt;/a&gt;. This makes it easy to switch between &lt;a href="https://docs.continue.dev/customize/model-roles/00-intro" rel="noopener noreferrer"&gt;chatting, autocompleting code suggestions&lt;/a&gt;, applying edits to a file, and more. When you describe what you want (e.g., “add authentication middleware” or “optimize this query”) and switch to the desired role, it understands your intent and acts accordingly.&lt;/p&gt;

&lt;p&gt;Overall, AI coding assistants can only amplify your skill; they don’t replace it. So when working with them, always read and reason through the generated code before running it. The key to 90% accuracy isn’t automatic; it’s clarity, validation, and iteration.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Building a Custom MCP Server in Continue: A Step-by-Step Guide</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Fri, 10 Oct 2025 17:03:24 +0000</pubDate>
      <link>https://dev.to/anita_ihuman/building-a-custom-mcp-server-in-continue-a-step-by-step-guide-1p71</link>
      <guid>https://dev.to/anita_ihuman/building-a-custom-mcp-server-in-continue-a-step-by-step-guide-1p71</guid>
      <description>&lt;p&gt;Imagine you hire a world-renowned and brilliant historian who has memorized every textbook ever written up until 2023. They can tell you about any historical event in detail. However, when you ask them how many github repostory have been created in the last 5 years, or what the stock prices are or about the latest weather conditions in your city, they have absolutely no clue. Their knowledge is immense, but static.&lt;/p&gt;

&lt;p&gt;Similarly, &lt;a href="https://www.databricks.com/glossary/machine-learning-models" rel="noopener noreferrer"&gt;machine learning models&lt;/a&gt; are brilliant, yet isolated from real-time data. The problem is that they are often not trained on the specific information or context of the question you are asking. &lt;/p&gt;

&lt;p&gt;For AI agents to obtain up-to-date context on a subject, they rely on &lt;a href="https://blog.continue.dev/model-context-protocol/" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt; (MCP) servers to interoperate with external tools and resources (SDKs, APIs, and front-end systems). In this article, we’ll discuss what MCP is and walk through a step-by-step guide to building a custom &lt;a href="https://github.com/modelcontextprotocol/typescript-sdk" rel="noopener noreferrer"&gt;MCP server in TypeScript&lt;/a&gt; for Continue.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is a Model Context Protocol (MCP)?
&lt;/h1&gt;

&lt;p&gt;The Model Context Protocol (MCP) is an open protocol that enables large language models (LLMs) to securely connect with external data sources, tools, and environments. Like &lt;a href="https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/" rel="noopener noreferrer"&gt;Retrieval-Augmented Generation&lt;/a&gt; (RAG), tool calling, and &lt;a href="https://www.ibm.com/think/topics/ai-agents" rel="noopener noreferrer"&gt;AI agents&lt;/a&gt;, MCP provides more contextual data to LLMs.&lt;/p&gt;

&lt;p&gt;MCP was &lt;a href="https://www.anthropic.com/news/model-context-protocol" rel="noopener noreferrer"&gt;developed by Anthropic&lt;/a&gt; and has been adopted by major industry players such as Continue, OpenAI, and others. MCP provides a standardized way for AI models to access real-time information and perform actions beyond their static training data. An MCP defines key components like tools (functions or actions an AI can execute), resources (contextual data such as files or schemas), and prompts (predefined templates guiding AI behavior).  &lt;/p&gt;

&lt;h4&gt;
  
  
  Problem Statement:
&lt;/h4&gt;

&lt;p&gt;When you ask Continue for the current exchange rate for USD, this is what you get:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpjnkl1l34807b87acv5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpjnkl1l34807b87acv5.png" alt=" " width="682" height="1342"&gt;&lt;/a&gt;&lt;br&gt;
To address this challenge, let's build a custom exchange rate MCP server that helps Continue access live data from an exchange rate API and provides real-time rates for 168 world currencies inside VS Code. By the end, we will be able to get real-time USD exchange rates against different currencies. The steps in this article can be reused to create any MCP server of your choice.&lt;/p&gt;
&lt;h3&gt;
  
  
  System Requirements and Tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of TypeScript/JavaScript&lt;/li&gt;
&lt;li&gt;Node.js installed on your machine (version 16+)&lt;/li&gt;
&lt;li&gt;VS Code installed&lt;/li&gt;
&lt;li&gt;Access to an exchange rate API&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 1:  Initial Setup and Environment Configuration
&lt;/h2&gt;

&lt;p&gt;Let's start by setting up our development environment.&lt;/p&gt;

&lt;p&gt;a. &lt;strong&gt;Create a project directory.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open your terminal to create a new directory and initialise with NPM using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir exchangerate-mcp-server
cd exchangerate-mcp-server
npm init -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. &lt;strong&gt;Create an entry point file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Inside the new directory you created, make a src/ directory that will hold our code. Inside this folder, create a file that will serve as our entry point.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir src

touch src/main.ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c. &lt;strong&gt;Configure the package.json file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The MCP SDK requires ES modules, so open the package.json file and add &lt;code&gt;"type": "module"&lt;/code&gt; to enable ES module syntax.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
 "name": "exchangerate-mcp-server",
 "version": "1.0.0",
 "type": "module",
 "description": "",
 "main": "index.js",
 "scripts": {
   "test": "echo \"Error: no test specified\" &amp;amp;&amp;amp; exit 1"
 },
 "keywords": [],
 "author": "",
 "license": "ISC"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;d. &lt;strong&gt;Install dependencies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The MCP server requires two key libraries to be installed. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;a href="https://modelcontextprotocol.io/docs/sdk" rel="noopener noreferrer"&gt;MCP SDK&lt;/a&gt;, which will provide everything we need for server functionality &lt;/li&gt;
&lt;li&gt;And &lt;a href="https://zod.dev/" rel="noopener noreferrer"&gt;Zod&lt;/a&gt; for schema validation.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install @modelcontextprotocol/sdk

npm install zod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once both libraries are installed, they will appear under the dependencies section of your package.json file.&lt;/p&gt;
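&lt;p&gt;For reference, the dependencies section of your package.json should look something like this (the version numbers shown are illustrative and will vary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"dependencies": {
  "@modelcontextprotocol/sdk": "^1.0.0",
  "zod": "^3.23.0"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;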

&lt;h2&gt;
  
  
  Step 2: Creating Your MCP Server in TypeScript
&lt;/h2&gt;

&lt;p&gt;For the next step, we need to create the MCP server. &lt;br&gt;
Open your main.ts file in VSCode and update it with the following steps to define your server.&lt;/p&gt;

&lt;p&gt;a. &lt;strong&gt;Import all required modules&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You will need to import the necessary classes from the installed packages: &lt;code&gt;McpServer&lt;/code&gt; and &lt;code&gt;StdioServerTransport&lt;/code&gt; from the MCP SDK, and &lt;code&gt;z&lt;/code&gt; from Zod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';

import { z } from "zod";
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. &lt;strong&gt;Create server instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The server handles all communication over the MCP protocol between clients (such as VS Code) and your tools. So you need to create an instance of &lt;code&gt;McpServer&lt;/code&gt;, specifying its name and version. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F023cwdbk77g2briwwp34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F023cwdbk77g2briwwp34.png" alt=" " width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;
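&lt;p&gt;In code (as shown in the screenshot above, and again in the complete file later in this guide), the instance is created like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const server = new McpServer({
  name: "Currency Live Rates Server",
  version: "1.0.0",
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;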

&lt;p&gt;c. &lt;strong&gt;Define the tool&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to define the functions (or tools) that the AI agent will call. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a tool called “&lt;em&gt;currency-live-rates&lt;/em&gt;”. 
For now, we'll use static data to ensure the basic setup is working as expected.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server.tool(
  'currency-live-rates',
  'Tool to get the live exchange rate of a base currency (static example)',
  {
    base: z.string().describe("The base currency code (e.g., USD, EUR, GBP)."),
    symbols: z.string().optional().describe("Optional comma-separated list of target currencies (e.g., GBP,JPY).")
  },
  async ({ base, symbols }) =&amp;gt; {
    // Static mock data for demonstration
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            {
              success: true,
              terms: "https://exchangerate.host/terms",
              privacy: "https://exchangerate.host/privacy",
              timestamp: 1727457600,
              source: base,
              quotes: {
                [`${base}USD`]: 1.0,
                [`${base}EUR`]: 0.92,
                [`${base}GBP`]: 0.81,
                [`${base}JPY`]: 148.56
              }
            },
            null,
            2
          )
        }
      ]
    };
  }
);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  What we have done here:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We are registering the tool with the server using server.tool. We gave it a unique ID (&lt;code&gt;currency-live-rates&lt;/code&gt;), a description, a parameter schema, and a callback function. &lt;/li&gt;
&lt;li&gt;Using Zod, the parameter properties define and validate the expected input. The tool expects an object with a required string property &lt;code&gt;base&lt;/code&gt; (the base currency code) and an optional string property &lt;code&gt;symbols&lt;/code&gt; (a comma-separated list of target currencies).&lt;/li&gt;
&lt;li&gt;When the tool is called, the server validates the input against the schema. If the input is valid, it calls the callback function.&lt;/li&gt;
&lt;li&gt;The callback receives the validated parameters and then creates a static response with currency exchange rates for the specified base currency. The response includes example rates for USD, EUR, GBP, and JPY.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;d. &lt;strong&gt;Set up transport and connection&lt;/strong&gt;&lt;br&gt;
The next step is to define how the server communicates with AI clients. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In this case, use the StdioServerTransport for communication (reading from standard input and writing to standard output) in your local development.&lt;/li&gt;
&lt;li&gt;Then connect to the server.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const transport = new StdioServerTransport();
server.connect(transport);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Your main.ts file should look like this once that is completed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from "zod";

const server = new McpServer({
  name: "Currency Live Rates Server",
  version: "1.0.0",
});


server.tool(
  'currency-live-rates',
  'Tool to get the live exchange rate of a base currency (static example)',
  {
    base: z.string().describe("The base currency code (e.g., USD, EUR, GBP)."),
    symbols: z.string().optional().describe("Optional comma-separated list of target currencies (e.g., GBP,JPY).")
  },
  async ({ base, symbols }) =&amp;gt; {
    // Static mock data for demonstration
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            {
              success: true,
              terms: "https://exchangerate.host/terms",
              privacy: "https://exchangerate.host/privacy",
              timestamp: 1727457600,
              source: base,
              quotes: {
                [`${base}USD`]: 1.0,
                [`${base}EUR`]: 0.92,
                [`${base}GBP`]: 0.81,
                [`${base}JPY`]: 148.56
              }
            },
            null,
            2
          )
        }
      ]
    };
  }
);

const transport = new StdioServerTransport();
server.connect(transport);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Testing with MCP Inspector
&lt;/h2&gt;

&lt;p&gt;For the next step, we need to confirm that what we have so far works as expected before we can input real exchange rate data.&lt;br&gt;
To do this, we will use the &lt;a href="https://modelcontextprotocol.io/docs/tools/inspector" rel="noopener noreferrer"&gt;MCP Inspector&lt;/a&gt;, a web-based tool for testing and debugging MCP servers.&lt;/p&gt;

&lt;p&gt;a. &lt;strong&gt;Run the server with inspector&lt;/strong&gt;&lt;br&gt;
Run the following command in your terminal to open the MCP inspector:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx -y @modelcontextprotocol/inspector npx -y tsx src/main.ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. &lt;strong&gt;Connect and test with the inspector&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The output should look like the screenshot below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The output includes a localhost URL for the proxy server. Click the URL that includes the session token (this avoids manual entry) to open the Inspector in your browser.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw5xlcypa5efpo9hde70.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw5xlcypa5efpo9hde70.png" alt=" " width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click “Connect” in the Inspector UI.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngo3hhhhxm20m1qdbk9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngo3hhhhxm20m1qdbk9g.png" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to ‘Tools’ and click ‘List Tools’. You should see &lt;code&gt;currency-live-rates&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Click on ‘&lt;code&gt;currency-live-rates&lt;/code&gt;’ to open the tool caller.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvu1k8d2ipucatbrtkjm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvu1k8d2ipucatbrtkjm0.png" alt=" " width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Now, test it out by entering a base currency (e.g., USD) and optionally a list of target currencies (e.g., EUR, GBP, JPY) in the input fields, then click ‘Run Tool’.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You will get an output that shows the (for now, static) exchange rates in JSON format, like this:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylqifvvkc40yiijma480.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylqifvvkc40yiijma480.png" alt=" " width="800" height="468"&gt;&lt;/a&gt;&lt;br&gt;
Once you have confirmed that it works as expected, the next step is to use actual data for our server.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4: Fetching Real-Time Exchange Rate Data
&lt;/h2&gt;

&lt;p&gt;Now that we have confirmed our code sample works as expected, it's time to fetch actual data from &lt;a href="https://exchangerate.host/" rel="noopener noreferrer"&gt;exchangerate.host&lt;/a&gt;, a foreign exchange &amp;amp; crypto rates data solution.&lt;/p&gt;

&lt;p&gt;a. &lt;strong&gt;How the Exchangeratehost API works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Exchangeratehost API provides live updates on the exchange rate between different currencies.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can specify the base currency (for example, USD) you want to obtain other rates against.&lt;/li&gt;
&lt;li&gt;You can also tell it which other currencies you want to compare it with (for example, EUR, GBP, JPY).&lt;/li&gt;
&lt;li&gt;The API then sends back a response that includes the latest rates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will be using the real-time exchange rates API endpoint for our MCP server. Once you have created an account, you will be assigned a personal API access key, which you attach to your preferred API endpoint URL.&lt;br&gt;
Check the Exchangeratehost &lt;a href="https://exchangerate.host/documentation" rel="noopener noreferrer"&gt;API documentation&lt;/a&gt; for details.&lt;/p&gt;
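&lt;p&gt;Putting that together, a request to the live endpoint and a trimmed example response look like this (the &lt;code&gt;source&lt;/code&gt; and &lt;code&gt;currencies&lt;/code&gt; parameter names follow the documentation linked above; the rates shown are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET https://api.exchangerate.host/live?access_key=YOUR_KEY&amp;amp;source=USD&amp;amp;currencies=EUR,GBP,JPY

{
  "success": true,
  "source": "USD",
  "quotes": {
    "USDEUR": 0.92,
    "USDGBP": 0.81,
    "USDJPY": 148.56
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;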

&lt;p&gt;b. &lt;strong&gt;Update main.ts file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Replace the tool example function with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";


const server = new McpServer({
 name: "Currency Live Rates Server",
 version: "1.0.0",
});




server.tool(
 "currency-live-rates",
 "Get real-time exchange rates for a given base currency",
 {
   base: z
     .string()
     .describe("The base currency code (e.g., USD, EUR, GBP)."),
   symbols: z
     .string()
     .optional()
     .describe("Comma-separated list of target currencies (e.g., USD,GBP,JPY). Optional."),
 },
 async ({ base, symbols }) =&amp;gt; {
   try {
      // Build the URL dynamically (replace APIKEY with your access key)
      let url = `https://api.exchangerate.host/live?access_key=APIKEY&amp;amp;source=${base}`;
      if (symbols) {
        url += `&amp;amp;currencies=${symbols}`;
      }


     // Fetch live exchange rates
     const response = await fetch(url);
     const data = await response.json();


     if (!data.success) {
       return {
         content: [
           {
             type: "text",
             text: `Error: Could not fetch live rates for ${base}. Please check your input.`,
           },
         ],
       };
     }


     // Format output: show base + filtered rates
     const rates = Object.entries(data.quotes)
       .map(([pair, rate]) =&amp;gt; `${pair}: ${rate}`)
       .join("\n");


     return {
       content: [
         {
           type: "text",
           text: `Real-time rates (Base: ${base})\n\n${rates}`,
         },
       ],
     };
   } catch (error: any) {
     return {
       content: [
         {
           type: "text",
           text: `Error fetching live exchange rates: ${error.message}`,
         },
       ],
     };
   }
 }
);


const transport = new StdioServerTransport();
server.connect(transport);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
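&lt;p&gt;To see what the formatting step in the handler produces, here is a small sketch run against a made-up response. The quotes object is keyed by concatenated currency pairs (e.g., USDEUR), and the rate values here are invented for illustration:&lt;/p&gt;

```typescript
// Hypothetical /live response; the rates are made-up sample values.
const sampleData = {
  success: true,
  source: "USD",
  quotes: { USDEUR: 0.92, USDGBP: 0.79 } as Record<string, number>,
};

// The same formatting step used in the tool handler:
const formattedRates = Object.entries(sampleData.quotes)
  .map(([pair, rate]) => `${pair}: ${rate}`)
  .join("\n");

console.log(formattedRates);
// USDEUR: 0.92
// USDGBP: 0.79
```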



&lt;p&gt;c. &lt;strong&gt;Test real data on the MCP inspector&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Restart your server by stopping the running process in the terminal&lt;/li&gt;
&lt;li&gt;In the MCP inspector, click on connect again (similar steps to before)&lt;/li&gt;
&lt;li&gt;Run the currency-live-rates tool by entering a base currency (e.g., "USD") and optionally some target currencies (e.g., "EUR,GBP,JPY").
Your tool result should look like this: &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehv5egez5562nwab30c2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehv5egez5562nwab30c2.png" alt=" " width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Setting Up MCP in Continue
&lt;/h2&gt;

&lt;p&gt;Once your MCP server is running, you can connect it to a tool-enabled AI agent, allowing it to utilize your currency exchange rate capability. We are using the Continue VS Code extension for this example.&lt;/p&gt;

&lt;p&gt;a. &lt;strong&gt;Installing and Configuring Continue in VS Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you have not already, install the Continue extension. Continue is available for VS Code and JetBrains, so whichever IDE you use, you should be able to find it as an extension.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0g9kg7el02vtpee5moc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0g9kg7el02vtpee5moc.png" alt=" " width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Continue extension defaults to the left sidebar upon installation. Drag and drop the icon panel to the right sidebar.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbyvwdrbvam6946l5zm7.gif" alt=" " width="720" height="414"&gt;
&lt;/li&gt;
&lt;li&gt;Next, you need to sign up or sign in to the &lt;a href="https://docs.continue.dev/hub/introduction" rel="noopener noreferrer"&gt;Continue Hub&lt;/a&gt;, where you can discover, configure, and manage the models, rules, and MCP tools you need for your AI coding agent.&lt;/li&gt;
&lt;li&gt;Once signed in, navigate to the IDE dropdown and select the IDE you are working with.&lt;/li&gt;
&lt;li&gt;You will be met with the option to &lt;a href="https://docs.continue.dev/hub/governance/creating-an-org" rel="noopener noreferrer"&gt;create an organization&lt;/a&gt; or to use the available community organization. I will be using the Continue Community for this example, as it comes with predefined agents you can install.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nmnt4urn0g1zw8uoa9k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nmnt4urn0g1zw8uoa9k.png" alt=" " width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You should be redirected back to VS Code. The Continue panel should now have a dropdown that lists the agents you installed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdloqja6114eb00w281q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdloqja6114eb00w281q.png" alt=" " width="754" height="914"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;b. &lt;strong&gt;Create an MCP server&lt;/strong&gt;&lt;br&gt;
Now that we have set up Continue, let's create a new MCP server configuration that will let the AI agent access the real-time data we need from the exchange rate API.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the settings in the Continue panel, then to the tools icon. You will see a plus button where you can add new servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbne5pxomaonqj8gqqrx7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbne5pxomaonqj8gqqrx7.png" alt=" " width="800" height="749"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This takes you back to the Continue Hub, where there is a variety of available MCP servers to choose from. However, since we already created our own MCP server locally, we just need to add a configuration file that tells Continue how to access our local server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;c. &lt;strong&gt;Create a new folder in the working directory called Continue&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new file within this folder called currency-mcp.yaml.&lt;/li&gt;
&lt;li&gt;Add the following configuration to the YAML file you created.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name: MCP name
version: 0.0.1
schema: v1
mcpServers:
  - name: MCP Name
    command: npx
    args:
      - -y
      - "/Users/path/currency-mcp-server/main.ts" # change to the path of your MCP server TypeScript file


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;This configuration tells Continue to start an MCP server by running the TypeScript file at the given path using npx, with the -y flag to skip prompts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;d. &lt;strong&gt;Run the server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the following command in your terminal to start the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx -y main.ts

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To confirm that everything is working as expected, check under tools in Continue again; your MCP server should be added to the list like this.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibbljs3hg6mcfu3bx6ow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibbljs3hg6mcfu3bx6ow.png" alt=" " width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Result?
&lt;/h2&gt;

&lt;p&gt;Now that we have completed our MCP server, let's take a look at the final result. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to your Continue panel and make sure you're in "Agent Mode." This will tell the AI that it's okay to start reasoning and using external tools, not just its internal knowledge.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0dh4zjshkgp5594offb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0dh4zjshkgp5594offb.png" alt=" " width="780" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you give Continue the prompt “What are the current exchange rates for USD” to test the MCP server, it will examine your question and realize it does not have live data on currency exchange rates. However, it now has a tool called currency-live-rates that can retrieve them.&lt;/li&gt;
&lt;li&gt;Since our tool interacts with real-time data, Continue will request permission to use your currency-live-rates tool to retrieve the data. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjr0kzgtli32fs3weksua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjr0kzgtli32fs3weksua.png" alt=" " width="800" height="651"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instead of raw JSON, Continue will now return a refined response that summarizes the current exchange rates for USD.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xpjuudmthib4ivz5h5f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xpjuudmthib4ivz5h5f.png" alt=" " width="800" height="919"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;Building a custom MCP server is a great way to unlock the full potential of AI-assisted development in your own environment. If you are new to MCPs and want to explore, your best bet is to start experimenting with internal MCP servers tailored to your needs. &lt;br&gt;
Continue is open source, so you can learn more about this tool, access templates, and contribute to the ecosystem through the &lt;a href="https://github.com/continuedev/continue" rel="noopener noreferrer"&gt;Continue GitHub repository&lt;/a&gt;. You can also connect with other builders and share ideas in the &lt;a href="https://discord.gg/vapESyrFmJ" rel="noopener noreferrer"&gt;Continue Discord community&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>How are Different Developers Using AI Coding Assistants?</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Tue, 30 Sep 2025 07:53:42 +0000</pubDate>
      <link>https://dev.to/anita_ihuman/how-are-different-developers-using-ai-coding-assistants-4mfa</link>
      <guid>https://dev.to/anita_ihuman/how-are-different-developers-using-ai-coding-assistants-4mfa</guid>
      <description>&lt;p&gt;AI coding assistants like Continue, Codex, GitHub Copilot, Windsurf, and Devin have quickly shifted from experimental to everyday developer tools. While some still see them as nothing more than magic text/code generators or even fear they’ll replace human developers, the reality looks different. Developers aren’t just adopting and leveraging AI for quick snippets or vibe coding sessions. They’re using them for building and designing production-viable systems, coding, testing, and even documentation.&lt;/p&gt;

&lt;p&gt;Recent industry surveys show that between 76% and 89% of developers already use these tools or plan to soon, and &lt;a href="https://www.opslevel.com/resources/ai-coding-assistants-are-everywhere-are-devs-really-using-them" rel="noopener noreferrer"&gt;94% of companies surveyed&lt;/a&gt; have some teams actively using AI coding assistants. Despite debates about accuracy, many find them invaluable for boosting productivity, reducing repetitive work, and shortening time-to-market.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore how these tools are shaping the day-to-day lives of developers at every level, from startup founders to senior engineers and junior developers contributing to open source.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are AI Code Assistants?
&lt;/h2&gt;

&lt;p&gt;AI coding assistants, otherwise known as coding companions, are tools powered &lt;a href="https://blog.continue.dev/how-to-use-an-llm-while-coding/" rel="noopener noreferrer"&gt;by large language models&lt;/a&gt; (LLMs) trained on massive amounts of code (for example, Java and Python) and natural language. The difference from traditional autocomplete is the intelligence. Instead of just &lt;a href="https://docs.continue.dev/features/autocomplete/quick-start" rel="noopener noreferrer"&gt;filling in the next word or bracket&lt;/a&gt;, AI assistants can &lt;a href="https://docs.continue.dev/guides/codebase-documentation-awareness#how-to-make-your-agent-aware-of-codebases-and-documentation" rel="noopener noreferrer"&gt;understand context&lt;/a&gt;, spot patterns, and help solve bigger problems. &lt;/p&gt;

&lt;p&gt;Software developers utilize AI coding to integrate AI into their workflows, achieving higher productivity and quality while also learning new concepts and insights. Most of these AI assistants are integrated into familiar environments where developers already work, such as Visual Studio Code, JetBrains IntelliJ IDEA, or even the command line.&lt;br&gt;
They offer a growing set of capabilities that span the entire development workflow, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generating code snippets to match a developer’s request.&lt;/li&gt;
&lt;li&gt;Building test cases that cover multiple scenarios.&lt;/li&gt;
&lt;li&gt;Translating code across programming languages.&lt;/li&gt;
&lt;li&gt;Upgrading legacy code to newer versions.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.continue.dev/features/chat/quick-start" rel="noopener noreferrer"&gt;Explaining code&lt;/a&gt; in plain language to support learning.&lt;/li&gt;
&lt;li&gt;Creating documentation to speed up DevOps processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Who Is Actually Using These Tools?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Start-Up Founder
&lt;/h3&gt;

&lt;p&gt;A lot of people have had brilliant project ideas, only for them to remain stuck in their notebooks. This is often the case for most non-technical founders. Without a clear path to execution, many great concepts simply fade away. &lt;/p&gt;

&lt;p&gt;However, in the past years, with the prevalent use of AI models and coding assistants, it has become easier for non-technical founders to move from ideation to releasing a product MVP. &lt;br&gt;
A study published on arXiv found that tools like ChatGPT, Continue, and GitHub Copilot can generate code that runs without errors about &lt;a href="https://arxiv.org/abs/2304.10778" rel="noopener noreferrer"&gt;90% of the time&lt;/a&gt;. That code also passes between 30% and 65% of unit tests and is secure around 60% of the time. With the right prompts, these tools can be leveraged by anyone, even those with no technical background, to create code samples right in their IDE or CLI.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For instance, Mary is a non-technical founder with a brilliant idea for a new productivity app. She knows her market inside out but struggles with initially developing the source code. With a lack of funds available to her, hiring a developer is out of budget, and online tutorials are overwhelming and will take longer before the idea goes live.&lt;br&gt;
Instead, Mary can turn to AI website builders like &lt;a href="https://lovable.dev/" rel="noopener noreferrer"&gt;Lovable&lt;/a&gt;. By simply describing the app’s features, core functions, and desired outcomes, she can generate a ready-made user interface in HTML, CSS, and JavaScript. She can then take that code into her local IDE. With an AI coding assistant like Continue, which integrates directly into her development environment, Mary can refine the application further, add new features, and even deploy it seamlessly using platforms like Netlify or GitHub.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkw2kfc9xq06a72egs3v8.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkw2kfc9xq06a72egs3v8.gif" alt=" " width="720" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just like that, Mary’s idea can become a real, existing project she can share with the world. Using a coding assistant like &lt;a href="https://www.continue.dev/" rel="noopener noreferrer"&gt;Continue &lt;/a&gt;allows you to build an MVP in record time. Since Continue runs on &lt;a href="https://docs.continue.dev/guides/how-to-self-host-a-model" rel="noopener noreferrer"&gt;local models&lt;/a&gt; (on her own computer, instead of sending data to the cloud), it also keeps Mary’s original ideas safe.&lt;/p&gt;

&lt;h3&gt;
  
  
  Junior Developer
&lt;/h3&gt;

&lt;p&gt;Real-world, legacy open-source code written in R, Python, JavaScript, or Go applications isn't like the clean, simple, controlled projects from boot camps. Maintainers aren't always available to answer every question, and getting a timely response on Stack Overflow can be time-consuming. AI coding assistants like Continue can bridge the gap by helping juniors quickly grasp a new language’s syntax or a framework’s API without drowning in endless documentation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Take David, for example. He ran into a puzzling TypeError in his addToCart function while contributing to an open-source e-commerce project. Instead of just copy-pasting the error into Google, &lt;a href="https://docs.continue.dev/features/chat/quick-start" rel="noopener noreferrer"&gt;he asked the AI&lt;/a&gt; coding assistant directly in VSCode, “Why am I getting this TypeError in my addToCart function?” The AI assistant will then break down the concept of data types in JavaScript, explain why his input was invalid, and suggest a quick fix.&lt;br&gt;
This will turn a frustrating roadblock into an opportunity for David to actually understand the problem, making it a good learning experience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi24t19fwkudgulsp7a6t.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi24t19fwkudgulsp7a6t.gif" alt=" " width="600" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, there’s a caveat. These tools don’t replace the need for a solid foundation in programming. Research by &lt;a href="https://www.us.elsevierhealth.com/sciencedirectai.html?utm_source=ScienceDirect&amp;amp;utm_medium=Pendo&amp;amp;utm_campaign=PendoK14" rel="noopener noreferrer"&gt;Computers in Human Behavior &lt;/a&gt;indicates that a student's frequent use of AI chatbots for programming tasks is associated with a negative correlation in their academic performance. These tools exist to help implement ideas and learn more quickly, but the core knowledge still needs to come from you.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Senior Engineer
&lt;/h3&gt;

&lt;p&gt;Senior engineers often face a different challenge. For them, it’s not just about learning syntax or getting started. It’s about managing complexity at scale and fast. Many engineering leaders are skeptical of AI coding tools, as they worry that these tools could weaken their team’s skills or compromise security. But here’s the thing: AI assistants are incredibly fast, and while speed is their forte, the senior engineer is well-versed in the fundamental skills. And in practice, &lt;a href="https://docs.continue.dev/features/edit/quick-start" rel="noopener noreferrer"&gt;editing and refactoring code&lt;/a&gt; is often quicker than writing it from scratch, which is where Continue comes in handy.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Take Sarah, a senior staff engineer at a legacy cloud platform. She can use the Continue &lt;a href="https://docs.continue.dev/guides/cli" rel="noopener noreferrer"&gt;CLI version&lt;/a&gt; to &lt;a href="https://docs.continue.dev/guides/snyk-mcp-continue-cookbook" rel="noopener noreferrer"&gt;scan&lt;/a&gt; and refactor the legacy module that no one wants to touch. Based on her prompts, Continue will then identify the inefficiencies and suggest modern patterns in the terminal. Sarah can review these changes and determine if they align with her expected outcome. By deploying the AI with her team's specific models on their private cloud, she can ensure the team's sensitive intellectual property remains secure. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9uderfn99h02g401i6f.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9uderfn99h02g401i6f.gif" alt=" " width="600" height="421"&gt;&lt;/a&gt;&lt;br&gt;
For Sarah, the AI didn’t replace her expertise, but rather amplified it, helping her tackle bigger problems and boosting her team’s overall output.&lt;/p&gt;

&lt;h2&gt;
  
  
  In Summary,
&lt;/h2&gt;

&lt;p&gt;AI coding assistants are becoming everyday companions for developers at every level. Their potential impact is evidenced by Gartner's projection that “&lt;a href="https://www.gartner.com/en/documents/5298563?utm_source=the+new+stack&amp;amp;utm_medium=referral&amp;amp;utm_content=inline-mention&amp;amp;utm_campaign=tns+platform" rel="noopener noreferrer"&gt;90% of enterprise software&lt;/a&gt; engineers will utilize AI code assistants by 2028, a significant increase from less than 14% in early 2024.” Developers can now use AI coding assistants to automate repetitive tasks, such as creating standard blocks of code and generating routine documentation.&lt;/p&gt;

&lt;p&gt;Ultimately, the goal of these tools is not to replace human creativity, judgment, or skill, but to free up development time, accelerate delivery, and open new paths for learning and creativity.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to Get More Done with AI in My Local Dev Workflow: ChatGPT vs. Continue</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Mon, 22 Sep 2025 12:46:00 +0000</pubDate>
      <link>https://dev.to/anita_ihuman/how-to-get-more-done-with-ai-in-my-local-dev-workflow-chatgpt-vs-continue-149f</link>
      <guid>https://dev.to/anita_ihuman/how-to-get-more-done-with-ai-in-my-local-dev-workflow-chatgpt-vs-continue-149f</guid>
      <description>&lt;p&gt;When AI coding agents first started buzzing around, I was skeptical. I thought, "&lt;em&gt;Cool, another bot to generate ‘Hello World’ in 50 languages.&lt;/em&gt;" But once I started using one, my perspective completely changed. &lt;/p&gt;

&lt;p&gt;I'd compare the experience to moving from endlessly Googling "&lt;em&gt;why am I getting this error&lt;/em&gt;" to having a pair-programming partner in your editor who can point at the bug and say, "&lt;em&gt;See that bug on line 42? This is how to fix it.&lt;/em&gt;"&lt;/p&gt;

&lt;p&gt;I began exploring more AI coding agents and rethinking my development workflow when I came across &lt;a href="https://www.continue.dev/" rel="noopener noreferrer"&gt;Continue.dev&lt;/a&gt;, an open-source AI agent. I wondered how it stacked up against the popular ChatGPT. In this article, I will discuss what distinguishes these two tools. &lt;/p&gt;




&lt;h3&gt;
  
  
  TL;DR: Comparison of ChatGPT vs. Continue
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature/Aspect&lt;/th&gt;
&lt;th&gt;ChatGPT&lt;/th&gt;
&lt;th&gt;Continue&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Where it lives&lt;/td&gt;
&lt;td&gt;Browser (web-based)&lt;/td&gt;
&lt;td&gt;Inside your IDE (VS Code, JetBrains)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context awareness&lt;/td&gt;
&lt;td&gt;None (only what you paste)&lt;/td&gt;
&lt;td&gt;Full project view (files, dependencies, structure)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Customization&lt;/td&gt;
&lt;td&gt;Limited (prompt engineering only)&lt;/td&gt;
&lt;td&gt;Highly customizable (choose LLM, build actions)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning curve&lt;/td&gt;
&lt;td&gt;Very low (just ask questions)&lt;/td&gt;
&lt;td&gt;Medium (requires setup + tinkering)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h3&gt;
  
  
  ChatGPT: The Generalist
&lt;/h3&gt;

&lt;p&gt;You've probably used it before. &lt;a href="https://chatgpt.com/" rel="noopener noreferrer"&gt;ChatGPT&lt;/a&gt; is a conversational AI chatbot developed by OpenAI. It uses a &lt;a href="https://www.ibm.com/think/topics/large-language-models" rel="noopener noreferrer"&gt;large language model &lt;/a&gt;(LLM) to understand and generate text in response to user prompts and can be used for a wide range of tasks. ChatGPT is excellent for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Brainstorming and Learning:&lt;/strong&gt; You can ask it to explain a complex algorithm, help you understand a new programming concept, or even draft documentation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quick Answers:&lt;/strong&gt; You can use it to write a quick script in a language you're not familiar with. Its massive training data makes it easy to tackle a wide variety of tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple Debugging:&lt;/strong&gt; You can paste in a snippet of code and ask, "Why isn't this working?" and it will usually give you an extensive explanation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Downsides of Using ChatGPT for Local Dev
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lost Context:&lt;/strong&gt; When you ask a question, it gives an answer. However, it doesn’t know your actual codebase, your project structure, or how different files relate to each other. This results in vague responses to specific prompts that require depth on the project structure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interrupts Your Flow:&lt;/strong&gt; Developers often struggle to maintain momentum when they have to leave their editor, switch windows, and copy and paste code to find the right prompts. This constant back-and-forth can be a major source of frustration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited Integration with Real Tools:&lt;/strong&gt; ChatGPT is a chatbot, not a dev tool. It can’t open files, run your test suite, or suggest direct, in-line changes in your editor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security &amp;amp; Privacy Concerns:&lt;/strong&gt; For many developers, especially those in enterprise teams, pasting proprietary or sensitive code into a public website is a non-starter. Some companies even ban the use of such tools entirely due to security and privacy policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ChatGPT is Best Suited For
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Beginners or students learning new concepts.&lt;/li&gt;
&lt;li&gt;Developers who want a general-purpose AI helper.&lt;/li&gt;
&lt;li&gt;Quick, out-of-context coding or non-coding tasks (docs, brainstorming, etc.).&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Continue: An Integrated Coding Assistant
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.continue.dev/" rel="noopener noreferrer"&gt;Continue&lt;/a&gt; is an AI coding agent that lives right in your IDE (like &lt;a href="https://marketplace.visualstudio.com/items?itemName=Continue.continue" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt; or &lt;a href="https://plugins.jetbrains.com/plugin/22707-continue" rel="noopener noreferrer"&gt;JetBrains&lt;/a&gt;) and &lt;a href="https://docs.continue.dev/guides/cli" rel="noopener noreferrer"&gt;CLI&lt;/a&gt;. It is built to enhance development workflows, be deeply customizable, and continuously learn from development data. With Continue, you can connect to and use any LLM (e.g. OpenAI, Claude, or LLama for Chat) to provide code context and enable auto-completion. Continue shines at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Understanding Your Project:&lt;/strong&gt; Unlike ChatGPT, Continue sees the snippet you're working on and reads the entire project, including all your files, dependencies, and code structure. This means when you ask it for help, it gives you context-aware suggestions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helping you maintain your flow:&lt;/strong&gt; You don't have to leave your editor. You can use its chat interface to ask a question, and it will give you in-line suggestions, even in the command line. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Giving you Control:&lt;/strong&gt; Continue is open-source, which means you get to pick the &lt;a href="https://docs.continue.dev/customization/models#recommended-models" rel="noopener noreferrer"&gt;AI model&lt;/a&gt; (GPT-4, Claude, or even local models) and build custom agents that match your team’s coding style.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Where It Falls Short
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Setup Required:&lt;/strong&gt; Getting it set up requires some configuration before it feels truly seamless in your workflow. However, it has a vibrant community support readily available via &lt;a href="https://github.com/continuedev/continue/discussions" rel="noopener noreferrer"&gt;GitHub discussions&lt;/a&gt; and &lt;a href="https://discord.com/invite/NWtdYexhMs" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning Curve:&lt;/strong&gt; To unlock its full power, you'll need to spend a little time learning and configuring it to your specific needs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Continue is Best Suited For
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Developers who want coding assistance from the comfort of their IDE or use the CLI for quick command-line support.&lt;/li&gt;
&lt;li&gt;Teams that want to avoid being trapped in a single proprietary platform. &lt;/li&gt;
&lt;li&gt;Teams that want to standardize their processes, share custom rules, best practices, and project-specific knowledge for consistency and efficiency.&lt;/li&gt;
&lt;li&gt;Developers who want to take control and customize their own coding environment and workflow.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Key Features of Continue
&lt;/h2&gt;

&lt;p&gt;Continue comes with key features that cover almost everything you’d expect from a coding assistant: Chat, autocomplete, edit, apply, embed, and rerank.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Chat&lt;/strong&gt;: The most basic but most versatile feature. I used it during a messy debugging session; rather than giving a generic answer, it pointed at the actual errors within the context of my project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Autocomplete&lt;/strong&gt;: It uses project context to suggest the next snippet of code you’re about to write. You just hit Tab, and the boilerplate code writes itself. This feels especially helpful when writing repetitive tests or setting up configs across multiple files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Edit&lt;/strong&gt;: You can select a chunk of code, type “refactor this,” and it will refine it into something clean and readable in seconds. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Apply&lt;/strong&gt;: You can trigger an action to generate unit tests for a given file or auto-document a function. Instead of chatting back and forth, actions are executed directly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Embed&lt;/strong&gt;: It converts your code into a unique numerical representation, allowing the AI to perform a semantic search.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rerank&lt;/strong&gt;: It improves search relevance by reordering initial results based on their true semantic meaning, ensuring the most useful information is always at the top.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
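&lt;p&gt;To make the embed and rerank stages concrete, here is a toy sketch of the underlying idea. It is not Continue’s actual implementation: a hash-based bag-of-words stands in for a real embedding model, and plain token overlap stands in for a trained reranker. It only illustrates semantic retrieval followed by reordering.&lt;/p&gt;

```python
import math
import re

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def embed(text, dims=64):
    # Toy bag-of-words "embedding": hash each token into a bucket,
    # then L2-normalize. Real tools use a trained embedding model.
    vec = [0.0] * dims
    for token in tokenize(text):
        vec[hash(token) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are unit-length, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def search(query, snippets, top_k=2):
    # "Embed" stage: retrieve the snippets closest to the query vector.
    q = embed(query)
    ranked = sorted(snippets, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:top_k]

def rerank(query, candidates):
    # "Rerank" stage: reorder retrieved candidates with a second, more
    # precise signal (here, exact token overlap with the query).
    q_tokens = set(tokenize(query))
    return sorted(candidates, key=lambda s: len(q_tokens & set(tokenize(s))), reverse=True)

snippets = [
    "def parse_config(path): ...",
    "def connect_database(url): ...",
    "def render_template(name, ctx): ...",
]
hits = rerank("parse config", search("parse config", snippets))
print(hits[0])  # prints: def parse_config(path): ...
```

&lt;p&gt;In a real pipeline, the embedding vectors come from a dedicated model and the reranker is itself a model scoring query–document pairs, but the retrieve-then-reorder shape is the same.&lt;/p&gt;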




&lt;h2&gt;
  
  
  So, Which One is Right for You?
&lt;/h2&gt;

&lt;p&gt;At the end of the day, these tools aren’t competitors; they complement each other. The choice between a generalist like ChatGPT and a specialist like Continue comes down to your priorities.&lt;/p&gt;

&lt;p&gt;If you're looking for a versatile tool for learning and getting high-level answers, ChatGPT is a fantastic resource. On the other hand, if your goal is to build, refactor, or debug code in a secure workflow all from within your editor or command line, then Continue is your best bet.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>opensource</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Beginners Guide Contributing To Apache APISIX</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Mon, 25 Apr 2022 08:18:46 +0000</pubDate>
      <link>https://dev.to/apisix/beginners-guide-contributing-to-apache-apisix-boa</link>
      <guid>https://dev.to/apisix/beginners-guide-contributing-to-apache-apisix-boa</guid>
      <description>&lt;p&gt;Have you been looking for ways to get involved and actively contribute to Apache Software Foundation’s newest top-level project(Apache APISIX)? Then, this article is for you.&lt;/p&gt;

&lt;p&gt;Several &lt;a href="https://dev.to/anita_ihuman/demystifying-apache-apisix-the-ideal-microservices-gateway-5fo0"&gt;articles&lt;/a&gt; and &lt;a href="https://www.youtube.com/watch?v=-9-HZKK2ccI" rel="noopener noreferrer"&gt;videos&lt;/a&gt; already explain the open-source tool Apache APISIX and what it’s all about. This article gives you a walkthrough of contributing and collaborating in the APISIX community.&lt;/p&gt;

&lt;h3&gt;
  
  
  Meet with the community
&lt;/h3&gt;

&lt;p&gt;There are several ways you can get in touch with the Apache APISIX community. To kickstart your contribution journey, jump into our community by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Subscribe to the mailing list&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;a href="https://apisix.apache.org/docs/general/join#subscribe-to-the-mailing-list" rel="noopener noreferrer"&gt;mailing list&lt;/a&gt; is a great place to start, as most discussions regarding the project happen there. Subscribing to the mailing list is as easy as pie; follow the &lt;a href="https://apisix.apache.org/docs/general/join#subscribe-to-the-mailing-list" rel="noopener noreferrer"&gt;steps here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Join the Slack&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Slack channel is another way to meet and collaborate with the APISIX community. Our Slack community is a safe place to learn, educate, and engage with the community. You can ask questions here, as the community managers and maintainers are always on standby. Follow &lt;a href="https://join.slack.com/t/the-asf/shared_invite/zt-vlfbf7ch-HkbNHiU_uDlcH_RvaHv9gQ" rel="noopener noreferrer"&gt;this link&lt;/a&gt; to join the Slack.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Attend weekly meetings&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also have a weekly call every Wednesday. Our community weekly focuses on improving engagement and collaboration among community members. You will receive an invite to this meeting upon subscribing to the &lt;a href="https://apisix.apache.org/docs/general/join#subscribe-to-the-mailing-list" rel="noopener noreferrer"&gt;mailing list.&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How can you get involved?
&lt;/h3&gt;

&lt;p&gt;You can get involved with the Apache APISIX project and community in the following ways.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Through Internship programs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;APISIX has been actively participating in internship programs since 2019, like Outreachy, &lt;a href="https://cwiki.apache.org/confluence/display/COMDEV/GSoC+2022+Ideas+list#GSoC2022Ideaslist-APISIX" rel="noopener noreferrer"&gt;GSoC&lt;/a&gt;, and the APISIX Summer of Programming. These programs serve as a medium to welcome more open-source enthusiasts into the community and projects as mentees. At the end of each internship period, mentees are fully equipped with the knowledge of Apache APISIX and how to contribute.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  As a contributor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Like most open-source enthusiasts, you can join the Apache APISIX community as a first-time contributor. There are different ways you can make valuable contributions; we will look at them shortly.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  As a user&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a user, you can also get involved in our community by testing and integrating our project. APISIX serves as an excellent API gateway, delivering dynamic, real-time, high-performance traffic management features. Apache APISIX consists of a data plane to dynamically control request traffic, a control plane to store and synchronise gateway data configuration, and an AI plane to orchestrate plugins and perform real-time analysis and processing of request traffic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  As an evangelist&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apache APISIX, although a top-level project, is still in its early stages. You can get involved in our community by spreading the word about this cloud-native gateway project through articles, videos, and conferences.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98c20jrh4e4c4stlwxea.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98c20jrh4e4c4stlwxea.jpeg" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How to start your contributions
&lt;/h3&gt;

&lt;p&gt;Some open-source enthusiasts are interested in contributing to the Apache APISIX project but are unsure where to start. These are the different ways you can become an active contributor to the cloud-native project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Help with translations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the hope of an inclusive project, the APISIX developers are working on translating the code and documents into a universal language. If this is an area you find interesting, get started by picking up an issue regarding translating parts of the codebase to English. If you come across documentation or code that requires translation, feel free to create an issue accordingly or discuss it on the mailing list.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Help with documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our team of technical writers are currently &lt;a href="https://github.com/apache/apisix/issues/6461" rel="noopener noreferrer"&gt;restructuring the Apache APISIX documentation&lt;/a&gt;. That is to say, there are several ways you can put your writing skills to good use in APISIX. While reading through our documentation, if you encounter any issues that need to be addressed, you can create an issue accordingly or discuss it on the mailing list. You can also contribute by creating written &lt;a href="https://apisix.apache.org/blog/" rel="noopener noreferrer"&gt;articles&lt;/a&gt; that promote APISIX. They will be featured on the &lt;a href="https://medium.com/@ApacheAPISIX" rel="noopener noreferrer"&gt;APISIX blog&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Help with outreach and engagement.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While technical skills are most welcome, the Apache APISIX community also supports contributors with soft skills. This includes advocacy, marketing, and moderating our community channels. An excellent place to start is to familiarise yourself with &lt;a href="https://apisix.apache.org/docs/general/join" rel="noopener noreferrer"&gt;our community&lt;/a&gt;, where you can ask questions and provide answers for others. You can also promote the Apache APISIX community, projects, and content on your media platforms.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Help report or fix a bug.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apache APISIX projects are built on Lua, Go, and JavaScript as the core languages. Creating issues based on bugs you notice within our code base is an excellent way to contribute to the APISIX project. Several issues are yet to be assigned within the &lt;a href="https://github.com/apache/apisix/issues" rel="noopener noreferrer"&gt;Apache APISIX repository&lt;/a&gt;. You can also check out the other repositories under Apache APISIX. Remember to carefully read through the &lt;a href="https://github.com/apache/apisix/blob/master/CONTRIBUTING.md" rel="noopener noreferrer"&gt;contributing guide&lt;/a&gt; to contribute to the code base successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uijeujqi9147j9g09gp.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uijeujqi9147j9g09gp.jpeg" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Help out with the design.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Contributors with design insight (UI/UX), your skills are welcome too. APISIX has a &lt;a href="https://github.com/apache/apisix-dashboard" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt; designed to make it easy for users to operate Apache APISIX through a frontend interface. You can flex your design skills by improving on the existing designs. Design contributions are discussed among community members via the mailing list before implementation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Help test and assure quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a developer, you can check the Apache APISIX project for bugs and make lists of defects for our engineers to attend to. Your reviews will help us ensure quality throughout the delivery cycle of Apache APISIX.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Star Apache APISIX repositories&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Last but not least, remember to show appreciation to our developers and contributors by starring &lt;a href="https://github.com/apache/apisix/issues" rel="noopener noreferrer"&gt;our project&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi9v0ylkb9yoxbocjj2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi9v0ylkb9yoxbocjj2r.png" width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;As a first-time contributor, breaking into the community might be overwhelming at first, so feel free to ask as many questions as you have. If you have any concerns that require immediate attention, use the mailing list to communicate with the team. Always adhere to our code of conduct. And remember to bring a friend along.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Apache APISIX Ingress Controller Over the K8s Native Ingress</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Tue, 12 Apr 2022 14:10:37 +0000</pubDate>
      <link>https://dev.to/apisix/apache-apisix-ingress-controller-over-the-k8s-native-ingress-oi5</link>
      <guid>https://dev.to/apisix/apache-apisix-ingress-controller-over-the-k8s-native-ingress-oi5</guid>
      <description>&lt;h3&gt;
  
  
  What is Ingress?
&lt;/h3&gt;

&lt;p&gt;An ingress is an API object that allows access to your Kubernetes services from outside the Kubernetes cluster by defining the edge-entry traffic routing rules (e.g. load balancing, SSL termination, path-based routing, protocol). The ingress manages external access to the services within a cluster using public IP and port mapping.&lt;/p&gt;
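&lt;p&gt;As a minimal concrete sketch, such routing rules are declared in an Ingress object. The host, path, and the &lt;code&gt;api-svc&lt;/code&gt; Service name below are placeholders:&lt;/p&gt;

```yaml
# Minimal Ingress sketch: route example.com/api traffic to a Service
# named "api-svc" (a placeholder) on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80
```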

&lt;p&gt;The component responsible for carrying out this action is called the Ingress controller.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp1yr37pvhs5g3f89ln4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp1yr37pvhs5g3f89ln4.png" width="800" height="841"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Role of the K8s Ingress Controller in a Cluster
&lt;/h3&gt;

&lt;p&gt;In a Kubernetes application, pods run inside a cluster while a load balancer runs outside. The load balancer is responsible for accepting connections from the internet and routing them to an edge proxy located within your cluster. This edge proxy is in charge of routing traffic into your pods. You can route multiple back-end services through a single IP address using ingress.&lt;/p&gt;

&lt;p&gt;An ingress controller is a third-party edge proxy that implements ingress. It is responsible for reading the Ingress resource information and processing that data accordingly. The ingress controller does not replace the load balancer; instead, it adds a layer of routing and control behind the load balancer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  It aids in the consolidation of routing rules into a single location.&lt;/li&gt;
&lt;li&gt;  It is declared, created, and destroyed independently from the services you wish to expose.&lt;/li&gt;
&lt;li&gt;  It is configured using the Kubernetes API to deploy objects called “Ingress Resources.”&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Drawbacks
&lt;/h3&gt;

&lt;p&gt;Kubernetes provides several methods for exposing edge interfaces, including node port, load balancer, ingress, etc. Ingress is unquestionably a more cost-effective way to use reverse proxies, as it exposes only a limited number of public IPs. However, it comes with specific risks, like complexity, latency, and security, once implemented. These issues are frequently intertwined: if one is present, the others likely are too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complexity:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Microservice architectures are built around lightweight and simple designs, and this is a quality most organizations look for in their applications. However, K8s ingress does not stick to these principles: it is implemented mainly through complex scripts and functions, which adds complexity to the underlying engine it runs on and confuses whoever maintains the configuration. Kubernetes is already a complicated tool, and implementing other complex features on top of it might affect your application’s scalability and overall performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lack of support for stateful load balancing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is an operations- and maintenance-oriented system rather than a business-oriented one. As a result, complex load-balancing features such as session persistence are not supported, and support for them is unlikely to land in the short run.&lt;/p&gt;

&lt;p&gt;Google has already attempted to address this issue with its service mesh solution, &lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;Istio&lt;/a&gt;. The Istio architecture is elegant, but it compromises performance, which its Mixer v2 aims to remedy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic adjustment of weights:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is tricky to control traffic by percentage in Kubernetes, although this is one feature most organizations look for. However, after v1.8, Kubernetes began supporting &lt;a href="https://en.wikipedia.org/wiki/IP_Virtual_Server" rel="noopener noreferrer"&gt;IPVS&lt;/a&gt; as a Kube-proxy startup parameter or through Kube-router’s annotations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weak scalability:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Although developers created ingress to tackle edge traffic, the demand for managing edge traffic is equal to or greater than the demand for managing internal traffic.&lt;/p&gt;

&lt;p&gt;Business-level grayscale (canary) control, circuit breaking, flow control, and authentication are basic requirements commonly placed on ingress, yet native ingress offers limited extensibility in these areas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reload problem:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the architecture of Kubernetes native ingress, the ingress controller parses the YAML configuration file, transforms it into nginx.conf, and then triggers an Nginx reload to make the settings take effect.&lt;/p&gt;

&lt;p&gt;Routine operation and maintenance inevitably touches the ingress YAML configuration from time to time, and every change that takes effect triggers a reload. This is unacceptable, especially when edge traffic uses long-lived connections, where reloads are more likely to cause incidents.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Apache APISIX Ingress Controller?
&lt;/h3&gt;

&lt;p&gt;APISIX Ingress is another Ingress Controller implementation. APISIX Ingress differs from Kubernetes Ingress Nginx in that it uses Apache APISIX as the actual data plane for business traffic. Using Apache APISIX as an ingress implementation can help address the above drawbacks while also providing the community with an additional ingress controller option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfxcgbe2hfe8dr29b7sh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfxcgbe2hfe8dr29b7sh.png" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tackles the complexities:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Apache APISIX, we can write logic in plug-in code and expose a simple configuration interface, which makes the configuration convenient to maintain and keeps scripts from getting in the way of the people configuring it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Support for stateful load balancing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apache APISIX natively supports load-balancing features such as session persistence and reserves extension capability in the balancer stage. Since the project is designed to be extended, advanced load-balancing requirements can be built entirely on top of Apache APISIX.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic in nature:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apache APISIX abstracts the main objects, such as route, service, consumer, upstream, and plug-in, so operations like weight adjustment are naturally supported: changing a node’s weight is simply an update to its upstream object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ability to expand:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In terms of expansion capabilities, Apache APISIX supports plug-ins. In addition to the officially offered plug-ins, you can create custom plug-ins to satisfy your specific needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hot reload:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apache APISIX supports hot configuration: routes can be defined and modified at any time without triggering an Nginx reload. Whether you add, delete, or alter plug-ins, or update plug-in code on disk, you don’t need to restart the service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementing APISIX Ingress Controller in AISpeech
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/company/al-speech-ltd" rel="noopener noreferrer"&gt;AI Speech&lt;/a&gt; is a leading speech technology platform. It specializes in computer speech recognition, tone analysis, and dialogue management techniques.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhls2143w1rivbpp0wa8v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhls2143w1rivbpp0wa8v.png" alt="The timing diagram" width="800" height="699"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To implement the Apache APISIX ingress controller, AISpeech considered two challenges that required immediate attention.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  To synchronize the state of the Kubernetes cluster and Apache APISIX.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;a href="https://github.com/apache/apisix-ingress-controller" rel="noopener noreferrer"&gt;Apache APISIX ingress controller&lt;/a&gt; project addresses this by syncing Kubernetes pod information with the upstream in APISIX. Simultaneously, the primary and backup systems are constructed to manage their high-availability issues.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  To define the objects in Apache APISIX in Kubernetes Custom Resource Definitions(&lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="noopener noreferrer"&gt;CRD&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Apache APISIX ingress provides CRD support so that declarative configurations are easier to understand, while state checking ensures quick access to the synchronized state of those configurations. Because Kubernetes declares the cluster state using YAML, CRDs must be defined for the Apache APISIX objects (route/service/upstream/plug-in) to integrate into Kubernetes.&lt;/p&gt;
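&lt;p&gt;For illustration, a route declared through such a CRD might look like the sketch below. The resource and service names are placeholders, and the exact schema and &lt;code&gt;apiVersion&lt;/code&gt; depend on the controller version:&lt;/p&gt;

```yaml
# Sketch of an ApisixRoute custom resource ("httpbin" and the route
# name are placeholders; check the controller docs for your version).
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: httpbin-route
spec:
  http:
    - name: rule1
      match:
        hosts:
          - example.com
        paths:
          - /get
      backends:
        - serviceName: httpbin
          servicePort: 80
```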

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;There are several popular Ingress Controllers for Kubernetes, but choosing the right one for your application is always tricky. As your application scales, you need to consider an ingress controller that aligns with your Kubernetes application in testing and production mode. The &lt;a href="https://github.com/apache/apisix-ingress-controller" rel="noopener noreferrer"&gt;Apache APISIX ingress controller&lt;/a&gt; is an open-source project that answers all these needs.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>gateway</category>
      <category>ingress</category>
      <category>api</category>
    </item>
    <item>
      <title>HelloTalk: Leveraging Apache APISIX and OpenResty</title>
      <dc:creator>Anita Ihuman  🌼</dc:creator>
      <pubDate>Tue, 05 Apr 2022 09:48:03 +0000</pubDate>
      <link>https://dev.to/anita_ihuman/hellotalk-leveraging-apache-apisix-and-openresty-1alo</link>
      <guid>https://dev.to/anita_ihuman/hellotalk-leveraging-apache-apisix-and-openresty-1alo</guid>
      <description>&lt;p&gt;When it comes to offering reliable and high-performance services worldwide, most communication systems encounter several cross-border problems. This article uncovers how HelloTalk overcomes these challenges, emerging as the world’s largest social foreign language learning platform using OpenResty and Apache APISIX.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is HelloTalk?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.hellotalk.com/?lang=en" rel="noopener noreferrer"&gt;HelloTalk&lt;/a&gt; is a language learning social platform where diverse speakers from different cultures teach and learn languages globally. There are over 160 languages you can learn on HelloTalk, including Arabic, Cantonese, English, French, Vietnamese, German, Italian, Japanese, Korean, Mandarin Chinese, Portuguese, Spanish, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk2a07ohytlzeg913nrl.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk2a07ohytlzeg913nrl.jpeg" alt="HelloTalk" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conversation, mistake correction, and translation are all included in the app, and users can convert voice to text while chatting. HelloTalk supports voice-to-text and translation for over 100 languages, and it prioritizes both local and international communities of users by delivering high-quality services.&lt;/p&gt;

&lt;p&gt;Users can learn other native languages and cultures with the community’s help. It is also a fantastic way to meet new people! HelloTalk has over 16 million users who are free to connect with learning partners for the language they wish to learn. Instead of conventional classes, users learn from a community of learners. Furthermore, users can also practice what they’ve learned on smartphones or tablets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges of Internationalization
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  HelloTalk’s users are widely spread, so it is necessary to find a solution to the problem of dispersed user distribution zones.&lt;/li&gt;
&lt;li&gt;  Around 20% of HelloTalk customers in China are encountering firewall issues.&lt;/li&gt;
&lt;li&gt;  The language environment in other nations is as complex as the network environment, making language adaptation a problematic task.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2vx08eqdo3ut1pexvpo.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2vx08eqdo3ut1pexvpo.jpeg" alt="Dispersed users" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As illustrated in the diagram above, HelloTalk users are distributed worldwide. The distribution of HelloTalk users in Asia-Pacific, like South Korea, Japan, the Yangtze River Delta, and China’s Pearl River Delta, is significantly concentrated, making it easy to manage. However, delivering reliable service is difficult in other areas where customers are extensively dispersed, such as Europe, Africa, and the Middle East.&lt;/p&gt;

&lt;h3&gt;
  
  
  How To Improve User’s Global Access Quality
&lt;/h3&gt;

&lt;p&gt;When thinking about improving the global user experience, HelloTalk considers two issues: cost and genuine quality of service. International customers are accelerated directly back to Alibaba in Hong Kong. Although direct acceleration risks degrading the client’s network quality due to geographical issues, HelloTalk implemented failover measures to improve the user experience.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Access Line Control and Traffic Management: The stability brought by the dedicated line network, such as Europe to Hong Kong, delay: 244 ms -&amp;gt; 150 ms;&lt;/li&gt;
&lt;li&gt;  Dynamic upstream control (&lt;a href="https://openresty.org/en/lua-resty-upstream-healthcheck-library.html" rel="noopener noreferrer"&gt;lua-resty-upstream-healthcheck&lt;/a&gt;): flexible switching between several service provider lines is considered to ensure service reliability.&lt;/li&gt;
&lt;li&gt;  Part of the logic can be processed directly at the edge, serverless (the principle is reliant on pcall + loadstring implementation), by transforming serverless into &lt;a href="https://apisix.apache.org/" rel="noopener noreferrer"&gt;Apache APISIX + ETCD&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  Access Node and Quality Control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because HelloTalk’s access nodes are so dispersed, traffic from the United States can’t go directly to Hong Kong; instead, the established mechanism transfers it through Germany before reaching Hong Kong. So, to ensure the user experience is not affected, HelloTalk relies on specific failover measures that let users move between multiple service providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choice of 7-layer and 4-layer Acceleration:&lt;/strong&gt; Many service providers will provide 7-layer acceleration and 4-layer acceleration, but some problems need to be solved.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Layer 4 acceleration:&lt;/strong&gt; The SSL handshake takes too long and fails easily, and the client’s IP cannot be obtained, which makes node quality statistics inconvenient.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Layer 7 acceleration:&lt;/strong&gt; IM (Instant Message) services cannot guarantee the reliable arrival of messages that require a long-lived connection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;A management solution for global access in a multi-cloud environment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Layer 7 acceleration with support for WebSockets (cloud service + self-built).&lt;/li&gt;
&lt;li&gt;  Self-built low-speed VPC + dedicated-line channel. (Considering cost-effectiveness: IM itself does not carry much traffic, as it only sends notification messages.)&lt;/li&gt;
&lt;li&gt;  Mixed sending and receiving of messages over long and short connections: WebSocket + long polling + HTTPDNS + a built-in IP failover mechanism.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why OpenResty?
&lt;/h3&gt;

&lt;p&gt;OpenResty is a web platform for building scalable web applications, web services, and dynamic web gateways. Its architecture bundles the standard Nginx core with LuaJIT and a variety of third-party Nginx modules, turning Nginx into a web app server capable of handling a vast number of requests.&lt;/p&gt;

&lt;p&gt;HelloTalk’s IM services were originally written in C++ on a high-performance network framework from a large tech company. All protocols were drawn up internally, so HTTP was rarely used; maintaining this didn’t seem cost-efficient for a smaller company.&lt;/p&gt;

&lt;p&gt;If an internally written service was to be made available to the public, a separate proxy server had to be created and every newly added command word adapted, which was time-consuming.&lt;/p&gt;

&lt;p&gt;Hence, OpenResty was adopted to act as a proxy and convert the protocol directly for the internal services, which in the end reduced costs. Making the services available to the public also necessitated a &lt;a href="https://www.cloudflare.com/en-gb/learning/ddos/glossary/web-application-firewall-waf/" rel="noopener noreferrer"&gt;WAF&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Previously, HelloTalk’s APIs were built with PHP, and framework issues left several loopholes that drew countless attacks from hackers. The most common technique was to post various PHP keywords or embed them in URL parameters.&lt;/p&gt;

&lt;p&gt;HelloTalk tackled this challenge by adding tiny bits of code (based on regular expressions) in OpenResty; it turned out that adding this WAF (web application firewall) function caused no noticeable performance loss.&lt;/p&gt;
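
&lt;p&gt;As a rough sketch of that idea (the pattern list, location, and upstream name below are illustrative, not HelloTalk’s actual rules), such a check can live in an &lt;code&gt;access_by_lua_block&lt;/code&gt; so that requests carrying PHP keywords never reach the backend:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;location /api/ {
    access_by_lua_block {
        -- Reject requests whose URI or query string contains
        -- common PHP attack keywords (illustrative pattern only).
        local uri = ngx.var.request_uri
        local from = ngx.re.find(uri,
            [[(?:eval\(|base64_decode|system\(|phpinfo)]], "ijo")
        if from then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    proxy_pass http://php_backend;   # hypothetical upstream name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because the regex runs inside Nginx’s access phase with the &lt;code&gt;o&lt;/code&gt; (compile-once) flag, the extra cost per request is tiny, which matches the observation that the WAF added no noticeable performance loss.&lt;/p&gt;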

&lt;p&gt;Integrating OpenResty helped to tackle some of these challenges with the following benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Quickly implement a Web IM: most mainstream IM (instant messaging) apps like WhatsApp and WeChat offer a Web IM, so the WebIM version of HelloTalk was implemented on the server side by simply adding a layer of OpenResty in the middle for WebSocket protocol conversion, relying on OpenResty’s compatibility with and modification of the protocols.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfz6brcapz9c8brydtd1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfz6brcapz9c8brydtd1.jpeg" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
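
&lt;p&gt;A plain Nginx/OpenResty layer can perform this kind of conversion by forwarding the WebSocket upgrade handshake to the internal IM service. A minimal sketch (the upstream address, path, and timeout are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical internal IM backend
upstream im_backend {
    server 10.0.0.10:8000;
}

server {
    location /webim {
        # Pass the WebSocket upgrade handshake through to the backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;   # keep long-lived IM connections open
        proxy_pass http://im_backend;
    }
}
&lt;/code&gt;&lt;/pre&gt;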

&lt;ul&gt;
&lt;li&gt;  By putting PHP function names in the WAF, all such attack attempts are blocked before they can reach PHP.&lt;/li&gt;
&lt;li&gt;  With OpenResty, rate limiting became possible: the frequency of API calls/messages is controlled directly with &lt;em&gt;resty.limit.req&lt;/em&gt;.
&lt;/li&gt;
&lt;/ul&gt;
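
&lt;p&gt;A minimal sketch of such rate limiting, following the pattern documented for the lua-resty-limit-traffic library (the shared-dict name, rate, burst, and upstream are placeholder values, not HelloTalk’s actual configuration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# In the http block: shared memory zone for the limiter state
lua_shared_dict my_limit_req_store 10m;

server {
    location /api/message {
        access_by_lua_block {
            local limit_req = require "resty.limit.req"
            -- Allow 200 requests/sec per client IP, with a burst of 100
            local lim, err = limit_req.new("my_limit_req_store", 200, 100)
            if not lim then
                return ngx.exit(500)
            end
            local key = ngx.var.binary_remote_addr
            local delay, err = lim:incoming(key, true)
            if not delay then
                if err == "rejected" then
                    return ngx.exit(429)   -- over the limit
                end
                return ngx.exit(500)
            end
            if delay &amp;gt; 0 then
                ngx.sleep(delay)           -- throttle bursty clients
            end
        }
        proxy_pass http://im_backend;      # hypothetical upstream
    }
}
&lt;/code&gt;&lt;/pre&gt;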

&lt;h3&gt;
  
  
  Integrating Apache APISIX
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What is Apache APISIX
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://apisix.apache.org/" rel="noopener noreferrer"&gt;Apache APISIX&lt;/a&gt; is a dynamic, real-time, high-performance API gateway. It offers rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, and observability. It’s built atop the &lt;a href="https://nginx.org/" rel="noopener noreferrer"&gt;Nginx reverse proxy server&lt;/a&gt; and the key-value store &lt;a href="https://etcd.io/" rel="noopener noreferrer"&gt;etcd&lt;/a&gt;, providing a lightweight gateway.&lt;/p&gt;

&lt;p&gt;In the early days, to achieve dynamic modification, HelloTalk altered nginx.conf directly, acting as an API gateway would, to get the best performance.&lt;/p&gt;

&lt;p&gt;This was a workable way to achieve high performance with dynamic updates of SSL certificates, locations, and upstreams. Similar to the K8s Ingress update mechanism, the configuration was generated through a template: nginx_template.conf + JSON -&amp;gt; PHP -&amp;gt; nginx.conf -&amp;gt; PHP-CLI -&amp;gt; reload. The challenge with this method was that many individuals forgot the priority rules governing location matching, often resulting in mistakes.&lt;/p&gt;

&lt;p&gt;The adoption of Apache APISIX as the API gateway for HelloTalk reduced these challenges significantly with these benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  APISIX is built on the Nginx library and etcd, saving maintenance costs.&lt;/li&gt;
&lt;li&gt;  The code is straightforward to comprehend and approachable regardless of skill level.&lt;/li&gt;
&lt;li&gt;  It offers rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, and observability.&lt;/li&gt;
&lt;li&gt;  It is an actively maintained project, and contributors/maintainers are prompt to offer support via the mailing lists.&lt;/li&gt;
&lt;li&gt;  APISIX is relatively simple and easy to configure; it does not rely on an RDBMS (Relational Database Management System), which would be cumbersome and incur additional maintenance costs.&lt;/li&gt;
&lt;li&gt;  Apache APISIX uses &lt;a href="https://github.com/starwing/lua-protobuf" rel="noopener noreferrer"&gt;lua-protobuf&lt;/a&gt;, so HelloTalk switched to the lua-protobuf library as well, which conveniently converts a PB object directly into JSON.&lt;/li&gt;
&lt;/ul&gt;
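
&lt;p&gt;To give a feel for how such dynamic configuration looks, a route combining an upstream with the &lt;code&gt;limit-req&lt;/code&gt; plugin is expressed as a JSON object submitted to the APISIX Admin API; the URI, numbers, and upstream node below are placeholders, not HelloTalk’s real values:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "uri": "/api/*",
  "plugins": {
    "limit-req": { "rate": 200, "burst": 100, "key": "remote_addr" }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": { "10.0.0.10:8000": 1 }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because this lives in etcd rather than in nginx.conf, routes can be added or changed at runtime without reloads and without anyone needing to remember Nginx’s location matching priority rules.&lt;/p&gt;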

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This post explains how HelloTalk previously devised alternative ways of achieving reliable and high-performance services globally. I also discussed the challenges with these configurations and the success story of adopting Apache APISIX and OpenResty.&lt;/p&gt;

&lt;p&gt;Consider going through the &lt;a href="https://apisix.apache.org/docs/apisix/getting-started" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for an in-depth understanding of Apache APISIX and its features.&lt;/p&gt;

&lt;p&gt;You can find the original recording from HelloTalk on this topic &lt;a href="https://api7.ai/usercase/hellotalk" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;em&gt;Originally published on &lt;a href="https://medium.com/@Anita-ihuman/hellotalk-leveraging-apache-apisix-and-openresty-f2b9c611ef06" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; on April 5th, 2022.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
