DEV Community

Apollo


How I Built a Full AI Coding Assistant in One Weekend


Last weekend, I decided to build a fully functional AI coding assistant using OpenAI's GPT-4. What started as a fun experiment turned into a deeply educational journey into prompt engineering, system prompts, and context window strategies. Here’s how I did it, the lessons I learned, and some practical examples to help you build your own.


The Goal

I wanted an AI coding assistant that could:

  1. Understand natural language queries about code.
  2. Generate, debug, and optimize code across multiple languages.
  3. Maintain context across interactions to handle complex tasks.
  4. Provide explanations and reasoning for its outputs.

To achieve this, I broke the problem into three core components: prompt engineering, system prompts, and context window management.


Step 1: Prompt Engineering Patterns

Prompt engineering is the art of crafting input queries to get the most relevant and accurate responses from an AI model. I experimented with several patterns and settled on a few that worked exceptionally well.

Chain-of-Thought Prompting

This technique encourages the AI to break down complex problems into smaller, logical steps. For example, instead of asking, "Write a Python function to reverse a string," I asked:

"Let’s break this down step by step. First, define a function that takes a string as input. Second, reverse the string using slicing. Finally, return the reversed string."

The AI produced this output:

def reverse_string(input_string):
    return input_string[::-1]

This approach ensures the AI understands the problem structure and produces more reliable outputs.

Instructional Prompting

I used instructional prompts to guide the AI’s behavior explicitly. For example, I included phrases like:

"You are a senior Python developer with expertise in optimization. Write clean, efficient code and explain each line."

This resulted in outputs like:

def factorial(n):
    # Base case: factorial of 0 or 1 is 1
    if n in {0, 1}:
        return 1
    # Recursive case: multiply n by factorial of n-1
    return n * factorial(n - 1)

The AI not only generated the code but also added comments and explanations, making it easier to understand.

Error Handling Prompts

To make the assistant robust, I prompted it to handle edge cases and errors explicitly. For instance:

"Write a function to divide two numbers. Handle the case where the denominator is zero by raising an appropriate error."

The AI responded with:

def divide(a, b):
    if b == 0:
        raise ValueError("Denominator cannot be zero.")
    return a / b

Step 2: Crafting System Prompts

System prompts define the AI’s behavior and personality. I used this to turn GPT-4 into a professional coding assistant. Here’s an example of a system prompt I used:

"You are CodeMate, an expert AI coding assistant. Your role is to help developers write, debug, and optimize code. Always respond in a professional tone, provide detailed explanations, and ensure code is clean and efficient. If a request is ambiguous, ask clarifying questions before proceeding."

This system prompt set the tone for all interactions and ensured the AI stayed focused on its role.
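In practice, the system prompt is simply the first message in every request. Here is a minimal sketch of how I wired it in; the `build_messages` helper is my own illustrative code, not part of any SDK, but the `role`/`content` message shape is the standard chat-completions format:

```python
SYSTEM_PROMPT = (
    "You are CodeMate, an expert AI coding assistant. Your role is to help "
    "developers write, debug, and optimize code. Always respond in a "
    "professional tone, provide detailed explanations, and ensure code is "
    "clean and efficient. If a request is ambiguous, ask clarifying "
    "questions before proceeding."
)

def build_messages(history, user_prompt):
    """Prepend the system prompt to the running conversation history."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + list(history)
        + [{"role": "user", "content": user_prompt}]
    )

messages = build_messages([], "Write a Python function to reverse a string.")
# messages[0] is always the system prompt, so every request stays in character.
```

The resulting list is what gets sent to the chat completions endpoint on each turn, which is why the assistant never drifts out of its role.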


Step 3: Context Window Strategy

The context window is the AI's working memory: the amount of text it can consider in a single conversation. GPT-4's context window is 8,192 tokens, roughly 6,000 words. To make the most of it, I implemented several strategies:

Summarization

Instead of letting the context grow indefinitely, I periodically summarized the conversation. For example, after debugging a piece of code, I asked the AI:

"Summarize the key takeaways from this debugging session."

This reduced the token count and kept the context relevant.
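A rough sketch of this compression step, assuming a ~4-characters-per-token estimate (a tokenizer like tiktoken would give exact counts, but the heuristic is enough for budgeting). The placeholder summary string stands in for the model's actual answer to the "summarize the key takeaways" prompt:

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text.
    return len(text) // 4

def compress_history(history, budget=6000, keep_recent=4):
    """Replace older messages with a summary once the history outgrows the budget."""
    total = sum(estimate_tokens(m["content"]) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    older, recent = history[:-keep_recent], history[-keep_recent:]
    # In the real assistant, this summary comes from asking the model to
    # summarize the older turns; here it is a simple placeholder.
    summary = "Summary of earlier conversation: " + " / ".join(
        m["content"][:40] for m in older
    )
    return [{"role": "system", "content": summary}] + recent
```

Running this between turns keeps the most recent exchanges verbatim while collapsing everything older into a single summary message.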

Chunking

For larger tasks, I broke them into smaller chunks. Instead of asking the AI to write an entire application in one go, I structured the conversation like this:

  1. "Let’s design the database schema for a blog application."
  2. "Now, write the API endpoints for creating and retrieving posts."
  3. "Finally, implement the frontend interface to display posts."

This approach ensured the AI stayed within the context window while handling complex tasks.
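The chunked conversation above can be sketched as a simple loop. The `ask_model` parameter is a stand-in for whatever client call you use; the lambda below is a stub for demonstration only:

```python
CHUNKS = [
    "Let's design the database schema for a blog application.",
    "Now, write the API endpoints for creating and retrieving posts.",
    "Finally, implement the frontend interface to display posts.",
]

def run_chunks(chunks, ask_model):
    """Feed each sub-task to the model in order, carrying the history forward."""
    history = []
    for prompt in chunks:
        history.append({"role": "user", "content": prompt})
        reply = ask_model(history)
        history.append({"role": "assistant", "content": reply})
    return history

# Stub in place of a real chat API call.
history = run_chunks(CHUNKS, lambda h: f"(response to: {h[-1]['content'][:30]})")
```

Each sub-task sees the full history of the earlier ones, so the schema informs the endpoints and the endpoints inform the frontend, without ever sending one giant prompt.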

Context Reset

When the conversation became too cluttered, I reset the context by summarizing previous interactions and starting fresh. For example:

"Let’s summarize what we’ve done so far. We’ve built a Python script to parse CSV files and calculate averages. Now, let’s move on to optimizing the script for large files."


Lessons Learned

  1. Clarity is Key: Well-crafted prompts yield better results. Be specific, provide context, and guide the AI step by step.
  2. System Prompts Matter: A good system prompt sets the tone and ensures the AI stays on task.
  3. Context Management is Crucial: Without proper context management, conversations can become cluttered and irrelevant. Summarization and chunking are your best friends.
  4. Iterate and Experiment: Building the assistant took multiple iterations. Don’t be afraid to tweak prompts and strategies.

Practical Example: Building a Simple Web Scraper

Here’s how I used the assistant to build a Python web scraper:

My Prompt:

"Write a Python script to scrape the titles of posts from a blog website. Use the requests and BeautifulSoup libraries."

AI Response:

import requests
from bs4 import BeautifulSoup

def scrape_titles(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    titles = soup.find_all('h2', class_='post-title')
    return [title.text for title in titles]

print(scrape_titles('https://example-blog.com'))

I then asked the AI to add error handling and optimize the code:

My Prompt:

"Add error handling for invalid URLs and optimize the code to be more efficient."

AI Response:

import requests
from bs4 import BeautifulSoup

def scrape_titles(url):
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()  # Raise an error for bad status codes
        soup = BeautifulSoup(response.text, 'html.parser')
        titles = soup.find_all('h2', class_='post-title')
        return [title.text for title in titles]
    except requests.RequestException as e:
        print(f"Error fetching URL: {e}")
        return []

print(scrape_titles('https://example-blog.com'))

Conclusion

Building an AI coding assistant in a weekend was both challenging and rewarding. By mastering prompt engineering, crafting effective system prompts, and managing the context window, I created a tool that’s now an integral part of my development workflow. Whether you’re debugging, optimizing, or brainstorming, an AI assistant can significantly boost your productivity. Give it a try—you might be surprised at how much you can accomplish in just a weekend.


⚡ Want the Full Prompt Library?

I compiled all of these patterns (plus 40+ more) into the Senior React Developer AI Cookbook — $19, instant download. Covers Server Actions, hydration debugging, component architecture, and real production prompts.

Browse all developer tools at apolloagmanager.github.io/apollo-ai-store
