
Shubham Gupta

Posted on • Originally published at designdoc.tech

Your First AI App Will Be Spaghetti (And That's Okay)

A Story in Three Acts 🎭

Act 1: You discover the OpenAI API. You're drunk with power. "I can build Jarvis!" you scream into the void. You build a chatbot in 20 lines.

Act 2: Your PM asks for "just a few more features." You add them. Then more. Then you add "PDF support" which is just regex hoping for the best.

Act 3: You're staring at 2,000 lines of spaghetti, the context window is overflowing, the AI is hallucinating company policies that involve free pizza, and you've forgotten what happiness feels like.

(Image: the "This is Fine" dog meme. A live look at your server logs.)

This is the journey of every developer who touches LLMs. I'm here to tell you: it's not your fault, and there's a way out.

The Innocent Beginning

Here's how it starts. Twenty lines of beautiful, naive code:

// The honeymoon phase
import OpenAI from 'openai';

const openai = new OpenAI();

async function askAI(question: string) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' }, // Minimalist art
      { role: 'user', content: question }
    ]
  });
  return response.choices[0].message.content;
}

// It works! Ship it!
console.log(await askAI("What's the weather like?"));

You show your PM. They're impressed. You're a genius. Life is good. Ideally, you should stop here and retire.

The Feature Creep 🧟

Then the requests come:

  • "Can it remember that I like cats?"
  • "Can it access our customer database (password: hunter2)?"
  • "Can it book meetings?"
  • "Can it fix my marriage?"

And you, the naive optimist, say "Sure!"

// Three weeks later... (Viewer discretion advised)
async function askAI(question: string, userId: string) {
  // Get conversation history (Loading... loading...)
  const history = await db.getConversationHistory(userId);

  // Get user context (All of it. Just in case.)
  const user = await db.getUser(userId);
  const recentOrders = await db.getRecentOrders(userId); 
  const tickets = await supportSystem.getOpenTickets(userId); // Why do we need tickets? Who knows!

  // Build the mega-prompt from hell
  const systemPrompt = `
    You are a helpful assistant for ${COMPANY_NAME}.
    Current user: ${user.name} (${user.tier} tier)
    Recent orders: ${JSON.stringify(recentOrders)}
    Open tickets: ${JSON.stringify(tickets)}

    Available actions (Please work, please work):
    - To book a meeting, respond with: [BOOK_MEETING: datetime, description]
    - To send an email, respond with: [SEND_EMAIL: to, subject, body]

    Brand voice guidelines:
    ${BRAND_VOICE_DOCUMENT} // <- Goodbye, token budget

    Remember: Never mention competitors. Always be helpful. Be funny but not too funny.
  `;

  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: systemPrompt },
      ...history,
      { role: 'user', content: question }
    ]
  });
  const content = response.choices[0].message.content ?? '';

  // Parse the response for actions using reliable technology: REGEX
  if (content.includes('[BOOK_MEETING:')) {
    // 60% of the time, it works every time
    const match = content.match(/\[BOOK_MEETING: (.*?), (.*?)\]/);
    if (match) {
      // ... call the calendar API and hope for the best ...
    }
  }

  return content;
}

The Problems Multiply

This code "works," but you're now dealing with:

1. Context Window Explosion 💥

Your system prompt is 3,000 tokens. User history is 2,000. Customer data is 1,000. You're spending $5 per question to ask "Hi".
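The arithmetic is easy to sketch. This is a back-of-envelope estimate: the ~4-characters-per-token ratio is a rough heuristic (real tokenizers differ), and the per-1K-token price is a placeholder, not any model's actual rate.

```typescript
// Rough heuristic: ~4 characters per token for English text.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Cost of the prompt at a hypothetical price per 1K prompt tokens.
const promptCostUSD = (tokens: number, pricePer1K: number): number =>
  (tokens / 1000) * pricePer1K;

// System prompt + history + customer data, before the user types a word:
const baseline = 3000 + 2000 + 1000; // 6,000 tokens per request

console.log(promptCostUSD(baseline, 0.03).toFixed(2)); // "0.18" per "Hi"
```

Multiply that by every message in every conversation and the invoice stops being funny.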

2. Fragile Action Parsing 🍝

You're using regex to parse natural language. The model writes [BOOK MEETING] without the underscore, the match silently returns null, and the meeting never gets booked.
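The failure takes one console session to reproduce (the timestamps here are made up):

```typescript
// The regex demands an underscore and exact spacing...
const ACTION_RE = /\[BOOK_MEETING: (.*?), (.*?)\]/;

// ...so the format the prompt asked for parses fine:
const good = '[BOOK_MEETING: 2024-06-01T10:00, Standup]'.match(ACTION_RE);
console.log(good?.[1]); // "2024-06-01T10:00"

// ...but an equally plausible model output, minus the underscore, fails:
const bad = '[BOOK MEETING: 2024-06-01T10:00, Standup]'.match(ACTION_RE);
console.log(bad); // null: no meeting, no error, no one notices until Monday
```

The worst part is the silence: nothing throws, so the bug reports arrive as "the AI said it booked my meeting but didn't."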

3. Hallucinated Data 👻

The model confidently tells users about orders that don't exist because it's completing the pattern. "Your order of 500 Rubber Ducks is on the way!" (User ordered 1 pen).

The Way Out: Structured Sanity

Here's the good news: these problems have solutions. Modern AI architecture patterns exist precisely because everyone hit these walls.

The key principles:

  1. Structured Outputs → JSON schemas, not free-form text.
  2. Tool/Function Calling → Give the model APIs, don't make it guess.
  3. Context Management → Load context on-demand (RAG).
  4. Separation of Concerns → Enter MCP.
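To taste principles 1 and 2, here's a minimal sketch of what tool calling buys you. The `book_meeting` schema and the `ToolCall` shape are illustrative, modeled on OpenAI-style chat APIs rather than any specific SDK:

```typescript
// Shape of a tool call in OpenAI-style chat responses: the model returns
// a function name plus arguments as a JSON string conforming to the
// schema you registered. No regex required.
interface ToolCall {
  function: { name: string; arguments: string };
}

// The schema the model must fill in. Malformed output becomes the
// API's problem to prevent, not yours to parse.
const bookMeetingTool = {
  name: 'book_meeting',
  description: 'Book a meeting on the user calendar',
  parameters: {
    type: 'object',
    properties: {
      datetime: { type: 'string', description: 'ISO 8601 start time' },
      description: { type: 'string' },
    },
    required: ['datetime', 'description'],
  },
};

function parseToolCall(call: ToolCall) {
  // Arguments arrive as structured JSON, so parsing is one honest line.
  return { action: call.function.name, args: JSON.parse(call.function.arguments) };
}
```

Compare that one-line parse to the regex gauntlet above: the format contract now lives in the schema, where the model provider enforces it for you.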

A Glimpse of the Clean Version 🛁

Here's what the same feature set looks like with proper architecture:

// With MCP-style architecture
const agent = new Agent({
  model: 'gpt-4',
  tools: [
    bookingTool,      // Handles its own validation
    emailTool,        // Handles its own auth
  ],
  context: dynamicContextLoader(userId),  // Loads what's needed
});

const response = await agent.run(question);
// That's it. Go home.
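`Agent` and `dynamicContextLoader` above are a sketch, not a real library. For the curious, here's one way such a loader might look: each data source becomes a lazy fetcher bound to the user, so nothing hits the database unless the agent actually needs that key. The `sources` shape is my own invention for illustration.

```typescript
type Fetcher = () => Promise<string>;

// Wraps each data source in a lazy, user-bound fetcher. The mega-prompt
// loaded everything "just in case"; this loads nothing until asked.
function dynamicContextLoader(
  userId: string,
  sources: Record<string, (id: string) => Promise<string>>,
): Record<string, Fetcher> {
  return Object.fromEntries(
    Object.entries(sources).map(([key, fetch]) => [key, () => fetch(userId)]),
  );
}

// Usage with stub sources standing in for real lookups:
const context = dynamicContextLoader('user-42', {
  recentOrders: async (id) => `orders for ${id}`,
  openTickets: async (id) => `tickets for ${id}`,
});

// Only fetched if the conversation actually needs it:
// await context.recentOrders() → "orders for user-42"
```

The token budget stops bleeding because "available" context is no longer the same thing as "loaded" context.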

Next up: "MCP: The Secret Sauce (That Isn't Ranch) for AI Apps" → where we finally learn the architecture that fixes all of this.
