DEV Community

Devraj Singh

"I Added AI to My Project in 2 Hours — Here's the Exact Code"

"Every developer is talking about AI. Most are scared to actually build with it. Here's the exact code that removes that fear — copy, paste, ship."

Okay I need to tell you something. 👇

6 months ago I thought building AI features was for "senior developers."

Like, you needed to understand machine learning, neural networks, matrix math, GPU clusters — all that stuff. 🤯

Then one evening, out of frustration, I just... tried.

2 hours later I had a working AI feature in my project. 😳

Deployed. Live. Actually impressive in interviews.

The code wasn't complicated. The concepts weren't scary. I had just been intimidated by something that turned out to be — honestly — easier than building a decent form with validation. 😂

This post is everything I learned in those 2 hours. The exact code. The exact mistakes I made. The exact moment it clicked.

No ML degree required. No GPU needed. Just JavaScript and a free API key. 🚀

Let's go. 👇


🤔 What "Adding AI" Actually Means (It's Not What You Think)

First — let's kill the biggest myth. 💀

You are NOT training a model. You are NOT doing machine learning. You are NOT touching neural networks.

What you ARE doing — calling an API. That's it. 🎯

What people think "AI development" is:
😱 Training massive models on GPUs
😱 Understanding backpropagation
😱 Writing Python ML code
😱 Needing a PhD

What it actually is for 99% of dev work:
✅ fetch() to an AI API endpoint
✅ Send text in, get text back
✅ Same as calling a weather API
✅ Learnable in 2 hours 😄

The AI companies (OpenAI, Anthropic, Google) have already trained the models. They expose them as APIs. You just call the API. That's the whole thing. 🎉
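If "it's just an API call" sounds too good to be true, here's what the SDK is doing under the hood — a plain HTTP POST. A minimal sketch of the raw request (the endpoint and body shape follow OpenAI's REST API; the helper name is mine):

```javascript
// Build the raw request the SDK would send for you.
// No ML, no GPUs — just a URL, headers, and a JSON body.
function buildChatRequest(prompt, apiKey) {
  return {
    url: 'https://api.openai.com/v1/chat/completions',
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`, // same auth pattern as any REST API
      },
      body: JSON.stringify({
        model: 'gpt-4o-mini',
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  }
}

// Usage (server-side only — see the API key warning below):
// const { url, options } = buildChatRequest('Hello!', process.env.OPENAI_API_KEY)
// const res = await fetch(url, options)
```

Text in, text out. Same shape as calling a weather API. 🎯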

Now let's build. 👇


🛠️ Setup — 10 Minutes

Step 1: Get your API key 🔑

Go to platform.openai.com → Sign up → API Keys → Create new key

New accounts typically come with free trial credits — enough to build and demo. You won't pay anything while learning. 🆓

Step 2: Install the SDK

npm install openai
# that's it. one package. 📦

Step 3: Set up your environment

# .env.local (Next.js) or .env (React)
OPENAI_API_KEY=sk-your-key-here

# ⚠️ CRITICAL: Add to .gitignore immediately
# Never commit API keys to GitHub — ever 🔒
# .gitignore — make sure this is there
.env.local
.env
.env.production

💡 Pro tip: Create a separate .env.example file with fake values — shows other devs what variables they need without exposing your real key. Professional habit. 🧑‍💻
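Something like this (variable name matches the setup above, value is fake):

```bash
# .env.example — safe to commit, shows teammates what they need
# Copy to .env.local and fill in your real key
OPENAI_API_KEY=sk-your-key-goes-here
```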

That's the setup. 10 minutes. Now let's build something real. 👇


🔥 Project 1: AI Text Explainer (30 Minutes)

What it does: User pastes any confusing text — documentation, research paper, legal jargon — AI explains it simply. 📚

This was my first AI feature. Still one of the most useful. Let's build it step by step.

Step 1 — The API Route (Next.js)

// app/api/explain/route.js

import OpenAI from 'openai'

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY  // server-side only 🔒
})

export async function POST(request) {
  try {
    const { text, level } = await request.json()

    // Basic validation — always do this ✅
    if (!text || text.trim().length === 0) {
      return Response.json(
        { error: 'No text provided' }, 
        { status: 400 }
      )
    }

    if (text.length > 5000) {
      return Response.json(
        { error: 'Text too long. Max 5000 characters.' },
        { status: 400 }
      )
    }

    // The AI call 🤖
    const completion = await openai.chat.completions.create({
      model: 'gpt-4o-mini', // cheaper, still great for this task 💰
      messages: [
        {
          role: 'system',
          content: `You are an expert at explaining complex things simply.
                    Explain at this level: ${level || 'beginner'}.
                    Use simple words. Use examples. Be concise.
                    Format with clear paragraphs. No bullet points.`
        },
        {
          role: 'user',
          content: `Explain this simply:\n\n${text}`
        }
      ],
      max_tokens: 500,
      temperature: 0.7  // 0 = robotic, 1 = creative, 0.7 = balanced 🎚️
    })

    const explanation = completion.choices[0].message.content

    return Response.json({ explanation })

  } catch (error) {
    console.error('AI API error:', error)
    return Response.json(
      { error: 'Something went wrong. Try again.' },
      { status: 500 }
    )
  }
}

Step 2 — The Frontend Component

// components/TextExplainer.tsx
'use client'

import { useState } from 'react'

export default function TextExplainer() {
  const [input, setInput]           = useState('')
  const [explanation, setExplanation] = useState('')
  const [loading, setLoading]       = useState(false)
  const [error, setError]           = useState('')

  const handleExplain = async () => {
    if (!input.trim()) return

    setLoading(true)
    setError('')
    setExplanation('')

    try {
      const res = await fetch('/api/explain', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text: input, level: 'beginner' })
      })

      const data = await res.json()

      if (!res.ok) {
        setError(data.error || 'Something went wrong')
        return
      }

      setExplanation(data.explanation)

    } catch (err) {
      setError('Network error. Check your connection.')
    } finally {
      setLoading(false) // always runs — success or fail ✅
    }
  }

  return (
    <div className="max-w-2xl mx-auto p-6 space-y-4">
      <h1 className="text-2xl font-bold">🧠 AI Text Explainer</h1>

      <textarea
        value={input}
        onChange={(e) => setInput(e.target.value)}
        placeholder="Paste any confusing text here..."
        className="w-full h-40 p-3 border rounded-lg resize-none"
        maxLength={5000}
      />

      <div className="flex justify-between items-center">
        <span className="text-sm text-gray-400">
          {input.length}/5000 characters
        </span>
        <button
          onClick={handleExplain}
          disabled={loading || !input.trim()}
          className="px-6 py-2 bg-blue-600 text-white rounded-lg 
                     disabled:opacity-50 disabled:cursor-not-allowed
                     hover:bg-blue-700 transition-colors"
        >
          {loading ? '🤔 Thinking...' : '✨ Explain This'}
        </button>
      </div>

      {/* Error state */}
      {error && (
        <div className="p-4 bg-red-50 border border-red-200 rounded-lg text-red-700">
          {error}
        </div>
      )}

      {/* Result */}
      {explanation && (
        <div className="p-4 bg-blue-50 border border-blue-200 rounded-lg">
          <h2 className="font-semibold mb-2">✅ Here's the simple version:</h2>
          <p className="text-gray-700 leading-relaxed">{explanation}</p>
        </div>
      )}
    </div>
  )
}

That's Project 1. Done. ✅ Deploy it. Live link. Resume. 🚀


🔥 Project 2: Streaming AI Response (The ChatGPT Effect) ⚡

Why streaming matters: Without streaming, user waits 5-8 seconds staring at a spinner. With streaming, words appear instantly — like magic. Like ChatGPT. 🎩

This is the feature that makes interviewers say "oh wow." Let's build it.

The Streaming API Route

// app/api/stream/route.js

import OpenAI from 'openai'

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

export async function POST(request) {
  const { prompt } = await request.json()

  // Create a streaming completion 🌊
  const stream = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    stream: true,  // ← this one word changes everything ✨
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: prompt }
    ]
  })

  // Stream the response back chunk by chunk 📦
  const readableStream = new ReadableStream({
    async start(controller) {
      const encoder = new TextEncoder()

      for await (const chunk of stream) {
        const text = chunk.choices[0]?.delta?.content || ''
        if (text) {
          controller.enqueue(encoder.encode(text))
        }
      }
      controller.close()
    }
  })

  return new Response(readableStream, {
    headers: {
      'Content-Type': 'text/plain; charset=utf-8',
      'Transfer-Encoding': 'chunked',
    }
  })
}
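If ReadableStream feels like magic, here's the same producer/consumer pattern with no AI involved — a stream built from plain strings, read back the same way the route streams tokens. A standalone sketch (function names are mine), runnable in Node 18+:

```javascript
// Producer: a ReadableStream that enqueues encoded chunks, then closes —
// the same shape the API route above returns.
function makeStream(chunks) {
  const encoder = new TextEncoder()
  return new ReadableStream({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(encoder.encode(chunk))
      controller.close()
    },
  })
}

// Consumer: the same reader loop the frontend will use.
async function streamToString(stream) {
  const reader = stream.getReader()
  const decoder = new TextDecoder()
  let out = ''
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    // { stream: true } handles multi-byte characters split across chunks
    out += decoder.decode(value, { stream: true })
  }
  return out
}
```

Swap the hardcoded chunks for OpenAI's deltas and you have the route above. Same plumbing. 🌊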

The Streaming Frontend

// The streaming magic on the client side ✨
'use client'

import { useState } from 'react'

export default function StreamingChat() {
  const [prompt, setPrompt]   = useState('')
  const [response, setResponse] = useState('')
  const [loading, setLoading] = useState(false)

  const handleStream = async () => {
    setLoading(true)
    setResponse('') // clear previous

    try {
      const res = await fetch('/api/stream', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt })
      })

      // Read the stream chunk by chunk 🌊
      const reader = res.body.getReader()
      const decoder = new TextDecoder()

      while (true) {
        const { done, value } = await reader.read()
        if (done) break

        // { stream: true } handles multi-byte characters split across chunks
        const text = decoder.decode(value, { stream: true })
        setResponse(prev => prev + text) // append each chunk 🎯
      }
    } finally {
      setLoading(false) // always runs — success or fail ✅
    }
  }

  return (
    <div className="max-w-2xl mx-auto p-6 space-y-4">
      <h1 className="text-2xl font-bold">⚡ Streaming AI</h1>

      <textarea
        value={prompt}
        onChange={e => setPrompt(e.target.value)}
        placeholder="Ask anything..."
        className="w-full h-32 p-3 border rounded-lg"
      />

      <button
        onClick={handleStream}
        disabled={loading || !prompt.trim()}
        className="w-full py-3 bg-purple-600 text-white rounded-lg
                   disabled:opacity-50 hover:bg-purple-700"
      >
        {loading ? '⚡ Streaming...' : '🚀 Generate'}
      </button>

      {/* Response appears word by word — like ChatGPT! 🤩 */}
      {response && (
        <div className="p-4 bg-gray-50 rounded-lg whitespace-pre-wrap
                        font-mono text-sm leading-relaxed">
          {response}
          {loading && <span className="animate-pulse">▋</span>}
        </div>
      )}
    </div>
  )
}

💡 The cursor trick: That ▋ with animate-pulse makes it look EXACTLY like ChatGPT. One character. Maximum impression. 😄


🔥 Project 3: AI with Memory (Multi-turn Chat) 🧠

The upgrade: The previous projects are stateless — every request is fresh. Real chat apps remember the conversation. Here's how. 👇

// The key insight — send the full conversation history each time 📚
'use client'

import { useState } from 'react'

// Message type
type Message = {
  role: 'user' | 'assistant'
  content: string
}

export default function ChatWithMemory() {
  const [messages, setMessages] = useState<Message[]>([])
  const [input, setInput]       = useState('')
  const [loading, setLoading]   = useState(false)

  const sendMessage = async () => {
    if (!input.trim()) return

    const userMessage: Message = { role: 'user', content: input }
    const updatedMessages = [...messages, userMessage]

    setMessages(updatedMessages)
    setInput('')
    setLoading(true)

    try {
      const res = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ messages: updatedMessages }) // send ALL history 🗂️
      })

      const data = await res.json()
      const aiMessage: Message = { 
        role: 'assistant', 
        content: data.reply 
      }

      setMessages(prev => [...prev, aiMessage])
    } finally {
      setLoading(false)
    }
  }

  // Handle Enter key 🎹
  const handleKeyDown = (e: React.KeyboardEvent) => {
    if (e.key === 'Enter' && !e.shiftKey) {
      e.preventDefault()
      sendMessage()
    }
  }

  return (
    <div className="max-w-2xl mx-auto p-6 flex flex-col h-screen">
      <h1 className="text-2xl font-bold mb-4">🧠 AI Chat with Memory</h1>

      {/* Message history */}
      <div className="flex-1 overflow-y-auto space-y-3 mb-4">
        {messages.map((msg, i) => (
          <div
            key={i}
            className={`p-3 rounded-lg max-w-[80%] ${
              msg.role === 'user'
                ? 'bg-blue-600 text-white ml-auto'      // right side 👤
                : 'bg-gray-100 text-gray-800 mr-auto'   // left side 🤖
            }`}
          >
            {msg.content}
          </div>
        ))}
        {loading && (
          <div className="bg-gray-100 p-3 rounded-lg max-w-[80%] mr-auto">
            <span className="animate-pulse">🤖 thinking...</span>
          </div>
        )}
      </div>

      {/* Input area */}
      <div className="flex gap-2">
        <textarea
          value={input}
          onChange={e => setInput(e.target.value)}
          onKeyDown={handleKeyDown}
          placeholder="Type a message... (Enter to send)"
          className="flex-1 p-3 border rounded-lg resize-none h-14"
          rows={1}
        />
        <button
          onClick={sendMessage}
          disabled={loading || !input.trim()}
          className="px-4 bg-blue-600 text-white rounded-lg
                     disabled:opacity-50 hover:bg-blue-700"
        >
          Send 🚀
        </button>
      </div>
    </div>
  )
}
// app/api/chat/route.js — handles full conversation history
export async function POST(request) {
  const { messages } = await request.json()

  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content: 'You are a helpful, friendly assistant. Keep responses concise.'
      },
      ...messages  // ← spread the full history here 🗂️
    ],
    max_tokens: 500
  })

  return Response.json({
    reply: completion.choices[0].message.content
  })
}

💡 The memory trick: There's no actual memory on the server. You just send the entire conversation history with each request. The AI reads it all and responds in context. Simple. Powerful. 🧠
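One caveat: the history grows with every turn, and you pay for every token you resend. A simple (hypothetical) trim before each request keeps the payload bounded:

```javascript
// Keep only the most recent messages so the request (and cost) stays bounded.
// The system prompt lives server-side in the route, so trimming
// user/assistant turns here is safe.
function trimHistory(messages, maxMessages = 20) {
  if (messages.length <= maxMessages) return messages
  return messages.slice(messages.length - maxMessages)
}

// Usage in sendMessage:
// body: JSON.stringify({ messages: trimHistory(updatedMessages) })
```

The trade-off: the AI "forgets" anything older than the window. For a portfolio chat app, that's fine. 🧠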


⚠️ The Mistakes I Made (Learn From Mine, Not Yours)

Mistake 1 — Exposed API Key on Frontend 🚨

// ❌ NEVER DO THIS — exposes key to everyone
const openai = new OpenAI({
  apiKey: 'sk-real-key-here',  // visible in browser devtools 😱
  dangerouslyAllowBrowser: true
})

// ✅ Always call AI from API routes (server-side) 🔒
// Key stays on server. Users never see it.

Mistake 2 — No Rate Limiting 💸

// ❌ Without rate limiting — one angry user can drain your credits fast
// ✅ Simple rate limiting with upstash or even just in-memory:

const requestCounts = new Map() // in-memory — per server instance, resets on restart

export async function POST(request) {
  const ip = request.headers.get('x-forwarded-for') || 'unknown'
  const count = requestCounts.get(ip) || 0

  if (count > 10) { // max 10 requests per IP
    return Response.json(
      { error: 'Too many requests. Slow down! 😅' },
      { status: 429 }
    )
  }

  requestCounts.set(ip, count + 1)
  // ... rest of handler
}
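That counter never resets, so eventually everyone gets locked out. A slightly sturdier sketch — a sliding window that forgets requests older than a minute (still in-memory and per-instance, so an assumption for demos; reach for something like Upstash Redis in production):

```javascript
const WINDOW_MS = 60_000 // 1-minute window
const MAX_REQUESTS = 10  // per IP per window

const hits = new Map() // ip -> array of request timestamps

// Returns true when the caller should get a 429.
function isRateLimited(ip, now = Date.now()) {
  // Drop timestamps that fell out of the window, then count what's left
  const recent = (hits.get(ip) || []).filter(t => now - t < WINDOW_MS)
  if (recent.length >= MAX_REQUESTS) {
    hits.set(ip, recent)
    return true // over the limit — don't record this attempt
  }
  recent.push(now)
  hits.set(ip, recent)
  return false
}
```

In the route: `if (isRateLimited(ip)) return Response.json({ error: 'Too many requests' }, { status: 429 })`. Old hits age out, so well-behaved users recover automatically. 💸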

Mistake 3 — No Loading State 😬

// ❌ Without loading state — user clicks button, nothing happens, clicks again
// Result: 5 duplicate API calls 💸

// ✅ Always disable button during loading
<button
  onClick={handleSubmit}
  disabled={loading}  // ← this one prop saves you money 💰
>
  {loading ? 'Thinking...' : 'Submit'}
</button>

Mistake 4 — Vague System Prompts 🤷

// ❌ Vague — AI doesn't know what you want
{ role: 'system', content: 'Be helpful.' }

// ✅ Specific — AI knows exactly what to do
{ role: 'system', content: `You are a code reviewer for React/TypeScript.
  Review code for: bugs, performance issues, and best practices.
  Format: score out of 10, then bullet points of specific feedback.
  Be direct. Be specific. Always explain WHY.` }

The more specific your system prompt, the better the output. Every time. 🎯


📊 Cost Reality Check 💰

Everyone worries about costs. Here's the actual math:

gpt-4o-mini pricing (most affordable, still great):
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Input:   $0.150 per 1M tokens
Output:  $0.600 per 1M tokens

Real world costs:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1 average explanation request ≈ $0.0003
1000 requests ≈ $0.30 (thirty cents per thousand!) 😄
Portfolio project with 200 demo uses ≈ $0.06
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Free tier credit covers hundreds of demo uses 🆓
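That ≈$0.0003 figure falls straight out of the per-token prices. Here's the arithmetic as a function (prices hardcoded from the table above — they're a snapshot and do change, so check the current pricing page):

```javascript
// gpt-4o-mini prices per 1M tokens (snapshot from the table above)
const INPUT_PER_M = 0.150
const OUTPUT_PER_M = 0.600

// Cost in USD for one request, given token counts
function requestCostUSD(inputTokens, outputTokens) {
  return (inputTokens * INPUT_PER_M + outputTokens * OUTPUT_PER_M) / 1_000_000
}

// A typical explanation request: ~600 tokens in, ~350 out
// → (600 × 0.15 + 350 × 0.60) / 1M = $0.0003
```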

Stop worrying about cost. Start building. 🚀


🚀 Your 2-Hour Build Plan

Set a timer. Right now. ⏱️

Hour 1 — Setup + First Feature
├── 0:00 Get API key + install SDK (10 min)
├── 0:10 Build the API route (20 min)
├── 0:30 Build the frontend component (25 min)
└── 0:55 Test it works locally (5 min)

Hour 2 — Polish + Deploy
├── 1:00 Add error handling (15 min)
├── 1:15 Add loading states (10 min)
├── 1:25 Clean up the UI (15 min)
└── 1:40 Deploy on Vercel (20 min)
    └── 2:00 Share the live link 🎉

You have a working, deployed AI feature in 2 hours. That's it. 🏁


💬 Your Turn!

Have you tried building with AI APIs before? 👇

Drop in the comments:

  • 🟢 "Built something!" — share your link!
  • 🟡 "Starting today" — let's go! 🚀
  • 🔴 "Still scared" — drop your biggest fear, I'll answer it!

And if the "it's just a fetch() call" revelation hit you — drop a 🤯 because same. 😂

Share this with a dev who thinks AI is too advanced for them. Prove them wrong. 🙏

Drop a ❤️ — helps more developers find this before they spend months being intimidated! 🔥


🔖 P.S. — The streaming cursor trick (▋ with animate-pulse) — use it in every AI project. Tiny detail. Maximum impression. You're welcome. 😄
