Rajguru Yadav
🔎 Step-by-Step Guide to Building a Deep Search AI with DuckDuckGo & OpenRouter 🤖

AI-powered search is evolving fast. But you don’t have to wait for the big players—you can build your own Deep Search AI that pulls fresh results from the web (DuckDuckGo) and then asks powerful LLMs via OpenRouter to read, summarize, and refine them into human-friendly answers. 🔥

In this article, I’ll walk you through the full build: from fetching raw search data, to refining it with AI, to wiring up a simple web UI. Let’s go! 🚀

🧠 What Is “Deep Search AI”?

A Deep Search AI pipeline does more than show links. It:

🔍 Fetches raw results from a search source (DuckDuckGo Instant Answer API).

🧵 Extracts key snippets (titles, summaries, related topics, abstracts).

🤖 Feeds that corpus into an AI model (via OpenRouter) to analyze.

🧾 Returns a concise, well-structured answer—optionally with cited sources.

Think of it as: Search ➜ Aggregate ➜ Understand ➜ Answer.

🌐 Why DuckDuckGo + OpenRouter?

DuckDuckGo Instant Answer API is lightweight, fast, and doesn’t require a formal API key for basic JSON responses. Good for bootstrapping, prototyping, and low-friction builds. ⚡

OpenRouter gives you a meta-gateway to many top AI models (GPT-family, Claude, Gemini wrappers, etc.) behind a single API. That means you can compare model responses, route queries, and upgrade later without rewriting your whole stack. 🛣️

🔐 Step 1: Get Access (No-Stress Setup)

1️⃣ DuckDuckGo Instant Answer API (No Key Needed)

You can hit the endpoint directly:

https://api.duckduckgo.com/?q=YOUR_QUERY&format=json&no_redirect=1&no_html=1

💡 Tip: Use no_html=1 to strip HTML tags for cleaner text.
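If you'd rather not hand-assemble the query string, Node's built-in URLSearchParams handles the encoding for you. A small sketch using the same parameters as the endpoint above:

```javascript
// Build a DuckDuckGo Instant Answer URL with proper percent-encoding
function buildDDGUrl(query) {
  const params = new URLSearchParams({
    q: query,
    format: "json",
    no_redirect: "1",
    no_html: "1",
  });
  return `https://api.duckduckgo.com/?${params.toString()}`;
}

// Spaces and special characters are encoded automatically
console.log(buildDDGUrl("what is node.js?"));
```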

2️⃣ OpenRouter API Key

Create an account at OpenRouter (search "OpenRouter AI" if you need the signup page). 🙌

Generate an API key in your dashboard.

Copy it to a .env file (never hardcode keys in public repos! 🔐).

🛠️ Step 2: Project Setup (Node.js Example)

We’ll build with Node.js + Axios for simplicity.

mkdir deep-search-ai
cd deep-search-ai
npm init -y
npm install axios dotenv express cors

Create a .env file:

OPENROUTER_API_KEY=sk_your_key_here
PORT=3000

Load env vars in code:

require('dotenv').config();
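While you're here, a tiny startup guard (a sketch, not required for the tutorial) makes a missing key fail fast instead of surfacing later as a confusing 401 from OpenRouter:

```javascript
// Fail fast if a required env var is missing, instead of debugging a 401 later
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required env var: ${name}. Check your .env file.`);
  }
  return value;
}

// Usage (after require('dotenv').config()):
// const OPENROUTER_API_KEY = requireEnv('OPENROUTER_API_KEY');
```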

📥 Step 3: Fetch DuckDuckGo Results

We’ll grab JSON and pull out text-like fields from RelatedTopics, AbstractText, etc.

const axios = require("axios");

async function fetchDuckDuckGo(query) {
  const url = `https://api.duckduckgo.com/?q=${encodeURIComponent(query)}&format=json&no_redirect=1&no_html=1`;
  const { data } = await axios.get(url, { timeout: 15000 });

  const snippets = [];

  if (data.AbstractText) snippets.push(data.AbstractText);
  if (Array.isArray(data.RelatedTopics)) {
    for (const item of data.RelatedTopics) {
      if (item.Text) snippets.push(item.Text);
      // Some RelatedTopics entries are nested "Topics" arrays
      if (Array.isArray(item.Topics)) {
        for (const t of item.Topics) {
          if (t.Text) snippets.push(t.Text);
        }
      }
    }
  }

  return snippets;
}

// Quick test
// fetchDuckDuckGo("AI trends 2025").then(console.log).catch(console.error);

✅ Good practice: Cap snippet count or char length before sending to an LLM to save tokens & cost. 💸
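One way to enforce that cap is a small helper that limits both snippet count and total characters before anything reaches the LLM. A sketch with arbitrary default limits (tune them for your model and budget):

```javascript
// Cap snippet count and total characters to keep token usage predictable.
// The defaults here are arbitrary -- adjust for your model's context window.
function budgetSnippets(snippets, { maxSnippets = 10, maxChars = 4000 } = {}) {
  const kept = [];
  let total = 0;
  for (const s of snippets.slice(0, maxSnippets)) {
    if (total + s.length > maxChars) break;
    kept.push(s);
    total += s.length;
  }
  return kept;
}
```

You'd then call something like refineWithAI({ query, snippets: budgetSnippets(snippets) }) in your pipeline.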

🤖 Step 4: Refine with an OpenRouter Model

Here’s a helper that sends your aggregated text to an LLM for summary + answer generation.

const OPENROUTER_KEY = process.env.OPENROUTER_API_KEY;

async function refineWithAI({ query, snippets }) {
  const prompt = `You are a Deep Search AI.\n\nUser Query: ${query}\n\nHere are search snippets (may be noisy or partial). Read them, synthesize key points, and answer clearly. If uncertain, say so.\n\nSnippets:\n${snippets.map((s, i) => `${i + 1}. ${s}`).join('\n')}`;

  const response = await axios.post(
    "https://openrouter.ai/api/v1/chat/completions",
    {
      model: "openai/gpt-4o-mini", // you can swap models later 🤏
      messages: [
        { role: "system", content: "You turn noisy web snippets into clean, sourced answers." },
        { role: "user", content: prompt }
      ],
      temperature: 0.3
    },
    {
      headers: {
        Authorization: `Bearer ${OPENROUTER_KEY}`,
        "Content-Type": "application/json"
      },
      timeout: 30000
    }
  );

  return response.data.choices?.[0]?.message?.content?.trim() || "(No answer returned.)";
}

🧩 Step 5: Put It Together – deepSearch()

Now we create one function that:

1) Gets snippets from DuckDuckGo.

2) Sends them to OpenRouter.

3) Returns a structured response.

async function deepSearch(query) {
  const snippets = await fetchDuckDuckGo(query);
  if (!snippets.length) {
    return { answer: "No results found.", sources: [] };
  }
  const answer = await refineWithAI({ query, snippets });
  return { answer, sources: snippets.slice(0, 5) }; // return top snippets as lightweight "sources"
}

// Example run
deepSearch("Best AI tools in 2025").then(console.log).catch(console.error);

🖥️ Step 6: Minimal Express API (Optional Backend) 🌐

Expose your Deep Search pipeline over a REST endpoint your frontend can call.

const express = require('express');
const cors = require('cors');
const app = express();
app.use(cors());
app.use(express.json());

app.get('/deepsearch', async (req, res) => {
  const q = req.query.q || '';
  if (!q.trim()) return res.status(400).json({ error: 'Missing ?q=' });

  try {
    const result = await deepSearch(q.trim());
    res.json(result);
  } catch (err) {
    console.error('Deep search error:', err.message);
    res.status(500).json({ error: 'Search failed', details: err.message });
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Deep Search AI server running on :${PORT}`));

🧪 Step 7: Quick Frontend UI (Copy-Paste Friendly) ✨

Drop this into an index.html. Note the script fetches a relative /deepsearch path, so serve the file from the same origin as your backend, e.g. add app.use(express.static('.')) to the Express app and open http://localhost:3000/index.html.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <title>Deep Search AI 🔎🤖</title>
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <style>
    body{font-family:sans-serif;max-width:700px;margin:auto;padding:2rem;background:#0d1117;color:#c9d1d9}
    input{width:70%;padding:.6rem;font-size:1rem;border-radius:4px;border:1px solid #30363d;background:#161b22;color:#c9d1d9}
    button{padding:.6rem 1rem;margin-left:.5rem;font-size:1rem;border:none;border-radius:4px;background:#238636;color:#fff;cursor:pointer}
    button:disabled{opacity:.5;cursor:wait}
    pre{white-space:pre-wrap;word-break:break-word;background:#161b22;padding:1rem;border-radius:4px;overflow:auto}
    .sources{margin-top:1rem;font-size:.9rem;opacity:.8}
    .sources li{margin:.25rem 0}
  </style>
</head>
<body>
  <h1>Deep Search AI 🔎🤖</h1>
  <p>Type a question and I’ll search + summarize! ✨</p>
  <input id="query" placeholder="e.g. What is DuckDuckGo?" />
  <button id="go">Search</button>
  <div id="status"></div>
  <h2>Answer</h2>
  <pre id="answer">—</pre>
  <h3>Sources</h3>
  <ul id="sources"></ul>

  <script>
    const goBtn = document.getElementById('go');
    const queryEl = document.getElementById('query');
    const answerEl = document.getElementById('answer');
    const sourcesEl = document.getElementById('sources');
    const statusEl = document.getElementById('status');

    async function runSearch(){
      const q = queryEl.value.trim();
      if(!q){alert('Enter a query');return;}
      goBtn.disabled = true;statusEl.textContent='Searching... ⏳';
      answerEl.textContent='';sourcesEl.innerHTML='';
      try {
        const r = await fetch(`/deepsearch?q=${encodeURIComponent(q)}`);
        const data = await r.json();
        if(data.error){throw new Error(data.error);}
        answerEl.textContent=data.answer||'(no answer)';
        (data.sources||[]).forEach(src=>{
          const li=document.createElement('li');
          li.textContent=src;
          sourcesEl.appendChild(li);
        });
        statusEl.textContent='Done ✅';
      } catch(err){
        statusEl.textContent='Error ❌';
        answerEl.textContent=err.message;
      } finally {
        goBtn.disabled=false;
      }
    }

    goBtn.addEventListener('click',runSearch);
    queryEl.addEventListener('keydown',e=>{if(e.key==='Enter')runSearch();});
  </script>
</body>
</html>

📏 Optional Enhancements (Level Up!)

  • 📚 Citations: Include DuckDuckGo FirstURL and show clickable source bullets.

  • 🧮 Token Budgeting: Truncate or cluster snippets before sending to LLMs.

  • 🧭 Multi-Model Routing: Try multiple OpenRouter models and ensemble answers.

  • 🌍 Language Detection: Auto-detect language & translate user query if needed.

  • 🧠 Memory / Caching: Cache recent search+answer pairs to reduce latency & cost.

  • 🧵 Streaming UI: Show incremental model output for long answers.
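As a taste of the caching idea, here's a minimal in-memory cache with a time-to-live, keyed by query string. This is a sketch only; in production you'd likely reach for Redis or an LRU library instead:

```javascript
// Minimal in-memory TTL cache for search+answer pairs
function createTTLCache(ttlMs = 5 * 60 * 1000) {
  const store = new Map();
  return {
    get(key) {
      const entry = store.get(key);
      if (!entry) return undefined;
      if (Date.now() > entry.expires) {
        store.delete(key); // evict stale entries lazily on read
        return undefined;
      }
      return entry.value;
    },
    set(key, value) {
      store.set(key, { value, expires: Date.now() + ttlMs });
    },
  };
}

// Usage sketch: consult the cache before running the full pipeline
// const cache = createTTLCache();
// const hit = cache.get(query);
// if (hit) return hit;
// const result = await deepSearch(query);
// cache.set(query, result);
```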

✅ Wrap-Up

You now have a working Deep Search AI prototype that:

  • Pulls web data from DuckDuckGo 📡

  • Synthesizes meaning using OpenRouter LLMs 🧠

  • Serves clean answers in a simple UI 💬

This foundation is strong enough to extend into multi-AI consultation, image generation triggers, or voice-driven search—all things we’ll cover in future posts. Stay tuned! 🔔

*Build your own Deep Search AI in Node.js using DuckDuckGo + OpenRouter to fetch, analyze, and summarize live web results into clean, conversational answers. Includes code, UI, and pro tips.*

💬 Your Thoughts?

What do you think about this Deep Search AI guide?

Did you try it? Share your results! 🔥

Any cool features you’d add?

Have a favorite LLM model for summarizing?

👇 Drop a comment below and let’s discuss! 🚀

RAJ GURU YADAV
