Harish Kotra (he/him)

Building a Global Narrative Warfare Map with Bright Data, Tavily, Ollama, React, and Three.js

What if a single search box could reveal how the same geopolitical topic is framed differently in Washington, London, Tel Aviv, New Delhi, Tehran, Berlin, or Ankara?

That was the goal behind Reality Rift: a web application that discovers live coverage, scrapes grounded evidence, reasons over it with an LLM, and projects the result onto an interactive 3D globe.

This post walks through:

  • the product idea
  • the architecture
  • the data pipeline
  • the visualization layer
  • the caching strategy
  • the transparency model

The Problem

Most search tools answer:

“What articles exist about this topic?”

But researchers, journalists, strategists, and builders often need to answer:

“How is this story being framed differently across countries, and what evidence supports that?”

That requires more than search.

It requires:

  • multi-source discovery
  • article grounding
  • country inference
  • narrative clustering
  • explainable provenance
  • visual storytelling

The Stack

UI

  • React
  • Vite
  • Tailwind CSS
  • Three.js
  • three-globe

Backend

  • Node.js
  • Express
  • Axios

Data + AI

  • Bright Data Discover API
  • Bright Data scraping flow
  • Tavily
  • Ollama with gemma4:latest
  • OpenAI-compatible provider support

System Architecture


Search Layer

One of the early lessons was that a single-country search viewpoint underperformed badly on global topics.

So the app now fans out Bright Data Discover across multiple countries and merges that with Tavily.

Example: multi-country Discover

const settled = await Promise.allSettled(
  countries.map((country) =>
    discoverSearch(topic, {
      numResults: perCountry,
      country,
      language: options.language,
      intent: options.intent
    })
  )
);

This matters because a topic like “Iran war” or “Ukraine war” should not be judged from one country’s SERP alone.
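Once the fan-out settles, the per-country batches are merged with Tavily’s results and de-duplicated. A minimal sketch of that merge — the `{ url, title }` result shape and the helper names are illustrative assumptions, not the repo’s actual code:

```javascript
// Merge per-country Discover batches (Promise.allSettled results)
// with Tavily results, de-duplicating by normalized URL while
// remembering every provider that discovered each source.
function normalizeUrl(url) {
  try {
    const u = new URL(url);
    u.hash = "";
    u.search = "";
    return u.toString().replace(/\/$/, "");
  } catch {
    return url;
  }
}

function mergeResults(discoverBatches, tavilyResults) {
  const byUrl = new Map();
  const add = (result, provider) => {
    const key = normalizeUrl(result.url);
    const existing = byUrl.get(key);
    if (existing) {
      if (!existing.discoveredBy.includes(provider)) {
        existing.discoveredBy.push(provider);
      }
      return;
    }
    byUrl.set(key, { ...result, discoveredBy: [provider] });
  };

  // Keep only the country batches that actually fulfilled.
  for (const batch of discoverBatches) {
    if (batch.status !== "fulfilled") continue;
    for (const result of batch.value) add(result, "brightdata");
  }
  for (const result of tavilyResults) add(result, "tavily");

  return [...byUrl.values()];
}
```

Using `Promise.allSettled` (rather than `Promise.all`) means one geo-blocked country cannot sink the whole fan-out.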


Grounded Evidence

Search results alone are not enough.

The app scrapes a curated subset of URLs and sends trimmed, grounded excerpts into the reasoning layer.

Example: scrape path

return {
  ...result,
  content: scraped.text,
  inferredCountry: geo?.name ?? country ?? null,
  scrapeProvider: "brightdata"
};

Each source record retains:

  • publisher
  • URL
  • search provider
  • scrape provider
  • inferred country

That enables a transparent evidence deck in the UI.
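Trimming matters because full article bodies would blow the model’s context window. A hedged sketch of a per-source character budget — the `trimExcerpt` name and the 1,500-character limit are illustrative, not the repo’s actual values:

```javascript
// Trim each scraped article to a fixed character budget, cutting
// at a sentence boundary where possible so excerpts stay readable.
// The 1500-char default is an assumed value for illustration.
function trimExcerpt(text, maxChars = 1500) {
  if (!text || text.length <= maxChars) return text || "";
  const slice = text.slice(0, maxChars);
  const lastStop = slice.lastIndexOf(". ");
  // Fall back to a hard cut if no sentence boundary lands near the end.
  return lastStop > maxChars * 0.5 ? slice.slice(0, lastStop + 1) : slice;
}
```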


LLM Reasoning

The model receives:

  • merged search results
  • scraped excerpts
  • provider provenance
  • instructions to return strict JSON

Prompt excerpt

Rules:
- Return 8-15 countries whenever the evidence supports it.
- Use source grounding fields so every narrative is tied to explicit URLs and publishers.
- Always include a short "stanceRationale".
- Return valid JSON only with no markdown and no explanation.

Output shape

{
  "countries": [
    {
      "name": "India",
      "lat": 20.5937,
      "lng": 78.9629,
      "narrative": "Frames the conflict through regional stability and strategic autonomy.",
      "stanceRationale": "Coverage stresses de-escalation while protecting national interests.",
      "stance": "neutral",
      "confidence": 0.78,
      "intensity": 0.67,
      "sources": ["https://..."]
    }
  ],
  "connections": [
    {
      "from": "India",
      "to": "UAE",
      "strength": 0.72
    }
  ]
}
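Even with a “JSON only” instruction, local models sometimes wrap the payload in markdown fences or lead-in prose. A defensive parse step helps — this sketch (the `extractJson` helper is an assumption, not the repo’s code) strips fences and grabs the outermost object:

```javascript
// Defensively extract a JSON object from raw LLM output:
// strip markdown code fences, then parse the outermost {...}.
function extractJson(raw) {
  const cleaned = raw.replace(/```(?:json)?/gi, "").trim();
  const start = cleaned.indexOf("{");
  const end = cleaned.lastIndexOf("}");
  if (start === -1 || end === -1 || end < start) {
    throw new Error("No JSON object found in model output");
  }
  return JSON.parse(cleaned.slice(start, end + 1));
}
```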

Recovering from Weak Model Output

A major failure mode in narrative systems is this:

  • search finds plenty of evidence
  • scrape works
  • model still returns 2 countries

That’s not acceptable for globally covered topics.

So the backend now includes a recovery strategy:

  1. detect implausibly low country coverage relative to the evidence
  2. retry with a stricter “you under-returned countries” instruction
  3. if still weak, supplement missing countries from grounded evidence

Recovery check

function shouldRetryForCoverage(result, input) {
  // Count countries backed by at least two grounded sources.
  const evidenceCountries = getEvidenceCountryCounts(input);
  const strongEvidenceCountries = [...evidenceCountries.values()]
    .filter((count) => count >= 2).length;
  // Retry when the model returned few countries despite broad evidence.
  return (result?.countries?.length || 0) < 4 && strongEvidenceCountries >= 4;
}
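Wired together, the three recovery steps might look like this. The `analyze` and `supplementFromEvidence` callbacks are illustrative stand-ins for the repo’s actual functions, and the coverage check is repeated so the sketch runs standalone (the `input.sources` shape is an assumption):

```javascript
// Count how many grounded sources point at each inferred country.
function getEvidenceCountryCounts(input) {
  const counts = new Map();
  for (const source of input.sources ?? []) {
    if (!source.inferredCountry) continue;
    counts.set(source.inferredCountry, (counts.get(source.inferredCountry) ?? 0) + 1);
  }
  return counts;
}

// Same coverage check as above, repeated so this sketch is self-contained.
function shouldRetryForCoverage(result, input) {
  const strong = [...getEvidenceCountryCounts(input).values()]
    .filter((count) => count >= 2).length;
  return (result?.countries?.length || 0) < 4 && strong >= 4;
}

// Recovery loop: analyze, retry once with a stricter instruction,
// then backfill from grounded evidence if the model is still weak.
async function analyzeWithRecovery(input, analyze, supplementFromEvidence) {
  let result = await analyze(input);
  if (shouldRetryForCoverage(result, input)) {
    // Step 2: stricter retry telling the model it under-returned.
    result = await analyze({
      ...input,
      extraInstruction:
        "You under-returned countries; cover every country with grounded evidence."
    });
  }
  if (shouldRetryForCoverage(result, input)) {
    // Step 3: deterministically add countries seen in the evidence.
    result = supplementFromEvidence(result, input);
  }
  return result;
}
```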

This is one of the most important engineering choices in the project:

don’t blindly trust the model if the evidence says the output is incomplete.


Transparency by Design

One of the project’s goals is to be explainable.

The UI now exposes not just source URLs, but also:

  • which search provider found the source
  • whether it came from Bright Data Discover, Tavily, or both
  • whether scraping was done via Bright Data, direct fetch, or fallback

Example transparent source object

{
  url: "https://example.com/article",
  title: "Regional response to conflict",
  publisher: "example.com",
  searchProvider: "hybrid",
  searchProviderLabel: "Bright Data + Tavily",
  scrapeProvider: "brightdata",
  discoveredBy: ["brightdata", "tavily"]
}
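The hybrid label can be derived from the `discoveredBy` array rather than stored separately. One possible mapping — the exact display labels are illustrative, not necessarily the app’s copy:

```javascript
// Map the list of providers that found a source to a display label.
const PROVIDER_LABELS = { brightdata: "Bright Data", tavily: "Tavily" };

function searchProviderLabel(discoveredBy) {
  return discoveredBy.map((p) => PROVIDER_LABELS[p] ?? p).join(" + ");
}

// More than one discovering provider means the source is "hybrid".
function searchProvider(discoveredBy) {
  return discoveredBy.length > 1 ? "hybrid" : discoveredBy[0] ?? "unknown";
}
```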

This is what makes the app feel credible rather than magical.


Visualization Layer

The map uses:

  • a textured globe
  • atmosphere and graticules
  • grounded country markers
  • pulsing rings
  • animated arcs
  • hover and click interaction

Why not just use tooltips?

Because the value is not merely in interaction.

It’s in turning abstract narrative clusters into a spatial mental model.

That’s why a globe works so well here:

  • narrative stance becomes geographic
  • similarities become arcs
  • intensity becomes pulse density
  • confidence becomes glow strength
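Those encodings reduce to small mapping functions that feed three-globe’s marker, ring, and arc accessors. A sketch of the mappings — the colors and numeric ranges here are assumptions, not the app’s exact values:

```javascript
// Map narrative attributes onto visual channels for the globe.
// Colors and ranges are illustrative choices.
const STANCE_COLORS = {
  supportive: "#4ade80",
  neutral: "#facc15",
  critical: "#f87171"
};

function markerColor(country) {
  return STANCE_COLORS[country.stance] ?? "#94a3b8"; // grey fallback
}

// Higher intensity → rings repeat more often (denser pulses).
function ringRepeatPeriodMs(country) {
  const intensity = Math.min(Math.max(country.intensity, 0), 1);
  return 2000 - 1200 * intensity;
}

// Higher confidence → stronger glow (opacity 0.3–1.0).
function glowOpacity(country) {
  const confidence = Math.min(Math.max(country.confidence, 0), 1);
  return 0.3 + 0.7 * confidence;
}
```

These pure functions then plug into accessors like `ringRepeatPeriod` and `pointColor` on the three-globe instance.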

Loading Experience

The app also exposes pipeline steps during long-running requests:

  • Discovering coverage
  • Grounding source content
  • Reconciling narratives
  • Projecting the map

This matters because AI apps often fail the “is it doing anything?” test.
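On the server this can be as simple as a step tracker whose updates are pushed to the client (for example over Server-Sent Events). A minimal sketch — `createProgress` is an illustrative helper, not the repo’s code, though the step names mirror the list above:

```javascript
// Track which pipeline stage is running so the UI can render
// a live status line during long-running requests.
const PIPELINE_STEPS = [
  "Discovering coverage",
  "Grounding source content",
  "Reconciling narratives",
  "Projecting the map"
];

function createProgress(onUpdate) {
  let index = -1;
  return {
    advance() {
      index = Math.min(index + 1, PIPELINE_STEPS.length - 1);
      onUpdate({ index, step: PIPELINE_STEPS[index] });
    }
  };
}
```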


Caching

The backend uses in-memory caches for:

  • Discover search
  • merged search results
  • scrape outputs
  • pipeline inputs
  • LLM outputs
  • final responses

This reduces:

  • repeated API costs
  • repeated scrape time
  • unnecessary inference calls
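All of those caches can share one small TTL wrapper around a `Map`. A minimal in-memory sketch — the real app may use per-cache TTLs and size limits:

```javascript
// Minimal in-memory TTL cache shared by the search, scrape, and
// LLM layers. No size bound — fine for a demo process, but a
// production cache would add LRU eviction.
function createTtlCache(ttlMs) {
  const store = new Map();
  return {
    get(key) {
      const entry = store.get(key);
      if (!entry) return undefined;
      if (Date.now() > entry.expiresAt) {
        store.delete(key);
        return undefined;
      }
      return entry.value;
    },
    set(key, value) {
      store.set(key, { value, expiresAt: Date.now() + ttlMs });
    }
  };
}

// Memoize an async pipeline stage by its input key.
async function cached(cache, key, compute) {
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const value = await compute();
  cache.set(key, value);
  return value;
}
```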

Where This Can Go Next

Some obvious follow-on features:

  • language-specific narrative comparisons
  • timeline / drift view
  • exportable country briefings
  • regional polygon overlays
  • fact-claim extraction per country
  • model comparison mode
  • historical replay

The most interesting part of this project is not the globe.

It’s the combination of:

  • multi-source discovery
  • grounded evidence
  • model reasoning
  • transparent provenance
  • visual synthesis

That combination turns a normal search interface into a narrative intelligence product.

GitHub repo: https://github.com/harishkotra/reality-rift
