RankGap: Multi-Agent Amazon SEO & Product Visibility Analyzer

n8n and Bright Data Challenge: Unstoppable Workflow

This is a submission for the AI Agents Challenge powered by n8n and Bright Data

What I Built

Selling on Amazon is one thing; being discovered is another. RankGap is a multi-agent AI system designed to uncover Amazon ranking blind spots for your products using real-time data. It identifies where your product ranks, which search queries you’re missing, and how to optimize for better visibility.

Unlike generic reports, RankGap delivers actionable Amazon SEO insights by combining intelligent search keyword/query generation, ranking analysis, and gap detection.


Why This Matters

Being discovered on Amazon isn’t just luck — it’s about strategic keyword targeting and SEO-aware visibility tracking.

Key challenges sellers face:

  • Keyword uncertainty: Many sellers don’t know which queries actually drive traffic.
  • Manual rank tracking is slow: Monitoring dozens of queries across multiple products is tedious.
  • Visibility gaps cost revenue: Missing even one relevant search query can drastically affect sales.
  • Real-time complexity: Tracking and analyzing search data dynamically is challenging without automation.

💡 Around 60% of product discoveries on Amazon happen via keyword or query searches.


How It Works

  1. User enters an Amazon product URL (the target product) and selects how many search queries to generate.

  2. The app scrapes the target product details (title, brand, description, features, etc.) using Bright Data.

  3. An LLM generates n search queries based on the product information.

  4. Each query is run on Amazon; the top 20 results are scraped via Bright Data Web Unlocker.

  5. Full product details for those top 20 results are collected using Bright Data’s Amazon scraper.

  6. The target product and its competing results are then analyzed by three specialized LLMs, each receiving only the data it needs.
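
To make step 1 concrete, here is a minimal sketch of how the frontend could kick off a run. The webhook URL is a placeholder for my n8n cloud endpoint; the payload fields match the trigger payload shown in the Main Workflow section below.

```typescript
// Minimal sketch of how a client could kick off an analysis run.
// The webhook URL is a placeholder; the payload fields match the workflow trigger.
const N8N_WEBHOOK_URL = "https://<your-n8n-instance>/webhook/rankgap"; // hypothetical

export async function startAnalysis(productUrl: string, queryCount: number): Promise<string> {
  // The frontend generates the execution_id so it can later look up the result in Supabase.
  const executionId = crypto.randomUUID();

  await fetch(N8N_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      url: productUrl,
      queryCount,
      execution_id: executionId,
    }),
  });

  return executionId;
}
```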


Demo

Try out the Live App - RankGap


n8n Workflow

There are 3 workflows involved:

  1. Main Workflow
  2. Download Product Data from Amazon
  3. Analysis of the scraped product data

Main Workflow:

The main workflow is the entry point of the entire pipeline. It executes the following steps:

  1. The pipeline is triggered via a webhook (a POST request) with the following payload:
```json
{
    "url": "https://www.amazon.com/dp/B07QTVRF3J",   // Amazon product URL
    "queryCount": 3,                                  // number of search queries to generate
    "execution_id": "<uuid>"                          // UUID used to look up the final result in Supabase
}
```
  2. The given Amazon product is scraped using the Download Product Data from Amazon workflow, returning the full attribute list from Bright Data.

  3. A Search Query Generator agent produces multiple relevant search queries (see the AI agents table below for prompts and details).

  4. Search results for each query are scraped using Bright Data Web Unlocker. The Amazon search URL pattern is:
    https://www.amazon.com/s?k=

  5. The HTML returned is parsed, and the top 20 product ASINs are extracted (a rough sketch of this step follows this list).

  6. The Download Product Data from Amazon workflow scrapes detailed info for these top 20 products.

  7. Data transformations are applied to retain only relevant attributes.

  8. Both the target product and competitors are sent to the Analysis of the Scraped Product Data workflow.

  9. The final analysis is stored in a Supabase database, linked via the execution_id passed at trigger time.
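
Steps 4 and 5 roughly correspond to the sketch below: build the search URL for a generated query, then pull the first 20 unique ASINs out of the returned HTML. The real parsing happens inside an n8n Code node, and the data-asin regex is an assumption about Amazon's search result markup.

```typescript
// Sketch of steps 4 and 5: build the Amazon search URL for a query and extract
// the first 20 unique ASINs from the returned HTML.
function buildSearchUrl(query: string): string {
  return `https://www.amazon.com/s?k=${encodeURIComponent(query)}`;
}

function extractTopAsins(html: string, limit = 20): string[] {
  const asins: string[] = [];
  const seen = new Set<string>();
  const pattern = /data-asin="([A-Z0-9]{10})"/g; // search result cards usually carry data-asin
  let match: RegExpExecArray | null;

  while ((match = pattern.exec(html)) !== null && asins.length < limit) {
    const asin = match[1];
    if (!seen.has(asin)) {
      seen.add(asin);
      asins.push(asin);
    }
  }
  return asins;
}
```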

Main Workflow

Total number of n8n nodes in this workflow - 19

Workflow JSON can be found here

Download Product Data from Amazon Workflow:

This workflow’s sole job is to scrape Amazon product data using the Bright Data web scraper. Steps:

  1. Input is a list of product URLs in the following format:
```json
[
    {
        "url": "https://www.amazon.com/dp/B07DSVRF2J"
    },
    {
        "url": "https://www.amazon.com/dp/B07ABCDF2J"
    }
]
```
  2. The Bright Data Verified Node triggers a collection using the Trigger Collection By URL operation, returning a snapshot ID.
  3. Using Monitor Snapshot and an n8n Loop node, the workflow waits until the snapshot is ready.
  4. Once complete, the Download Snapshot operation retrieves the product data (a conceptual sketch of this loop follows below).
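
Conceptually, the trigger → monitor → download loop behaves like the sketch below. The three helper functions are hypothetical stand-ins for the verified node's Trigger Collection By URL, Monitor Snapshot and Download Snapshot operations; in the actual workflow this is an n8n Loop node, as described above.

```typescript
// Conceptual sketch of the trigger → monitor → download loop.
// The three declared helpers are hypothetical stand-ins for the Bright Data
// Verified Node operations used in the workflow.
declare function triggerCollectionByUrl(urls: { url: string }[]): Promise<string>; // returns a snapshot ID
declare function getSnapshotStatus(snapshotId: string): Promise<"building" | "running" | "ready">;
declare function downloadSnapshot(snapshotId: string): Promise<unknown[]>;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function scrapeProducts(urls: { url: string }[]): Promise<unknown[]> {
  const snapshotId = await triggerCollectionByUrl(urls);

  // Poll until the snapshot is ready. A generous wait avoids the
  // "marked ready but still building" race mentioned in the Journey section.
  while ((await getSnapshotStatus(snapshotId)) !== "ready") {
    await sleep(10_000);
  }

  return downloadSnapshot(snapshotId);
}
```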

Download Product Data from Amazon

Total number of n8n nodes - 8

Workflow JSON can be found here

Analysis of the Scraped Product Data Workflow:

This workflow performs structured analysis on the target product and its competitors:

  1. Input: Target product JSON + search results JSON.
  2. Sent to the Visibility & Presence Analysis Agent for ranking visibility.
  3. Sent to the Competitor & Attribute Gap Analysis Agent for feature/attribute gaps.
  4. Sent to the Final Recommendations & Action Plan Agent for actionable next steps.
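
For reference, the input handed to these agents can be pictured as the shape below. Only title, brand, description and features were mentioned earlier; the remaining field names are assumptions on my part, since the full attribute list comes from Bright Data's Amazon scraper and is trimmed in the main workflow.

```typescript
// Rough shape of the analysis input. Fields beyond title/brand/description/features
// are assumptions; the real attribute set comes from Bright Data's Amazon scraper,
// trimmed down by the main workflow's transformation step.
interface ProductSummary {
  asin: string;
  title: string;
  brand: string;
  description: string;
  features: string[];
  price?: number;  // assumed attribute
  rating?: number; // assumed attribute
}

interface AnalysisInput {
  target: ProductSummary;
  // One entry per generated query: the top-20 products in ranked order.
  searchResults: { query: string; results: ProductSummary[] }[];
}
```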

Analysis

Total number of n8n nodes - 18

Workflow JSON can be found here

Combined total nodes - 19 + 18 + 8 = 45


Technical Implementation

There are 4 AI agents in the complete workflow. Refer to the following table for details:

| Agent Name | Functionality | LLM Model | Backup Model | System Prompt Instructions | Memory | Tools |
| --- | --- | --- | --- | --- | --- | --- |
| Search Query Generator | Generate natural language keyword and search queries based on input product data | GPT-4o | Gemini 2.5 Pro | https://github.com/Better-Boy/RankGap/blob/main/prompts.txt#L1 | NA | NA |
| Visibility & Presence Analysis Agent | Analyze visibility and presence of the target product in the search results | GPT-5-mini | Gemini 2.5 Pro | https://github.com/Better-Boy/RankGap/blob/main/prompts.txt#L20 | NA | NA |
| Competitor & Attribute Gap Analysis | Analyze the attribute gap between the target and competitor products | GPT-5-mini | Gemini 2.5 Pro | https://github.com/Better-Boy/RankGap/blob/main/prompts.txt#L71 | NA | NA |
| Final Recommendations & Action Plan Agent | Final recommendations on what can be done better | GPT-5-mini | Gemini 2.5 Pro | https://github.com/Better-Boy/RankGap/blob/main/prompts.txt#L122 | NA | NA |
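
Inside n8n these are AI Agent nodes, but as a rough point of reference, the Search Query Generator step could be sketched outside n8n like this. The prompt wording below is purely illustrative; the real system prompts are linked in the table above.

```typescript
// Illustrative sketch of the Search Query Generator outside n8n.
// The system prompt here is a stand-in; the actual prompt is linked in the table above.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function generateSearchQueries(product: object, queryCount: number): Promise<string[]> {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content: `Generate ${queryCount} realistic Amazon search queries a shopper might use to find this product. Return one query per line.`,
      },
      { role: "user", content: JSON.stringify(product) },
    ],
  });

  return (response.choices[0].message.content ?? "")
    .split("\n")
    .map((line) => line.trim())
    .filter(Boolean)
    .slice(0, queryCount);
}
```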

Bright Data Verified Node

The following operations of the Bright Data Verified Node are used:

  • Scraping Amazon search page HTML via Web Unlocker
  • Submitting multiple URLs for product scraping via Trigger Collection by URL
  • Monitoring snapshot status
  • Downloading product data snapshots

Application

Tech stack:

  • Backend - n8n Cloud webhook
  • Database - Supabase to store results
  • Frontend - React + Vite + Tailwind
  • Deployment - Vercel

Frontend codebase - https://github.com/Better-Boy/RankGap


Screenshots:

Landing Page & Input:


Results:


Journey

This was my first time working with both n8n and Bright Data. While there was a learning curve, the community support on Discord and the documentation made the process smooth.

Key Learnings:

  • Bright Data’s strength in web data collection became clear when my manual searches kept triggering captchas.

  • n8n streamlined the entire backend process, saving countless hours of work. Its exhaustive set of nodes is powerful enough to build even the most complex workflows.

Challenges Encountered & How I Overcame Them:

  • Even when a Bright Data snapshot shows as “ready,” downloading sometimes fails with status: building. Increasing the wait time from 5s → 10s solved this.

  • In production, Cloudflare times out requests after 100 seconds, while n8n workflows can run up to 3 minutes; my workflow often needed 3–5 minutes. So the webhook responds immediately and triggers the pipeline as a background process, and the pipeline stores its results in Supabase under the UUID execution_id supplied by the frontend. By long-polling Supabase from the frontend, I ensured results were fetched once available (see the sketch below).
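
The long-polling piece boils down to something like this on the frontend (table and column names are assumptions; only execution_id comes from the actual payload):

```typescript
// Frontend long-polling sketch. Table and column names are assumptions;
// execution_id is the UUID generated when the webhook was triggered.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://<project>.supabase.co", "<anon-key>"); // placeholders

async function waitForResult(executionId: string, intervalMs = 5000, maxAttempts = 60) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { data, error } = await supabase
      .from("analysis_results") // hypothetical table name
      .select("*")
      .eq("execution_id", executionId)
      .maybeSingle();

    if (error) throw error;
    if (data) return data; // the workflow has finished and stored the analysis

    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for the workflow result");
}
```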

Building RankGap required handling the complexities of Amazon search, real-time scraping, and nested n8n loops — but the result is a powerful system that makes Amazon SEO more transparent and actionable.

If you found this useful, press ❤️ Like or drop a comment.
Thanks for reading! 🚀
