Paige Bailey for Google AI

I built a product "BS Detector" using Gemini 2.0 Flash and AI Studio 🕵️‍♂️

Let's be honest: online product reviews are often... broken. You see a gadget with 4.8 stars, but when you dig in, the reviews are either from bots, "incentivized" reviewers, or people who used the product for 5 minutes.

I got tired of tab-switching between Amazon, Reddit, and YouTube to find the actual truth about a product. So, I built a Chrome Extension to do it for me! It’s called The BS Detector.

It lives in the Google Chrome Side Panel, scrapes the product name, and uses Google Gemini 2.0 Flash (with Search Grounding) to cross-reference the product against real discussions on Reddit and independent forums. You could also use Gemini 2.5 Flash-Lite or Gemini 2.5 Flash.

Here is how I built it, the tech stack I used, and the "Aha!" moment regarding JSON schemas.

The Tech Stack

I wanted to keep this lightweight, which meant no backend server. That means that the user has to supply their own API key, but the footprint for project files is pretty minimal:

  • Manifest V3: The standard for modern Chrome Extensions.
  • Chrome Side Panel API: Better than a popup because it stays open while you browse.
  • Gemini 2.0 Flash: Fast, cheap (free tier available), and supports Google Search Grounding.
  • Vanilla JS & CSS: Because sometimes you don't need a framework.

1. The manifest and side panel

Popups are annoying for research tools because they close when you click away. The Side Panel is the perfect UX for this.

In manifest.json, we define the behavior:

"side_panel": {
  "default_path": "sidepanel.html"
},
"permissions": ["sidePanel", "activeTab", "scripting", "storage"]

And in background.js, we ensure the panel opens when the icon is clicked:

chrome.sidePanel
  .setPanelBehavior({ openPanelOnActionClick: true })
  .catch((error) => console.error(error));
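Since there's no backend server, the user-supplied API key has to live somewhere on the client. The extension already requests the "storage" permission, so one option is chrome.storage.local. Here's a minimal sketch, assuming a hypothetical settings input in the side panel (the element IDs and storage key name are mine, not necessarily what the repo uses):

// Persist the user-supplied Gemini API key (sketch; IDs are illustrative)
const keyInput = document.getElementById('api-key-input');
const saveButton = document.getElementById('save-key');

saveButton.addEventListener('click', () => {
  // chrome.storage.local keeps the key on this device only
  chrome.storage.local.set({ geminiApiKey: keyInput.value.trim() });
});

// Read it back before calling the API
async function getApiKey() {
  const { geminiApiKey } = await chrome.storage.local.get('geminiApiKey');
  return geminiApiKey;
}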

2. Getting the context

When the user clicks "Analyze," we need to know what they are looking at. I used chrome.scripting to inject a quick function into the active tab to grab the title.

We look for specific Amazon ID selectors first, then fall back to the <h1> tag.

chrome.scripting.executeScript({
    target: { tabId: tab.id },
    func: () => {
        const amzn = document.getElementById('productTitle');
        const h1 = document.querySelector('h1');
        return amzn ? amzn.innerText : (h1 ? h1.innerText : document.title);
    }
}, (results) => {
    const productTitle = results?.[0]?.result;
    // Update the UI with the product title
});

3. Gemini 2.0 Flash + Search grounding

This is where the magic happens. It's no secret that LLMs hallucinate: if I ask GPT-4 about a specific generic dropshipped air purifier, it might make up features or capabilities.

To fix this, I used Gemini's Search Grounding. This allows the model to query Google Search live during generation.

Here is the prompt strategy I used in sidepanel.js:

const prompt = `
You are a cynical consumer investigator. Analyze this product: "${productTitle}".

1. Use Google Search to find discussions on Reddit, YouTube, and independent forums.
2. Ignore marketing fluff. Look for "dealbreakers".
3. Determine if this is a high-quality item or generic "dropshipped" junk.
`;

I pass tools: [{ googleSearch: {} }] in the API payload. This tells Gemini: "If you don't know, Google it."
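Concretely, the request ends up looking roughly like this. It's a sketch of the REST call: the endpoint and model string are my assumption (the standard generateContent endpoint), and the fetch sits inside the extension's async analyze handler:

// Inside the async "Analyze" handler in sidepanel.js (sketch)
const API_URL = `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${apiKey}`;

const response = await fetch(API_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    contents: [{ parts: [{ text: prompt }] }],
    // Search Grounding: let the model query Google Search during generation
    tools: [{ googleSearch: {} }]
    // generationConfig with the responseSchema is added here too (see the next section)
  })
});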

4. Controlled JSON output

The biggest pain in AI engineering is parsing the response. You usually get markdown, backticks, or conversational filler ("Here is the JSON you asked for...").

Gemini 2.0 supports Controlled Generation via responseSchema. You can define exactly what the JSON should look like, and the API enforces it, which means no more regex parsing!

generationConfig: {
    responseMimeType: "application/json",
    responseSchema: {
        type: "OBJECT",
        properties: {
            real_score: { type: "NUMBER" },
            verdict: { type: "STRING" },
            dealbreakers: { 
                type: "ARRAY", 
                items: { type: "STRING" } 
            },
            pros: { 
                type: "ARRAY", 
                items: { type: "STRING" } 
            },
            source_count: { type: "NUMBER" }
        },
        required: ["real_score", "verdict", "dealbreakers", "pros"]
    }
}

Because of this schema, in my fetch request, I can simply do:

const data = await response.json();
const analysis = JSON.parse(data.candidates[0].content.parts[0].text);
// It just works. Every time.
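Once analysis is parsed, rendering it in the side panel is plain DOM work. A quick sketch (the element IDs are hypothetical, just to show the shape of it):

// Render the structured fields into the side panel (IDs are illustrative)
document.getElementById('score').textContent = `${analysis.real_score}/10`;
document.getElementById('verdict').textContent = analysis.verdict;

const list = document.getElementById('dealbreakers');
list.innerHTML = '';
for (const item of analysis.dealbreakers) {
  const li = document.createElement('li');
  li.textContent = item; // textContent avoids injecting model output as HTML
  list.appendChild(li);
}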

The results

The extension takes the product title, reads through the "BS" marketing, checks Reddit threads about the item, and outputs:

  1. A Real Sentiment Score: (e.g., 6.5/10).
  2. The Verdict: A concise summary of actual user experiences.
  3. Dealbreakers: The stuff Amazon hides (e.g., "proprietary charging cable," "app requires login").

TL;DR

This project took a few minutes to build in AI Studio, but saves me time every time I shop. It’s a great example of how powerful Search Grounding is when combined with client-side extensions. You don't need a massive backend to build useful AI tools anymore.

Future Improvements:

  • Add history storage to compare products.
  • Analyze the price history as well, to see how it's changed!
  • Detect specific "fake review" patterns in the text itself.

The code is open source (link below). Go fork it and stop buying junk: https://github.com/dynamicwebpaige/product-bs-detector
