
I Built a Chrome Extension to Measure AI Visibility — Here’s What I Learned

Ali Farhat on April 07, 2026

I have been working with SEO, content systems, and automation for years, and recently something started to break. Pages that should perform well ba...
Rolf W

The part about structure and extractability is interesting. We recently refactored a few articles to be more direct and noticed they started showing up more often in AI generated answers. We did not track it formally though. This feels like something that should have existed earlier.

Ali Farhat

Yes, that aligns exactly with what I kept seeing.

Small structural changes often have more impact than rewriting entire sections. Especially making answers more explicit and reducing ambiguity.

Most teams are not tracking this yet, which is why it feels a bit like guesswork without something measurable.

Rafał Groń

The "selection vs ranking" framing is spot on. I've been experiencing this from the product side — I built an AI search plugin for WooCommerce, and started tracking when AI systems began recommending it.
The turning point was when I noticed ChatGPT's crawler hitting my site, and then confirmed a real visitor arriving via utm_source=chatgpt.com who clicked through to pricing. That was not from SEO. That was from being selected.
What I found matches your observations — structured, explicit content with clear answers performed way better than traditional marketing pages. I wrote about the practical side of optimizing for this: How to Make Your WooCommerce Store Discoverable by ChatGPT
Would be curious to test your extension on some of my pages and compare with what I'm seeing in actual AI referral data.

BBeigth

This actually explains something I have been noticing but could not really pinpoint. Some of our pages rank well but never seem to get picked up in AI answers. I always assumed it was randomness, but this “selection vs ranking” framing makes a lot of sense.

Curious if you noticed patterns across industries or if this is mostly consistent regardless of niche?

Ali Farhat

That is exactly what triggered this for me as well.

From what I have seen, the pattern is surprisingly consistent across niches. The differences are more in how strict the selection is, not in the underlying behavior.

Highly competitive spaces just expose the problem faster, but even in smaller niches the same rules seem to apply.

Socials Megallm

Interesting project! I'm guessing JavaScript does a big part of the extension logic for interacting with the DOM and Chrome's APIs. Curious if you hit any performance bottlenecks there.

Ali Farhat

Yeah, JavaScript does most of the heavy lifting on the extension side, especially for DOM parsing and structuring the content before analysis.

Performance-wise, the main challenge is not the parsing itself, but keeping everything responsive while analyzing different types of pages. Some pages are clean and fast, others are messy and require more processing.

The approach I took was to keep the client-side work lightweight and focus on extracting just the relevant parts, so you avoid unnecessary overhead. That keeps it fast enough to use in a normal browsing workflow.
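To make the "extract just the relevant parts" idea concrete, here is a rough sketch. In the extension this would run as a content script against the live DOM; the string-based version below is a simplified stand-in so the example is self-contained, and `extractRelevantParts` is an illustrative name, not the extension's actual code.

```javascript
// Simplified sketch: pull out the extractable skeleton of a page
// (headings and paragraph text) while ignoring scripts and styles.
function extractRelevantParts(html) {
  const cleaned = html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '');

  // Headings carry most of the structural signal.
  const headings = [...cleaned.matchAll(/<h([1-6])[^>]*>([\s\S]*?)<\/h\1>/gi)]
    .map(m => m[2].replace(/<[^>]+>/g, '').trim());

  // Paragraphs are the candidate "answer" text.
  const paragraphs = [...cleaned.matchAll(/<p[^>]*>([\s\S]*?)<\/p>/gi)]
    .map(m => m[1].replace(/<[^>]+>/g, '').trim())
    .filter(t => t.length > 0);

  return { headings, paragraphs };
}

const sample = `
  <h1>AI Visibility</h1>
  <script>trackEverything();</script>
  <p>Pages that rank well are not always selected.</p>
  <h2>Why structure matters</h2>
  <p>Explicit answers are easier to extract.</p>`;

console.log(extractRelevantParts(sample));
```

Keeping the client-side pass down to a couple of regex-style sweeps like this is what keeps it fast enough for a normal browsing workflow; anything heavier can happen after extraction.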

HubSpotTraining

The local storage choice is a nice touch. A lot of tools in this space immediately send everything to external APIs.

Ali Farhat

Good point. Keeping everything local also keeps the analysis lightweight and focused, so you can quickly evaluate pages without complex setup or sending data anywhere.

The goal is to make it practical in real workflows, without slowing you down.
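For anyone curious, the local-first pattern can look roughly like this. In an extension it would back onto `chrome.storage.local`; the in-memory fallback below keeps the sketch runnable outside a browser, and every name here (`saveAnalysis`, the key scheme) is illustrative, not the extension's actual code.

```javascript
// Hypothetical sketch: store analysis results on-device instead of
// shipping them to an external API. Falls back to an in-memory Map
// when the chrome.storage API is not available (e.g. in Node).
const store = (typeof chrome !== 'undefined' && chrome.storage)
  ? chrome.storage.local
  : (() => {
      const mem = new Map();
      return {
        set: (obj, cb) => {
          Object.entries(obj).forEach(([k, v]) => mem.set(k, v));
          if (cb) cb();
        },
        get: (key, cb) => cb({ [key]: mem.get(key) }),
      };
    })();

function saveAnalysis(url, result, done) {
  // Key results by page URL so repeat visits overwrite stale scores.
  store.set({ [url]: { result, analyzedAt: Date.now() } }, done);
}

saveAnalysis('https://example.com/page', { score: 0.8 }, () => {
  store.get('https://example.com/page', entry => {
    console.log(entry['https://example.com/page'].result);
  });
});
```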

GetTraxx

I like the idea, but I wonder how far you can go with approximating AI behavior without actually querying models. Did you experiment with that or is this more of a heuristic approach?

Ali Farhat

Querying models directly gives useful signals, but it introduces noise and inconsistency depending on prompts and context. For this use case, I found that a structured heuristic approach is more stable for iteration.

It is not about perfectly simulating AI, but about identifying the patterns that consistently correlate with selection.
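As a toy illustration of what "structured heuristics" can mean here: deterministic checks like the ones below produce the same score on every run, which is what makes them easier to iterate on than live model queries. The specific signals and the `selectionHeuristics` name are my assumptions for the example, not the extension's actual scoring.

```javascript
// Hypothetical heuristic signals that tend to correlate with a passage
// being "selectable" by an answer engine. Deterministic by design.
function selectionHeuristics(text) {
  const sentences = text.split(/(?<=[.!?])\s+/).filter(s => s.length > 0);
  const avgLen =
    sentences.reduce((sum, s) => sum + s.split(/\s+/).length, 0) /
    sentences.length;

  return {
    // Short, direct sentences are easier to quote in a generated answer.
    directness: avgLen <= 20 ? 1 : 0,
    // Explicit answer phrasing ("X is ...", "X means ...") is easy to lift.
    hasExplicitAnswer: /\b(is|are|means)\b/i.test(text) ? 1 : 0,
    // Hedging words add ambiguity that answer engines tend to skip.
    hedgingPenalty: (text.match(/\b(maybe|possibly|arguably)\b/gi) || []).length,
  };
}

console.log(selectionHeuristics(
  'A webhook is an HTTP callback. To receive one, expose a public endpoint.'
));
```

Unlike a prompt sent to a live model, this returns identical numbers for identical input, so you can change a page, re-run it, and trust that any score movement came from the page.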

Jan Janssen

This reminds me a bit of featured snippets back in the day, where structure suddenly mattered a lot more than people expected. Except this feels like a much bigger version of that shift.

Ali Farhat

That is actually a very good comparison.

Featured snippets were an early signal of this direction, but AI takes it much further. Instead of extracting a fragment, it builds a full answer.

So the requirements for clarity and structure become even stricter.