DEV Community

Ali Farhat


I Built a Chrome Extension to Measure AI Visibility — Here’s What I Learned

I have been working with SEO, content systems, and automation for years, and recently something started to break. Pages that should perform well on every traditional metric were simply not showing up in AI-generated answers. Not occasionally, but consistently. Strong domains, solid backlinks, well-written content. Ignored.

At first, I assumed it was noise. Maybe sampling issues, maybe inconsistent prompting, maybe just coincidence. But after testing across multiple sites, industries, and content types, the pattern became impossible to ignore.

AI systems are not ranking content.

They are selecting it.

And that single shift changes everything.

The Problem Most People Are Missing

Most teams are still optimizing for visibility in search engines. Rankings, impressions, CTR, backlinks. All the familiar metrics.

But users are shifting behavior faster than most teams can adapt. Instead of clicking through results, they are asking questions and consuming answers directly inside AI systems.

That creates a hidden problem.

Your content might still rank.

Your traffic might still look stable.

But your visibility inside AI systems can silently drop to zero.

And you would not even notice it.

Why Traditional SEO Breaks in AI Systems

The difference is not subtle. It is structural.

| Traditional SEO | AI Visibility |
| --- | --- |
| Ranking determines exposure | Selection determines exposure |
| Multiple results compete | One answer dominates |
| Authority is critical | Clarity and structure dominate |
| Users compare sources | Users consume one answer |

Search engines distribute attention. AI systems concentrate it.

That means the margin for error is gone. If your content is not selected, it does not matter how good it is. It simply does not exist in that interaction.

So I Built a Tool to Test This

I needed a way to validate what I was seeing. Not assumptions, not opinions, but something measurable.

The idea was simple.

Open a page.

Run an analysis.

Understand instantly if it is likely to be selected by AI.

That became GEO Checker.

Not another SEO tool. Not another dashboard. Just a fast way to answer a question most teams are not even asking yet.

What the Chrome Extension Actually Does

The Chrome Extension analyzes any page you visit and gives you an AI visibility score. But the important part is how that score is derived.

It focuses on how usable your content is for an AI system.

At a high level, it evaluates:

  • Structural clarity of the document
  • Logical grouping of information
  • Explicitness of key concepts
  • Ease of extracting direct answers

In other words, it measures how well your content can be interpreted, not how well it can rank.

A Look Under the Hood

The core idea is to simulate how an AI system processes a page without actually replicating a full LLM pipeline.

Instead of treating a page as a single block of text, the content is broken down into smaller semantic units. Headings, sections, and logical chunks are analyzed individually and then combined into an overall score.
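To make the chunking step concrete, here is a minimal sketch that splits text into semantic units, assuming markdown-style headings as section boundaries. This is an illustration only, not the extension's actual parser, which would work on the live DOM rather than raw text:

```javascript
// Hypothetical sketch (not GEO Checker's real code): break a page's text
// into sections, using heading lines as boundaries for semantic units.
function splitIntoSections(text) {
  const sections = [];
  let current = { heading: null, lines: [] };
  for (const line of text.split("\n")) {
    if (/^#{1,3}\s/.test(line)) {
      // Close the previous unit if it carries a heading or any content.
      if (current.heading !== null || current.lines.some((l) => l.trim())) {
        sections.push(current);
      }
      current = { heading: line.replace(/^#+\s*/, ""), lines: [] };
    } else {
      current.lines.push(line);
    }
  }
  sections.push(current);
  return sections.map((s) => ({
    heading: s.heading,
    body: s.lines.join("\n").trim(),
  }));
}

const page = "# Pricing\nPlans start at $10.\n\n## FAQ\nYes, you can cancel anytime.";
console.log(splitIntoSections(page));
```

Each resulting unit can then be scored on its own before the scores are combined, which is what makes section-level weaknesses visible.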

A few of the key signals:

  • Information density

    Is the content actually delivering value, or just filling space with generic phrasing?

  • Context independence

    Can a section stand on its own, or does it rely on external assumptions?

  • Answer proximity

    How quickly a direct answer appears after introducing a topic.

  • Structural consistency

    Does the layout help or hinder interpretation?

  • Ambiguity reduction

    Are terms clearly defined, or left open to interpretation?

This is not about perfectly emulating AI behavior. It is about approximating the conditions under which content gets selected.

And that is enough to expose major weaknesses.
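As a toy illustration of how signals like these could be approximated and combined, here is a sketch for a single section. The filler-word list, the proximity measure, and the weights are all invented for the example; the extension's real scoring is more involved:

```javascript
// Toy heuristics (illustrative only, not GEO Checker's actual formula).

// Information density: share of words that are not generic filler.
const FILLER = new Set(["very", "really", "basically", "actually", "just"]);
function informationDensity(text) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  if (words.length === 0) return 0;
  const meaningful = words.filter((w) => !FILLER.has(w));
  return meaningful.length / words.length;
}

// Answer proximity: how early the first complete sentence ends,
// as a fraction of the section (earlier answer = value closer to 1).
function answerProximity(text) {
  const firstStop = text.indexOf(".");
  if (firstStop === -1) return 0;
  return 1 - firstStop / text.length;
}

// Combine signals into a 0-100 score with arbitrary example weights.
function sectionScore(text) {
  return Math.round(
    100 * (0.6 * informationDensity(text) + 0.4 * answerProximity(text))
  );
}

console.log(sectionScore("GEO Checker scores pages. It is a Chrome extension."));
```

Even crude proxies like these are enough to separate dense, front-loaded sections from vague, meandering ones, which is the point of the approximation.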

What Changed in Version 2.0

The first version proved the concept. But in practice, the real value was not the score itself. It was the iteration loop.

Version 2.0 focuses on that.

Instead of just analyzing pages, it helps you improve them over time.

Key upgrades:

  • URL memory

    The extension remembers pages you have analyzed and instantly retrieves the last known score.

  • History tracking

    Every scan is stored locally in your browser, so you can track improvements over time without relying on external storage or guesswork.

  • Badge score on the icon

    You see the score immediately while browsing, without opening the tool.

These changes sound small, but they fundamentally change how you work. You move from one-time analysis to continuous optimization.
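The bookkeeping behind those three features can be sketched as pure functions. This is my reconstruction for illustration, not the extension's actual code; in a real Manifest V3 extension, the history object would live in `chrome.storage.local` and the badge would be set through `chrome.action.setBadgeText`:

```javascript
// Sketch of the v2.0 bookkeeping (illustrative reconstruction).

// History tracking: append a scan for a URL, keeping scans in order.
function recordScan(history, url, score, timestamp) {
  const scans = history[url] ? history[url].slice() : [];
  scans.push({ score, timestamp });
  return { ...history, [url]: scans };
}

// URL memory: retrieve the last known score for a page, or null.
function lastScore(history, url) {
  const scans = history[url];
  return scans && scans.length ? scans[scans.length - 1].score : null;
}

// Badge text: scores are short numbers; an empty string clears the badge.
function badgeText(score) {
  return score === null ? "" : String(score);
}

let history = {};
history = recordScan(history, "https://example.com", 62, 1);
history = recordScan(history, "https://example.com", 78, 2);
console.log(lastScore(history, "https://example.com")); // 78
console.log(badgeText(lastScore(history, "https://example.com"))); // "78"
```

Keeping the logic pure like this makes it trivial to test, with the storage and badge APIs wired in only at the edges.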

What I Learned From Testing Real Pages

After running this across dozens of sites, the patterns were consistent.

High ranking content often gets ignored when it is not explicit enough. Many pages assume context that AI systems do not infer. Structure plays a bigger role than most teams expect.

Some patterns that kept repeating:

  • Direct answers outperform long introductions
  • Vague language reduces selection probability significantly
  • Long paragraphs decrease interpretability
  • Clear sectioning improves extraction
  • Redundant phrasing lowers information density

What stood out most was how often small structural changes had a bigger impact than rewriting entire pages.

Where Most Content Fails

Most content today is written for human readers. That is still important, but it is no longer sufficient.

AI systems require content that is:

  • Easy to parse
  • Explicit in meaning
  • Contextually complete
  • Structurally predictable

The gap between those requirements and how content is currently written is where visibility is lost.

How This Changes the Way You Work

This is not about replacing SEO. It is about extending it.

You are no longer optimizing only for discovery. You are optimizing for interpretation and extraction.

That means shifting your mindset:

Instead of asking

“Will this page rank?”

You start asking

“Can this page be used as an answer?”

That is a very different question.

How I Use It Day to Day

The workflow is intentionally simple.

  • Open a page
  • Check the score
  • Adjust structure or clarity
  • Re-test

Within seconds, you know whether your changes made an impact.

This removes guesswork and replaces it with feedback.

If You Are Building Content, This Matters

If your growth depends on content, this shift will affect you. Not immediately, but gradually.

You will start seeing:

  • Certain pages lose effectiveness
  • Competitors appear in AI answers
  • Traffic sources shift over time

The biggest risk is not that this is happening. The biggest risk is that you are not measuring it.

Try It Yourself

I built this because I needed a way to understand what was happening.

If you are working on SEO, content, or growth, test your own pages. It takes seconds to see whether a page is strong or weak in terms of AI visibility.

Once you see the patterns, it changes how you think about content.

Final Thought

We are moving from ranking systems to selection systems.

That is not a trend. It is a structural shift.

Most teams are still optimizing for the old model. That creates a temporary advantage for those who adapt early.

The question is simple.

Are you going to be selected, or ignored?

Top comments (9)

Rolf W

The part about structure and extractability is interesting. We recently refactored a few articles to be more direct and noticed they started showing up more often in AI generated answers. We did not track it formally though. This feels like something that should have existed earlier.

Ali Farhat

Yes, that aligns exactly with what I kept seeing.

Small structural changes often have more impact than rewriting entire sections. Especially making answers more explicit and reducing ambiguity.

Most teams are not tracking this yet, which is why it feels a bit like guesswork without something measurable.

BBeigth

This actually explains something I have been noticing but could not really pinpoint. Some of our pages rank well but never seem to get picked up in AI answers. I always assumed it was randomness, but this “selection vs ranking” framing makes a lot of sense.

Curious if you noticed patterns across industries or if this is mostly consistent regardless of niche?

Ali Farhat

That is exactly what triggered this for me as well.

From what I have seen, the pattern is surprisingly consistent across niches. The differences are more in how strict the selection is, not in the underlying behavior.

Highly competitive spaces just expose the problem faster, but even in smaller niches the same rules seem to apply.

Socials Megallm

Interesting project! Guessing JavaScript is a big part of the extension logic for interacting with the DOM and Chrome's API. Curious if you hit any performance bottlenecks with that.

Jan Janssen

This reminds me a bit of featured snippets back in the day, where structure suddenly mattered a lot more than people expected. Except this feels like a much bigger version of that shift.

Ali Farhat

That is actually a very good comparison.

Featured snippets were an early signal of this direction, but AI takes it much further. Instead of extracting a fragment, it builds a full answer.

So the requirements for clarity and structure become even stricter.

GetTraxx

I like the idea, but I wonder how far you can go with approximating AI behavior without actually querying models. Did you experiment with that or is this more of a heuristic approach?

Ali Farhat

Querying models directly gives useful signals, but it introduces noise and inconsistency depending on prompts and context. For this use case, I found that a structured heuristic approach is more stable for iteration.

It is not about perfectly simulating AI, but about identifying the patterns that consistently correlate with selection.