I released GEO Optimizer v4.10.0.
The codename for this release is Veil.
Not because this version adds one huge visible feature, but because it focuses on something more important for the long-term direction of the project: making hidden signals easier to detect, classify, and reason about.
GEO Optimizer started as a simple idea:
Can we audit whether a website is technically and semantically ready to be cited by AI search engines?
That question looks simple.
It is not.
Classic SEO tooling is mostly built around crawling, indexing, ranking, metadata, schema, performance, content structure, links, and discoverability in traditional search results.
Those things still matter.
But AI search adds a different layer.
A page does not only need to be found.
It needs to be understood.
It needs to be trusted.
It needs to be extractable.
It needs to be citeable.
It needs to survive summarization without losing meaning.
It needs to expose enough structure for an AI system to decide, among many possible sources, that this page is worth using in a generated answer.
That is the problem GEO Optimizer is trying to measure.
Not perfectly.
Not magically.
But technically, iteratively, and in the open.
What GEO Optimizer is
GEO Optimizer is an open-source Python toolkit for Generative Engine Optimization.
The goal is to audit, optimize, and monitor whether a website is ready for AI search systems such as ChatGPT, Perplexity, Claude, Gemini, and similar answer engines.
You can run it from the CLI:
```bash
pip install --upgrade geo-optimizer-skill
geo audit --url https://example.com
```
You can audit a sitemap:
```bash
geo audit --sitemap https://example.com/sitemap.xml --max-urls 25
```
You can compare before and after versions of a page:
```bash
geo diff \
  --before https://example.com/page-old \
  --after https://example.com/page-new
```
You can track changes over time:
```bash
geo audit --url https://example.com --save-history --regression
geo history --url https://example.com
```
You can also generate files and structures that help AI discovery:
```bash
geo llms --base-url https://example.com --output ./public/llms.txt
geo schema --type faq --url https://example.com
```
The basic workflow is intentionally simple.
Give GEO Optimizer a URL.
It returns a score, findings, and recommendations.
The larger goal, however, is not to create another vanity score.
The goal is to build a practical audit layer for a new kind of visibility problem.
Why this release matters
v4.10.0 is not about adding more checklist items.
It is about refining the architecture of the signals.
That distinction matters.
A checklist asks:
- Is there schema?
- Is there a title?
- Is there a meta description?
- Is there an llms.txt file?
- Are AI crawlers blocked?
Those questions are useful, but they are only the first layer.
A signal-based audit asks deeper questions:
- Is the content actually reliable enough to cite?
- Are claims supported or likely to trigger hallucinations?
- Does the page satisfy the intent of AI search queries?
- Is the site consistent across technical, semantic, and trust layers?
- Can the content be interpreted without forcing an AI system to guess?
- Can future audits compare signal movement over time?
That is the direction of v4.10.0.
It is less about surface-level optimization and more about operationalizing AI search readiness.
1. Hallucination Bait Detection
The first major addition in v4.10.0 is Hallucination Bait Detection.
This is one of the areas I care about most.
When people talk about GEO, they often focus on how to get cited by AI systems.
That is only half of the problem.
The other half is this:
What happens when your content gives an AI system just enough information to sound confident, but not enough information to be accurate?
That is hallucination bait.
A page can unintentionally encourage bad AI output when it contains content patterns such as:
- unsourced statistics;
- absolute claims without evidence;
- speculative statements written as facts;
- vague numerical ranges;
- AI-generated language with no grounding;
- self-citations used as proof;
- missing units in measurements;
- sensitive claims without proper context or disclaimers.
These patterns matter because AI search engines do not only retrieve text.
They interpret it.
They compress it.
They combine it with other sources.
They may cite it.
If your page contains unsupported or ambiguous claims, the problem is not only that the page may rank poorly.
The problem is that it may be cited incorrectly, summarized incorrectly, or avoided entirely because the trust signals are weak.
In v4.10.0, GEO Optimizer detects eight hallucination-bait patterns and assigns severity levels.
The goal is not to punish content.
The goal is to make risky patterns visible before they become reputation problems.
For example, consider this sentence:
This method improves AI visibility by 300% for every website.
That sentence has multiple problems.
It is absolute.
It is unsourced.
It uses a strong statistical claim.
It generalizes across every website.
It is exactly the kind of sentence that can look good in marketing copy and bad in an AI-generated answer.
A better version would be:
In our internal tests on selected pages, this method improved citation readiness scores by up to 30%, depending on content structure, schema quality, and source clarity.
Still not perfect.
But it is more bounded.
More specific.
Less likely to become hallucination bait.
That is the type of distinction GEO tooling needs to surface.
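To make the mechanics concrete, here is a minimal sketch of how regex-based bait detection can work. The patterns, severity labels, and data shapes below are assumptions I made for this post, not the rule set GEO Optimizer actually ships.

```python
import re
from dataclasses import dataclass

@dataclass
class BaitFinding:
    label: str     # which bait pattern matched
    severity: str  # high / medium / low
    excerpt: str   # surrounding text for review

# Illustrative rules only; the real detector ships its own pattern set.
RULES = [
    (r"\b(always|never|every|guaranteed)\b", "high", "absolute claim"),
    (r"\b\d{1,3}(?:\.\d+)?%", "medium", "statistic without a nearby source"),
    (r"\bup to \d+", "low", "vague numerical range"),
]

def detect_bait(text: str) -> list[BaitFinding]:
    findings = []
    for pattern, severity, label in RULES:
        for match in re.finditer(pattern, text, re.IGNORECASE):
            start = max(match.start() - 30, 0)
            findings.append(BaitFinding(label, severity, text[start:match.end() + 30]))
    return findings

sentence = "This method improves AI visibility by 300% for every website."
for finding in detect_bait(sentence):
    print(finding.severity, "|", finding.label, "|", finding.excerpt)
```

A real detector also needs context: a percentage next to a linked source is evidence, not bait. That is why severity levels matter more than binary flags.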
2. AI Search Intent Mapping
The second important addition is AI Search Intent Mapping.
Traditional SEO already works with search intent.
But AI search changes how intent is expressed and satisfied.
A user does not always type a short query anymore.
They may ask:
What is the best way to optimize a website so ChatGPT or Perplexity can cite it?
Or:
Compare the main tools for AI search visibility and explain which one is best for a developer workflow.
Or:
Give me a practical checklist to make my SaaS website more citeable by AI answer engines.
These are not just keywords.
They are tasks.
They combine research, comparison, recommendation, explanation, and action.
v4.10.0 analyzes content across four intent categories:
- informational;
- navigational;
- transactional;
- commercial.
The intent mapping works with pattern matching in both English and Italian.
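As a rough illustration of what bilingual pattern matching can look like, here is a hedged sketch. The keyword lists are invented for this example; the tool's real patterns are more extensive.

```python
import re

# Invented bilingual keyword patterns, for illustration only.
INTENT_PATTERNS = {
    "informational": [r"\bwhat is\b", r"\bhow to\b", r"\bcos'è\b", r"\bcome funziona\b"],
    "navigational":  [r"\blogin\b", r"\bdocumentation\b", r"\bsito ufficiale\b"],
    "transactional": [r"\bbuy\b", r"\bdownload\b", r"\bacquista\b", r"\bscarica\b"],
    "commercial":    [r"\bbest\b", r"\bcompare\b", r"\bmigliore\b", r"\bconfronto\b"],
}

def map_intents(text: str) -> dict[str, int]:
    """Count pattern hits per intent category for a page's text."""
    lowered = text.lower()
    return {
        intent: sum(len(re.findall(p, lowered)) for p in patterns)
        for intent, patterns in INTENT_PATTERNS.items()
    }

page = "Compare the best GEO tools. Cos'è GEO e come funziona? Download the CLI."
print(map_intents(page))
# {'informational': 2, 'navigational': 0, 'transactional': 1, 'commercial': 2}
```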
This matters because many websites are technically optimized, but they do not clearly satisfy the type of question an AI system is trying to answer.
A page may be informative but not comparative.
A page may be commercial but not trustworthy.
A page may describe a product but fail to answer the actual buyer questions.
A page may have strong copy but weak extraction points.
For AI search, this is a problem.
Generated answers often need to map a user query to source passages quickly.
If your content does not expose its purpose clearly, the AI system has to infer too much.
That increases ambiguity.
And ambiguity is bad for citation.
In practical terms, AI Search Intent Mapping helps answer questions like:
- What kind of AI search queries could this page satisfy?
- Which intent categories are underrepresented?
- Is the page useful for comparison-style queries?
- Does it provide enough direct answers?
- Does it contain actionable material or only generic positioning?
- Does the content match the likely prompt patterns users will write?
This is the bridge between content strategy and technical GEO auditing.
3. HTTP retry with exponential backoff
The third addition is less glamorous, but important: HTTP retry with exponential backoff.
Audit tools live or die by reliability.
If a tool audits a website once and fails because of a transient timeout, a temporary 5xx error, a connection reset, or rate limiting, the result is not useful.
In v4.10.0, fetch_url() now retries transient failures with configurable attempts and backoff.
That includes:
- timeouts;
- 5xx responses;
- connection errors;
- 429 rate limiting.
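The general shape is the classic pattern below. This is a hedged sketch of the idea, not the project's actual fetch_url() implementation; the parameter names, delays, and status-code set are assumptions.

```python
import time
import requests

RETRYABLE_STATUS = {429, 500, 502, 503, 504}  # treated as transient by assumption

def fetch_with_retry(url: str, max_attempts: int = 3, base_delay: float = 1.0) -> requests.Response:
    """Retry transient failures with exponential backoff: 1s, 2s, 4s, ..."""
    response = None
    for attempt in range(max_attempts):
        try:
            response = requests.get(url, timeout=10)
            if response.status_code not in RETRYABLE_STATUS:
                return response  # success or a non-transient error: stop retrying
        except (requests.Timeout, requests.ConnectionError):
            response = None  # transient network failure: retry
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    if response is not None:
        return response  # last retryable response, surfaced to the caller
    raise requests.ConnectionError(f"all {max_attempts} attempts failed for {url}")
```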
This is not a flashy feature.
But it matters for real usage.
Especially when auditing:
- client websites;
- production sites behind CDNs;
- large sitemaps;
- staging environments;
- slower CMS installations;
- sites with intermittent network behavior.
A serious audit tool cannot treat every temporary failure as a final truth.
It needs to distinguish between:
- a real accessibility problem;
- a temporary network problem;
- a server that is rate limiting;
- a target that consistently fails.
That distinction becomes even more important as GEO Optimizer moves toward monitoring, tracking, and longitudinal analysis.
If the audit history is noisy, the insight becomes noisy.
Reliability is part of the signal architecture.
4. Telemetry System
v4.10.0 also introduces a structured telemetry system.
This is local telemetry, backed by SQLite in the user's environment.
The goal is not surveillance.
The goal is operational visibility.
GEO Optimizer now tracks structured events with a geo_ prefix, including events such as:
- geo_audit_run;
- geo_score_improved;
- geo_suggestion_applied;
- geo_api_error;
- geo_badge_generated.
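Here is a minimal sketch of what local, SQLite-backed event recording can look like. The table layout and payload shape are assumptions made for this example, not the tool's actual telemetry schema.

```python
import json
import sqlite3
import time

def record_event(db_path: str, name: str, payload: dict) -> None:
    """Append one structured event to a local SQLite database."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS events (ts REAL, name TEXT, payload TEXT)"
        )
        conn.execute(
            "INSERT INTO events VALUES (?, ?, ?)",
            (time.time(), name, json.dumps(payload)),
        )

record_event("geo_local.db", "geo_audit_run", {"url": "https://example.com", "score": 72})
```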
This matters because audits are not one-time artifacts.
A single audit tells you where you are now.
Telemetry helps you understand movement.
What changed?
Which suggestions were applied?
Did the score improve?
Did errors increase?
Are badges being generated?
Are audits being repeated over time?
This is the foundation for more serious workflows:
- dashboards;
- regression detection;
- client reporting;
- CI/CD quality gates;
- historical visibility tracking;
- release impact analysis;
- product analytics for the audit engine itself.
GEO is becoming less of a content trick and more of an operations problem.
Telemetry is part of that transition.
Why I am focusing on signals instead of checklists
The easy version of GEO is a checklist.
Add llms.txt.
Add schema.
Allow AI crawlers.
Write clearer headings.
Add sources.
Use statistics.
Create FAQ sections.
All of that can help.
But if GEO stops there, it becomes shallow very quickly.
A website can pass a checklist and still be weak for AI citation.
For example:
- it can have schema, but poor entity consistency;
- it can have citations, but cite low-quality or circular sources;
- it can have long content, but no extractable definitions;
- it can allow AI crawlers, but block important assets through CDN behavior;
- it can publish many pages, but expose no clear topical authority;
- it can use strong claims, but provide no verification path;
- it can rank well in classic search, but be hard to summarize reliably.
This is why I am moving GEO Optimizer toward signal architecture.
A good GEO audit should not only ask whether something exists.
It should ask whether that thing contributes to citation readiness.
A title tag exists.
But does it clarify the page?
Schema exists.
But does it describe the right entity?
Sources exist.
But are they useful, traceable, and relevant?
Statistics exist.
But are they grounded?
The page has content.
But can a model extract a safe answer from it?
This is the difference between compliance and usefulness.
And AI search rewards usefulness much more than mechanical compliance.
Example workflow
A simple workflow looks like this:
```bash
pip install --upgrade geo-optimizer-skill
geo audit --url https://example.com --format rich
```
Then inspect the weak areas.
If the score is low because the site blocks AI crawlers, fix robots.txt.
If the score is low because schema is missing, add structured data.
If citability is low, improve definitions, source clarity, statistics, and content structure.
If hallucination bait appears, rewrite risky claims.
If intent mapping is weak, restructure the page around the questions users actually ask in AI search systems.
Then run the audit again:
```bash
geo audit --url https://example.com --save-history --regression
geo history --url https://example.com
```
That is the workflow I want GEO Optimizer to support:
- audit the current state;
- identify weak signals;
- apply changes;
- compare results;
- monitor over time;
- catch regressions before visibility drops.
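In CI, the same loop can become a quality gate. The sketch below assumes that geo audit --regression exits non-zero when a regression is detected; that exit-code behavior is my assumption for this example, so check the docs before wiring it into a pipeline.

```python
import subprocess
import sys

# Hypothetical CI gate built on the CLI commands shown above.
result = subprocess.run(
    ["geo", "audit", "--url", "https://example.com", "--save-history", "--regression"]
)
if result.returncode != 0:
    print("GEO audit reported a regression; blocking this release.")
    sys.exit(1)
```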
Tests and reliability
This release also continues the focus on test coverage and mocked execution.
v4.10.0 ships with 1425 tests, all mocked, with zero network access during the test suite.
That is important for this kind of tool.
A GEO audit engine touches many unstable surfaces:
- remote websites;
- HTML parsing;
- schema extraction;
- robots rules;
- HTTP behavior;
- content heuristics;
- scoring thresholds;
- output formats;
- local history;
- telemetry;
- API and CLI boundaries.
If tests depend on real websites, the test suite becomes unreliable.
A site changes and your test breaks.
A server times out and your CI fails.
A third-party page updates its markup and your expected output is no longer valid.
Mocked tests keep the engine deterministic.
For an audit tool, determinism is not a luxury.
It is part of trust.
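The pattern is standard: patch the HTTP boundary and feed canned responses. Here is a self-contained sketch using unittest.mock; the toy title_of() function stands in for a real audit check and is not part of the project's API.

```python
from unittest.mock import MagicMock, patch
import re
import requests

def title_of(url: str) -> str:
    """Toy stand-in for an audit check, used only to show the mocking pattern."""
    html = requests.get(url, timeout=10).text
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return match.group(1) if match else ""

def test_title_extraction_is_deterministic():
    fake = MagicMock(text="<html><title>Demo</title></html>")
    with patch("requests.get", return_value=fake):  # no real HTTP happens
        assert title_of("https://example.com") == "Demo"

test_title_extraction_is_deterministic()
print("ok: deterministic, zero network access")
```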
CI is also adopting ruff and gradually rolling out mypy, because the project is becoming large enough that type boundaries and static checks are no longer optional niceties.
They are maintenance infrastructure.
What this means for the roadmap
The public roadmap for GEO Optimizer is now more deliberate.
I do not want to ship noisy releases just to make the repository look active.
I want focused waves.
v4.10.0 is the signal refinement release.
The next direction is broader:
- better retrieval surface analysis;
- deeper scoring recalibration;
- stronger structural pattern recognition;
- clearer reporting;
- better web experience;
- more useful monitoring;
- more reliable citation-quality analysis;
- stronger factual accuracy checks.
The long-term goal is to move GEO Optimizer from a CLI audit tool into a broader ecosystem for AI search visibility.
That means:
- CLI for developers;
- web version for faster testing;
- reports for teams and clients;
- monitoring for change detection;
- MCP integration for AI-assisted workflows;
- structured outputs for CI/CD and automation;
- open-source core for transparency.
I still think the CLI matters.
A lot.
Developers need local, scriptable, inspectable tools.
But not every useful GEO workflow should require living in the terminal.
That is why the web layer matters too.
The CLI gives control.
The web version gives accessibility.
The open-source engine keeps the system inspectable.
GEO is not “SEO is dead”
I do not believe SEO is dead.
I think SEO is becoming more layered.
Traditional search still matters.
Technical SEO still matters.
Content quality still matters.
Schema still matters.
Crawlability still matters.
But AI search introduces new questions:
- Can this page be safely summarized?
- Can this claim be verified?
- Is the entity behind the content clear?
- Does the page expose useful extraction points?
- Would an AI answer engine cite this source?
- Would it cite it accurately?
- Would it choose a competitor instead?
That is the layer I am building for.
The future of organic visibility will not be only about being found.
It will also be about being understood, trusted, and cited.
Try it
Install or upgrade:
```bash
pip install --upgrade geo-optimizer-skill
```
Run an audit:
```bash
geo audit --url https://example.com
```
Try a richer output:
```bash
geo audit --url https://example.com --format rich
```
Explore the project:
- GitHub: https://github.com/Auriti-Labs/geo-optimizer-skill
- Web demo: https://geo-optimizer-web.onrender.com
- Documentation: https://auriti-labs.github.io/geo-optimizer-skill/
- Release v4.10.0: https://github.com/Auriti-Labs/geo-optimizer-skill/releases/tag/v4.10.0
If you try it on your own site, I would be especially interested in feedback around:
- hallucination bait findings;
- intent mapping quality;
- false positives;
- confusing recommendations;
- missing GEO signals;
- reporting clarity;
- CI/CD use cases;
- web version usability.
GEO is still early.
That is exactly why open tools matter.
We need fewer vague claims about “AI visibility” and more inspectable systems that developers, founders, SEO specialists, and content teams can actually test.
That is what I am trying to build with GEO Optimizer.
v4.10.0 is not the final destination.
It is another step toward a more measurable, technical, and transparent approach to AI search visibility.