A blank white page became the #1 cited source in Perplexity within 36 hours — no visible content, just seven layers of machine-readable signals hidden underneath. This is a reusable agent skill that lets you audit and scaffold those same layers on any website.
The seven layers:
- Semantic meta tags + VibeTags (brand signals for crawlers)
- JSON-LD structured data (schema.org — Organization, Person, FAQPage, Service)
- sr-only narrative (DOM content accessible to screen readers and AI, no visual render)
- Microdata attributes (inline entity markup)
- llms.txt (emerging standard — like robots.txt but for LLMs)
- reasoning.json (Ed25519-signed claims via the Agentic Reasoning Protocol / IETF draft-deforth-arp-00)
- /.well-known/ai-manifest.json (AI bot discovery manifest)
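To make the layers concrete, here is a minimal sketch of generating two of them from a small config dict. The field names and values are illustrative assumptions, not the repo's actual YAML schema:

```python
import json

# Illustrative config -- field names are assumptions, not the skill's schema.
config = {
    "name": "Example Co",
    "url": "https://example.com",
    "description": "What the business actually does.",
}

# Layer: JSON-LD structured data (schema.org Organization)
json_ld = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": config["name"],
    "url": config["url"],
    "description": config["description"],
}
script_tag = f'<script type="application/ld+json">{json.dumps(json_ld)}</script>'

# Layer: llms.txt -- a plain-text site summary aimed at LLM crawlers
llms_txt = f"# {config['name']}\n\n> {config['description']}\n"

print(script_tag)
print(llms_txt)
```

The real scaffold script generates all seven layers; the point here is just that each layer is cheap, plain-text machine signal derived from one source of truth.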
Includes an audit script that scores any URL 0–100 across all seven layers, and a scaffold script that generates the full stack from a YAML config. Works as an agent skill for Claude Code, OpenClaw, Codex CLI, Cursor — or standalone via the Python scripts.
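The audit's scoring idea can be sketched as equal-weight checks, one per layer, summed to 0–100. The actual script's weighting and detection logic may differ; this is just the shape of it:

```python
# Hypothetical layer names; the real audit script's identifiers may differ.
LAYERS = [
    "meta_tags", "json_ld", "sr_only", "microdata",
    "llms_txt", "reasoning_json", "ai_manifest",
]

def score(detected: dict) -> int:
    """Return a 0-100 score from a {layer_name: bool} detection map."""
    found = sum(bool(detected.get(layer)) for layer in LAYERS)
    return round(100 * found / len(LAYERS))

# Example: a site with only llms.txt and JSON-LD deployed
print(score({"llms_txt": True, "json_ld": True}))  # → 29
```

Equal weighting is an assumption; a production audit might weight signed reasoning.json claims higher than meta tags, since they are harder to fake.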
Based on the phantomauthority.ai experiment by Sascha Deforth, who demonstrated that RAG systems perform no content provenance verification and published the fix as an IETF Internet-Draft. We use it honestly — backing every layer with real content and real claims — for AI visibility work with businesses in Phnom Penh.
Repo: https://github.com/blkfoxco/geo-ghost-stack-skill
Live example: https://openclawphnompenh.com (llms.txt, JSON-LD, meta layers deployed)
Original experiment: https://phantomauthority.ai
ARP Protocol: https://arp-protocol.org