Guide
See adjacent URLs without spinning up a bespoke crawler
Beyond the primary fetch, MentionVox can extend a limited crawl within the same registrable domain, so teams spot systemic issues that are not visible on one landing page alone.
Crypto teams ship dozens of localized paths: wallets, docs, status pages, and campaign landings. Evaluating only the homepage misleads leadership.
The snapshot surfaces site crawl stats only after reachability succeeds, preventing noisy secondary reads while TLS failures remain unresolved.
Each crawl obeys robots.txt, honors rel=nofollow by listing skipped targets without fetching them, and respects runtime and depth budgets tuned for fast turnaround.
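MentionVox's crawler is internal, but the robots.txt gate described above can be sketched with Python's standard urllib.robotparser. The MentionVoxBot user-agent token and the example rules below are assumptions for illustration, not documented behavior.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical user-agent token; the snapshot crawler's real token is not documented here.
USER_AGENT = "MentionVoxBot"

def build_robots_gate(robots_txt: str):
    """Parse a robots.txt body and return a callable that gates each candidate URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return lambda url: parser.can_fetch(USER_AGENT, url)

# Illustrative rules: block /internal/ for every agent.
gate = build_robots_gate("User-agent: *\nDisallow: /internal/\n")
print(gate("https://example.com/docs/"))       # True
print(gate("https://example.com/internal/x"))  # False
```

A gate built once per host keeps the budget check cheap: each deeper GET is a single function call rather than a re-parse of the robots file.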
Entry URL selection matters more than teams expect: submitting a campaign microsite yields crawl stats isolated to that hostname, whereas submitting docs.example.com explores documentation subgraphs first.
Internal linking patterns influence discovery order within the crawl's limits - MentionVox surfaces counts so information architects notice orphaned compliance clusters quickly.
When the crawl truncates early due to caps, you still retain partial evidence across templates instead of guessing blindly from one hero landing.
Signals MentionVox exposes
The readout summarizes scope with counts of pages fetched versus limits, lists nofollow discoveries, and explains truncation as the result of policy rather than leaving you to guess.
- A registrable-domain anchor derived from your submitted URL, so unrelated properties stay excluded automatically.
- Pages-fetched and depth indicators, plus explicit messaging when limits stop expansion early.
- Nofollow links enumerated for reviewers even though they were not retrieved, preserving traceability for aggressive linking strategies.
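The registrable-domain anchoring behind the scope counts can be sketched as below. A production implementation would consult the full Public Suffix List; this sketch assumes a tiny hard-coded suffix set purely for illustration.

```python
from urllib.parse import urlsplit

# Stand-in suffix set; a real crawler would load the full Public Suffix List.
KNOWN_SUFFIXES = {"com", "org", "io", "co.uk"}

def registrable_domain(url: str) -> str:
    """Return the registrable domain (eTLD+1) for a URL, under the suffix set above."""
    host = urlsplit(url).hostname or ""
    labels = host.lower().split(".")
    # Find the first known public suffix, then keep one extra label before it.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in KNOWN_SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return host

def same_site(candidate: str, anchor: str) -> bool:
    """True when both URLs share a registrable domain, so the candidate stays in scope."""
    return registrable_domain(candidate) == registrable_domain(anchor)

print(same_site("https://docs.example.com/a", "https://example.com/"))  # True
print(same_site("https://other.com/", "https://example.com/"))          # False
```

Anchoring on the registrable domain rather than the exact hostname is what lets docs.example.com and example.com count as one property while other.com stays excluded.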
How the crawl expands in practice
The fetch of your submitted URL succeeds first; then frontier expansion queues same-domain anchors discovered in HTML href attributes.
Robots directives are evaluated before each deeper GET, preventing wasted budget on paths engineering already blocked intentionally.
rel=nofollow destinations are enqueued only into the review list - they never consume depth budget beyond being listed.
When limits trigger partial completion, the UI states whether time budgets, depth budgets, or page counts fired, so owners know which knob to tune internally.
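The expansion steps above can be sketched as a bounded breadth-first walk. This is an illustrative model over an in-memory link map, not MentionVox's implementation; the budget names, the `allowed` robots hook, and the return shape are assumptions.

```python
from collections import deque

def bounded_crawl(start, links, *, max_pages=5, max_depth=2, allowed=lambda u: True):
    """Breadth-first expansion over a link map {url: [(href, is_nofollow), ...]}.

    Returns fetched pages, nofollow targets listed but never fetched, and why we stopped.
    A real crawler would also check a wall-clock budget on each iteration.
    """
    fetched, nofollow_listed = [], []
    seen = {start}
    frontier = deque([(start, 0)])
    stop_reason = "frontier exhausted"
    while frontier:
        if len(fetched) >= max_pages:
            stop_reason = "page budget reached"
            break
        url, depth = frontier.popleft()
        fetched.append(url)  # the "GET" in this sketch is just a dict lookup
        if depth >= max_depth:
            continue  # depth budget reached for this page; do not expand further
        for href, is_nofollow in links.get(url, []):
            if href in seen:
                continue
            seen.add(href)
            if is_nofollow:
                nofollow_listed.append(href)  # recorded for reviewers, never fetched
            elif allowed(href):  # robots check before the deeper GET
                frontier.append((href, depth + 1))
    return fetched, nofollow_listed, stop_reason

graph = {
    "/": [("/docs", False), ("/partner", True)],
    "/docs": [("/docs/api", False)],
}
pages, nofollow, reason = bounded_crawl("/", graph, max_pages=2)
print(pages, nofollow, reason)
```

With a page budget of 2, the walk fetches "/" and "/docs", lists "/partner" without fetching it, and reports the page budget as the stop reason - the same shape of evidence the readout surfaces.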
Why limits exist
Snapshots aim for roughly minute-long turnaround, so crawl ceilings prevent runaway jobs across enterprise-sized exchanges.
When multi-page crawl remains disabled for infrastructure reasons, the UI states that plainly rather than implying coverage you did not receive.
Caps prioritize deterministic runtime over exhaustive spidering - rerun focused snapshots on secondary URLs when your architecture spans dozens of properties.
Limits also protect shared infrastructure during traffic spikes, so interactive users remain unaffected while snapshot queues drain fairly.
How to interpret low fetch counts
When the crawl truncates, compare fetched pages against your internal sitemap coverage manually - MentionVox highlights the nofollow inventory so you can see intentionally suppressed edges.
If robots blocks dominate the logs, collaborate with whoever maintains governance templates before blaming weak GEO narratives.
Pair crawl findings with structured-data signals from the same snapshot, so marketing understands whether the problem is crawl breadth or messaging depth.
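One way to do the manual sitemap comparison is to parse your sitemap's loc entries with the standard library and diff them against the fetched-page list from the readout. The sitemap content and fetched set below are illustrative.

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text: str) -> set:
    """Extract every <loc> entry from a sitemap document."""
    root = ET.fromstring(xml_text)
    return {loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")}

# Illustrative sitemap; in practice, fetch your site's real sitemap.xml.
sitemap = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/docs</loc></url>
  <url><loc>https://example.com/status</loc></url>
</urlset>"""

# The pages the truncated crawl actually reached, copied from the readout.
fetched = {"https://example.com/", "https://example.com/docs"}
missed = sitemap_urls(sitemap) - fetched
print(sorted(missed))  # pages the truncated crawl never reached
```

The set difference is the gap list to review: pages in your sitemap that the bounded crawl never touched, before anyone concludes the content itself is weak.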
Related pages
Jump between product notes without hunting the footer.
GEO for crypto and Web3
DeFi, wallets, infra - how MentionVox scores GEO for crypto brands versus generic SEO talk.
Free GEO snapshot guide
The real form flow - URL, buyer-style query, submit - and how to read each signal fast.
JSON-LD for AI search
Why Schema.org markup changes whether assistants cite your entity facts accurately.
AI crawler hygiene
robots.txt AI bot rows and HTTP headers such as Content-Signal - what the snapshot hygiene panel summarizes.
Site crawl guide
Bounded same-domain crawl from your URL, respecting robots and rel=nofollow trade-offs.
Full GEO audit
Crypto checkout, deeper automated deliverables, and PDF plus Markdown exports after payment.