Point it at your site
Paste the public origin. The scanner fetches llms.txt, robots.txt, sitemap.xml, and a handful of HTML pages to compare signals.
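The fetch step described above can be sketched in a few lines. This is a minimal illustration, not the scanner's actual implementation — the paths come from the description above, and the `fetch_discovery_files` helper and its injectable `get` parameter are hypothetical names chosen for the example:

```python
from typing import Callable, Optional
import urllib.request
import urllib.error

# Discovery files the scan reads first (paths from the description above).
DISCOVERY_PATHS = ["/llms.txt", "/llms-full.txt", "/robots.txt", "/sitemap.xml"]

def _http_get(url: str) -> Optional[str]:
    """Fetch a URL; None marks a path that does not resolve."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, ValueError):
        return None

def fetch_discovery_files(
    origin: str, get: Callable[[str], Optional[str]] = _http_get
) -> dict:
    """Map each discovery path to its body, or None if it failed to resolve."""
    base = origin.rstrip("/")
    return {path: get(base + path) for path in DISCOVERY_PATHS}
```

The injectable `get` keeps the function testable without network access.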
Focused tool
A focused pass over the files AI assistants look for first — /llms.txt, /llms-full.txt, robots policy, sitemap alignment, and markdown negotiation — powered by the same read-only scan behind every isitready.dev report.
What gets checked
The scan reads the discovery files first, then confirms they agree with the HTML and structured data on the canonical origin.
/llms.txt (index): A short, agent-readable map of the surfaces an AI assistant should inspect first — docs, tools, policies, and canonical URLs.
/llms-full.txt (detail): The long-form variant with link summaries and per-page context. We check whether it exists and whether it agrees with the short index.
/robots.txt (AI crawler policy): How robots.txt treats GPTBot, ClaudeBot, Google-Extended, and friends — and whether the policy lets the discovery paths through.
/sitemap.xml (alignment): Whether sitemap.xml and the llms.txt index point at the same canonical URLs, so discovery sources agree on what matters.
Accept: text/markdown (content negotiation): Whether pages return clean markdown via Accept headers or .md variants — the cheapest quote-ready context for assistants.
<link rel="canonical"> (metadata): Agreement between canonical URLs, titles, and descriptions across the HTML head, sitemap, and structured data, so the signals stop contradicting each other.
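The robots.txt check above can be approximated with the standard library's parser. A sketch under stated assumptions: the agent names come from the description above, `ai_policy_report` is a hypothetical name, and a real scanner would handle more agents and edge cases:

```python
from urllib.robotparser import RobotFileParser

# AI crawlers named in the check above.
AI_AGENTS = ["GPTBot", "ClaudeBot", "Google-Extended"]
DISCOVERY_PATHS = ["/llms.txt", "/llms-full.txt", "/sitemap.xml"]

def ai_policy_report(robots_txt: str, origin: str = "https://example.com") -> dict:
    """For each AI crawler, report which discovery paths robots.txt lets through."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {
        agent: {
            path: parser.can_fetch(agent, origin + path)
            for path in DISCOVERY_PATHS
        }
        for agent in AI_AGENTS
    }
```

A policy that disallows /llms-full.txt for GPTBot, for example, would show up as `False` in that agent's row while ClaudeBot falls through to the `*` group.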
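The markdown-negotiation check above has two probes: ask for `text/markdown` via the Accept header, and try a `.md` variant of the URL. A minimal sketch — `markdown_negotiation` and the injectable `probe` parameter are hypothetical names, and the response is judged only by its Content-Type:

```python
from typing import Callable, Optional
import urllib.request

def _probe_content_type(url: str, accept: Optional[str] = None) -> Optional[str]:
    """Return the response's content type, or None on any request failure."""
    headers = {"Accept": accept} if accept else {}
    req = urllib.request.Request(url, headers=headers)
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.headers.get_content_type()
    except OSError:
        return None

def markdown_negotiation(
    url: str, probe: Callable[..., Optional[str]] = _probe_content_type
) -> dict:
    """True where the page serves markdown via the Accept header or a .md variant."""
    return {
        "accept_header": probe(url, "text/markdown") == "text/markdown",
        "md_variant": probe(url.rstrip("/") + ".md") == "text/markdown",
    }
```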
How it runs
Start from the public origin. The scanner runs a read-only pass, fetching llms.txt, robots.txt, sitemap.xml, and a handful of HTML pages to compare signals.
The AI Readiness section of the report surfaces each file, its evidence row, and whether the discovery path resolves cleanly.
Copy the priority fixes or agent-ready prompts into your tracker — no guessing what to change, no invented metrics.
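The sitemap-versus-llms.txt comparison the report draws on can be sketched as a set comparison. Assumptions: llms.txt uses standard markdown links, sitemap.xml is a plain urlset, and the function names are illustrative:

```python
import re
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_xml: str) -> set:
    """Collect <loc> entries from a sitemap urlset."""
    root = ET.fromstring(sitemap_xml)
    return {loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc") if loc.text}

def llms_txt_urls(llms_txt: str) -> set:
    """Collect the targets of markdown links: [title](https://...)."""
    return set(re.findall(r"\]\((https?://[^)\s]+)\)", llms_txt))

def alignment(llms_txt: str, sitemap_xml: str) -> dict:
    """Split URLs into shared and one-sided buckets so disagreements stand out."""
    llms, sm = llms_txt_urls(llms_txt), sitemap_urls(sitemap_xml)
    return {
        "shared": sorted(llms & sm),
        "only_in_llms_txt": sorted(llms - sm),
        "only_in_sitemap": sorted(sm - llms),
    }
```

Anything in the one-sided buckets is a candidate for the evidence rows: a URL the index promotes but the sitemap omits, or vice versa.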
Ready when you are
The full report surfaces every llms.txt-relevant signal alongside the rest of the AI readiness, SEO, security, and performance evidence — same read-only flow, no setup.