Bot
IsItReadyBot is a user-directed scan agent.
If IsItReadyBot turned up in your access logs, a human pasted your URL into isitready.dev. Every request is user-initiated, signed with Ed25519 per the Web Bot Auth IETF draft, and follows the predictable behaviour described below. This page answers 'who is this and what does it want?'
User-Agent
The exact string we send.
IsItReadyBot uses the modern hybrid pattern adopted by Googlebot, Bingbot, OAI-SearchBot, and ClaudeBot — a real Chrome user-agent envelope with our bot identifier appended after a "compatible;" token, plus a discovery URL pointing back to this page. The Chrome major version is kept current via a published sync script, so the envelope tracks the real Chrome stable channel rather than rotting in place.
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/148.0.0.0 Safari/537.36; compatible; IsItReadyBot/1.0; +https://isitready.dev/bot
Match the substring IsItReadyBot in your access logs to identify scan traffic. We do not rotate the bot token across variants — there is exactly one IsItReadyBot.
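Because the token never rotates, log filtering reduces to one literal match. A minimal sketch in Python (the sample log line is illustrative):

```python
import re

# One token, never rotated, so a single substring is enough; including the
# discovery URL makes the match stricter against accidental collisions.
BOT_RE = re.compile(r"IsItReadyBot/1\.0; \+https://isitready\.dev/bot")

def is_isitready_scan(user_agent: str) -> bool:
    return BOT_RE.search(user_agent) is not None

ua = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/148.0.0.0 Safari/537.36; compatible; "
      "IsItReadyBot/1.0; +https://isitready.dev/bot")
print(is_isitready_scan(ua))  # → True
```

The same literal works as a grep pattern or a WAF rule; only the escaping changes.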
Behaviour
What we promise about every scan.
- User-initiated only
- IsItReadyBot fires exactly once per scan a human asked for. There is no recurring crawl, no background queue, no link-walking. If your domain receives a request, somebody pasted your URL into isitready.dev and clicked Start scan.
- Single round trip per page
- A scan fetches your origin once for the home document, then a small fixed set of public discovery files (robots.txt, llms.txt, sitemap.xml, .well-known/* where relevant). It does not render JavaScript, follow internal links, or simulate user sessions.
- Self-throttles on errors
- Any 4xx or 5xx response from your origin is treated as a deliberate signal — the scan reports it, caches the failure briefly, and stops fetching. Sustained errors are not retried in tight loops.
- Honest identity, never spoofed
- The User-Agent always contains "IsItReadyBot/1.0" and a discovery URL pointing back to this page. We never imitate Googlebot, ClaudeBot, GPTBot, or any other crawler. Spoofing established bots is something we audit other sites against, not something we do.
- No personal data collected
- We read public HTTP responses only — headers, HTML head, robots.txt, llms.txt, sitemap, structured-data blocks. No form submission, no cookie acceptance, no authenticated areas. The scanner has no credentials to your site and never will.
- Cached findings, short-lived
- Successful reports are cached for a few hours so a refresh is cheap; recent failures are cached for a few minutes so we do not hammer a struggling origin. Cache windows are documented in our methodology and obey the principle that fresh evidence beats stale evidence.
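The single-round-trip promise above implies a fixed, enumerable fetch set. A sketch of what that set looks like; the specific .well-known entry is an illustrative assumption, since the page only says "where relevant":

```python
from urllib.parse import urljoin

# Fixed public discovery files; no link-walking beyond this set.
DISCOVERY_PATHS = [
    "/robots.txt",
    "/llms.txt",
    "/sitemap.xml",
    "/.well-known/security.txt",  # illustrative .well-known entry, an assumption
]

def scan_targets(origin: str) -> list[str]:
    """One fetch for the home document, then the fixed discovery set."""
    home = origin.rstrip("/") + "/"
    return [home] + [urljoin(home, p) for p in DISCOVERY_PATHS]

print(scan_targets("https://example.com"))
```

The point of the sketch is the shape, not the exact paths: the target list is computed up front from the origin, so there is nothing for a scan to discover and follow.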
Robots.txt
Opt out, throttle, or explicitly allow.
IsItReadyBot is user-initiated, so most robots.txt directives do not formally bind a scan triggered by a human visiting our site. We honour them anyway as a courtesy — if your robots.txt disallows our token, our scanner reports a notice and skips fetching. If you want to make the policy explicit either way, use the standard syntax:
To block IsItReadyBot:
User-agent: IsItReadyBot
Disallow: /
To explicitly allow (the default):
User-agent: IsItReadyBot
Allow: /
Why a browser-shaped User-Agent
To audit honestly, we have to render honestly.
isitready.dev measures how a modern AI agent or search crawler will experience your site. Almost every meaningful crawler on the web in 2026 — GPTBot, ClaudeBot, OAI-SearchBot, Googlebot, Bingbot, PerplexityBot — uses a real Chrome envelope with their bot identifier appended. We mirror that shape so the audit reflects what your visitors will actually see, while keeping an unambiguous self-identification token so sysadmins can grep us out of logs in one regex.
We never strip the bot identifier to disguise the scan as an ordinary browser. Audit results are only useful if the audit is honest about who ran it.
Classification
Signed Agent, not crawler.
Per Cloudflare's bot taxonomy, IsItReadyBot is a Signed Agent — a request dispatched by an end user's explicit action, not by a background crawler our team operates. The same category covers ChatGPT-User and Claude-User: a human asks, the agent fetches once, the loop ends. There is no scheduler running on our side that could ever request your domain unprompted.
Verification
How to confirm a request is really us.
IsItReadyBot supports Web Bot Auth — the IETF draft built on RFC 9421 HTTP Message Signatures. Outbound scan fetches carry a Signature, Signature-Input, and Signature-Agent header. The signature is verifiable against our public Ed25519 key, published as a JWKS document at /.well-known/http-message-signatures-directory. Cloudflare, AWS WAF, Akamai, and Stytch already verify these signatures at the edge. If your stack does too, a signed IsItReadyBot request can pass verified-bot rules without an IP allowlist.
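Before checking the Ed25519 signature itself, an RFC 9421 verifier reconstructs the signature base from the covered components. A simplified sketch of that step; the component set and parameters here are illustrative, not the exact ones IsItReadyBot signs:

```python
def build_signature_base(components, sig_params, headers, derived):
    """Rebuild the RFC 9421 signature base the signer committed to.

    components: ordered covered component names, e.g. ["@authority", "signature-agent"]
    sig_params: raw parameter string from Signature-Input, after the component list
    headers:    lowercase header name -> value
    derived:    values for @-prefixed derived components (@method, @authority, ...)
    """
    lines = []
    for name in components:
        value = derived[name] if name.startswith("@") else headers[name].strip()
        lines.append(f'"{name}": {value}')
    covered = " ".join(f'"{c}"' for c in components)
    lines.append(f'"@signature-params": ({covered});{sig_params}')
    return "\n".join(lines)

base = build_signature_base(
    ["@authority", "signature-agent"],
    'created=1735689600;keyid="test-key"',
    {"signature-agent": '"https://isitready.dev"'},
    {"@authority": "example.com"},
)
print(base)
```

A real verifier then checks the signature bytes from the Signature header against this base using the Ed25519 key fetched from the JWKS directory; edge implementations handle the structured-field parsing and key caching that this sketch omits.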
Lower-tech checks still work: match IsItReadyBot/1.0 in the User-Agent and confirm the discovery URL points to https://isitready.dev/bot. Worker egress comes from Cloudflare's shared Workers IP pool, so IP allowlisting is not a reliable verification method: our requests are not separable from other Cloudflare-routed traffic by IP alone.
A machine-readable summary of all the above is available at /bot.json — bot identity, behaviour, contact, and Web Bot Auth status.
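A consumer-side sketch of what parsing that summary might look like; the field names below are assumptions inferred from the description above, not the published schema:

```python
import json

# Hypothetical /bot.json payload: field names are illustrative assumptions.
payload = json.loads("""
{
  "name": "IsItReadyBot",
  "version": "1.0",
  "behaviour": {"user_initiated": true, "renders_javascript": false},
  "contact": "abuse@kordu.gg",
  "web_bot_auth": {"enabled": true, "algorithm": "Ed25519"}
}
""")

print(payload["name"], payload["version"])
```

Fetching the real document and keying automation off its actual fields is the intended use; treat the shape above as a placeholder until you have read the live file.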
Contact
Questions, blocks, complaints.
If our scanner misbehaved on your site, or you want to ask why it showed up at all, email abuse@kordu.gg — please include the affected domain so we can correlate quickly. For security disclosures use security@kordu.gg. We respond to every legitimate report.