A robots.txt file tells search engine crawlers which pages or directories they may crawl and which they should stay out of. It lives at the root of your domain (e.g. yourdomain.com/robots.txt). Misconfigurations, such as accidentally blocking your entire site with 'Disallow: /', are among the most common causes of sudden ranking drops and crawl failures.
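To make the syntax concrete, here is a minimal sketch of a valid robots.txt; the /private/ path and sitemap URL are hypothetical placeholders:

```
# Applies to every crawler
User-agent: *
# Block one directory; everything else stays crawlable
Disallow: /private/

Sitemap: https://yourdomain.com/sitemap.xml
```

Changing that Disallow line to 'Disallow: /' is the misconfiguration described above: a single character that blocks the entire site for every crawler.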
Enter your domain into this Robots Tester tool. It fetches your live robots.txt, parses all User-agent groups and Disallow directives, and shows you exactly which bots are blocked and which URLs are affected. You can also use the robots.txt report in Google Search Console (Settings → robots.txt) to see when Google last fetched your file and whether it parsed without errors.
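If you'd rather script the same check, Python's standard library ships a robots.txt parser. A minimal sketch, assuming example.com stands in for your domain and the test URL and user-agents are purely illustrative:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the live robots.txt (domain is a placeholder).
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Ask whether each crawler may fetch a given URL.
for agent in ("Googlebot", "GPTBot"):
    url = "https://example.com/private/page"
    verdict = "allowed" if rp.can_fetch(agent, url) else "blocked"
    print(f"{agent}: {verdict}")
```

One caveat: urllib.robotparser implements the original robots exclusion draft, so wildcard patterns and precedence edge cases may be handled differently than by Google's own parser.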
Generally, no — you should allow AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) if you want your content to appear in AI-generated answers and generative search engines. Blocking them limits your GEO (Generative Engine Optimisation) visibility. Only block AI crawlers if you have specific content licensing concerns.
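If you do have licensing concerns, each AI crawler is excluded with its own User-agent group. A sketch of the blocking form, keeping only the groups for bots you actually want to shut out:

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Allowing them requires no directives at all: a crawler with no matching group defaults to allowed.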
robots.txt controls which pages crawlers can visit. Meta robots tags (noindex, nofollow) control what crawlers do with pages they have already fetched. A page blocked by robots.txt won't be crawled at all, but it can still appear in search results if it has inbound links. A page with 'noindex' will be crawled but kept out of the index. Use each for its purpose, and never rely on a noindex tag on a page that robots.txt blocks: a crawler can't see a tag on a page it isn't allowed to fetch.
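For reference, a meta robots tag goes in the page's <head>; for non-HTML resources such as PDFs, the equivalent is the X-Robots-Tag HTTP response header:

```
<!-- In the <head>: the page may be crawled, but it is kept out of
     the index and its links pass no signals -->
<meta name="robots" content="noindex, nofollow">
```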