Check robots.txt instantly. Verify your robots.txt file and analyze its crawler directives to ensure proper search engine access control.
• Located at /robots.txt on your domain
• Controls search engine crawler access
• Use Disallow to block paths
• Include Sitemap location
robots.txt is a plain text file that tells search engine crawlers which pages or sections of a website they may or may not crawl. It sits at the root of a website (e.g., example.com/robots.txt) and uses simple directives to control crawler behavior. A well-configured robots.txt helps manage crawler traffic and keeps crawlers out of areas that don't need to be crawled.
Our free Robots.txt Checker fetches and analyzes robots.txt files. It extracts user agents, disallow rules, allow rules, and sitemap locations. This helps verify that your robots.txt is configured correctly and accessible to search engines.
User-agent: *
Disallow: /admin/
Disallow: /private/
Allow: /public/
Sitemap: https://example.com/sitemap.xml
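A minimal sketch of how this kind of analysis might work, written in Python (illustrative only, not the checker's actual implementation; the domain is a placeholder):

import urllib.request

def analyze_robots(domain: str) -> dict:
    """Fetch /robots.txt from a domain and group its directives by type."""
    url = f"https://{domain}/robots.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace")

    result = {"user_agents": [], "disallow": [], "allow": [], "sitemaps": []}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and surrounding whitespace
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            result["user_agents"].append(value)
        elif field == "disallow" and value:
            result["disallow"].append(value)
        elif field == "allow" and value:
            result["allow"].append(value)
        elif field == "sitemap":
            result["sitemaps"].append(value)
    return result

print(analyze_robots("example.com"))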
robots.txt must be at the root of your domain (https://example.com/robots.txt); crawlers will not look for it in a subdirectory.
robots.txt is a request, not a command. Well-behaved crawlers respect it, but malicious bots can simply ignore it, so it is not a security measure; use authentication or server-side access controls to protect genuinely private content.
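To see what "well-behaved" means in practice, here is how a polite crawler might consult robots.txt before fetching a URL, using Python's standard-library parser (the URLs are placeholders matching the example above):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the file

# A polite crawler checks the verdict before requesting the page;
# nothing on the server side stops an impolite crawler from skipping this step.
print(rp.can_fetch("*", "https://example.com/admin/"))   # False if /admin/ is disallowed
print(rp.can_fetch("*", "https://example.com/public/"))  # True if /public/ is allowed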
Wildcards are supported: User-agent: * targets all crawlers, and most major search engines also understand wildcard patterns in paths, though support varies between crawlers (see the example below).
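For example, major crawlers such as Googlebot and Bingbot understand * (match any sequence of characters) and $ (match the end of the URL) in path rules; the paths below are hypothetical:

User-agent: *
Disallow: /*.pdf$
Disallow: /*?session=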
Including your sitemap location in robots.txt is recommended: a Sitemap: directive helps search engines discover your XML sitemap, even if you haven't submitted it through their webmaster tools.