Robots.txt Generator
Build a robots.txt file to control search engine crawling of your website.
Crawler Rules
Generated robots.txt
Common Bot Names
Frequently Asked Questions
What is robots.txt?
robots.txt is a text file placed at the root of your website (e.g., https://example.com/robots.txt) that tells search engine crawlers which pages or directories they should or should not access. It follows the Robots Exclusion Protocol (REP).
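The protocol is simple enough to test locally. A minimal sketch using Python's standard `urllib.robotparser` (the rules and URLs here are made up for illustration):

```python
from urllib import robotparser

# A small hypothetical robots.txt, as a list of lines.
rules = [
    "User-agent: *",
    "Disallow: /private/",
    "Allow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)  # parse the rules without fetching anything over the network

# Ask whether a generic crawler ("*") may fetch a given URL.
print(rp.can_fetch("*", "https://example.com/"))            # True
print(rp.can_fetch("*", "https://example.com/private/x"))   # False
```

In production, crawlers call `rp.set_url(...)` and `rp.read()` to fetch the live file from the site root instead of parsing a local list.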
Does robots.txt prevent indexing?
No. Disallow blocks crawling, not indexing. A page can still appear in search results if other sites link to it; Google just won't read its content. To prevent indexing, use a <meta name="robots" content="noindex"> tag or an X-Robots-Tag HTTP response header on the page itself.
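As a sketch, a page that must stay out of the index can declare this in its HTML head (the directive values are standard; the surrounding markup is illustrative):

```html
<!-- In the page's <head>: block indexing of this page -->
<meta name="robots" content="noindex">
```

For non-HTML resources such as PDFs, the server can send the equivalent HTTP response header instead, e.g. `X-Robots-Tag: noindex`. Note that the crawler must be allowed to fetch the page to see either directive, so don't also Disallow it in robots.txt.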
What is Crawl-delay?
Crawl-delay tells bots to wait N seconds between requests to reduce server load. Note: Google ignores Crawl-delay in robots.txt. To control Googlebot's crawl rate, use Google Search Console's crawl rate settings instead.
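Python's `urllib.robotparser` does honor Crawl-delay, which makes it easy to see the directive in action (the rules below are a made-up example):

```python
from urllib import robotparser

# Hypothetical robots.txt asking all bots to wait 10 seconds between requests.
rules = [
    "User-agent: *",
    "Crawl-delay: 10",
    "Disallow: /search",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# crawl_delay() returns the requested delay for a given user agent.
print(rp.crawl_delay("*"))  # 10
```

A well-behaved crawler would sleep for this many seconds between successive requests to the same host.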
Should I block my /admin directory?
Yes. Blocking /admin, /wp-admin, and similar backend URLs keeps crawlers from wasting crawl budget on login pages and avoids accidental indexing of admin interfaces. Keep in mind, though, that robots.txt is publicly readable and only restrains well-behaved bots, so it is not a security control: protect admin areas with authentication, and use robots.txt purely for crawl management.
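For illustration, a typical WordPress-style file combining these rules might look like this (the paths and sitemap URL are examples, not a prescription):

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```

The Allow line is a common exception: many WordPress themes and plugins make front-end requests to admin-ajax.php, so blocking it can break how crawlers render the public pages.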