
Robots.txt Generator

Generate robots.txt files to control search engine crawler access to your website.

robots.txt Preview
# robots.txt generated by Software Test Tips
# https://software-test-tips.com/tools/seo/robots-txt-generator

User-agent: *
Allow: /
Disallow: /admin/
Disallow: /private/

Sitemap: https://example.com/sitemap.xml
💡 Quick Guide
  • User-agent: Specifies which crawler the rules apply to (* means all crawlers)
  • Disallow: Tells crawlers not to access specific paths
  • Allow: Explicitly allows access (useful to override disallow rules)
  • Sitemap: Helps search engines find your sitemap.xml file
  • Crawl-delay: Number of seconds between requests (not supported by all crawlers)
  • Wildcards: Use * to match any sequence of characters and $ to match the end of a URL (see the example below)
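For instance, a short file combining these directives (with illustrative paths on the placeholder example.com domain) might look like this:

User-agent: *
Disallow: /private/
Allow: /private/annual-report.html
Disallow: /*.pdf$
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml

Here the Allow line keeps /private/annual-report.html crawlable even though the rest of /private/ is blocked, /*.pdf$ matches any URL ending in .pdf, and the Crawl-delay applies only to crawlers that honor it.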

What is Robots.txt Generator?

Robots.txt Generator is a free online tool that creates robots.txt files to control how search engine crawlers access your website. It provides an intuitive interface for specifying which pages or directories crawlers may access, adding sitemap references, and configuring crawler-specific rules.

Common Use Cases

SEO Management

Control which pages search engines can crawl and index on your website.

Private Content

Block crawlers from accessing admin areas, staging environments, or sensitive directories.
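As an illustration, assuming a WordPress-style site with a staging copy under /staging/, the rules might look like this (the Allow line is a common WordPress pattern that keeps admin-ajax.php reachable):

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /staging/

Keep in mind that this only asks crawlers to stay out; it is not access control (see the FAQ below).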

Crawl Budget

Optimize crawler access to ensure important pages are prioritized.

Duplicate Content

Prevent crawling of duplicate or low-value pages that could waste crawl budget and dilute your SEO.
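For example, the following hypothetical rules block session and sort parameters plus printer-friendly copies, while leaving the canonical pages crawlable:

User-agent: *
Disallow: /*?sessionid=
Disallow: /*?sort=
Disallow: /*/print$

The parameter names here are placeholders; adjust them to match the duplicate URL patterns your site actually generates.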

How to Use This Tool

  1. Choose a preset (Allow All, Block All, or Common Rules) or start from scratch
  2. Add custom rules for specific user agents and paths (see the example after these steps)
  3. Configure your sitemap URL and optional crawl delay
  4. Preview the generated robots.txt in real-time
  5. Copy or download the file and upload to your website root
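For example, a file produced in step 2 might pair a catch-all group with stricter rules for one named crawler (ExampleBot is a placeholder; substitute the real user-agent token you want to target):

User-agent: *
Allow: /
Disallow: /admin/

User-agent: ExampleBot
Disallow: /

Sitemap: https://example.com/sitemap.xml

A crawler follows the most specific User-agent group that matches it, so ExampleBot here ignores the * rules and is blocked from the entire site.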

Frequently Asked Questions

Where do I put robots.txt?
The robots.txt file must be placed in the root directory of your website (e.g., https://example.com/robots.txt). It will not work in subdirectories.

Is robots.txt a security measure?
No. Robots.txt is a request, not a restriction. Malicious bots can ignore it. Use proper authentication and access controls for sensitive content.

How long until changes take effect?
Crawlers may cache your robots.txt for up to 24 hours. Changes take effect as crawlers re-fetch the file on their next visit.

What is Crawl-delay?
Crawl-delay specifies the number of seconds a crawler should wait between requests. Note that Googlebot ignores this directive and adjusts its crawl rate automatically.
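As a sketch, a delay aimed at a crawler that honors the directive (Bing has documented support for Crawl-delay) might look like this:

User-agent: Bingbot
Crawl-delay: 10

This asks Bingbot to wait roughly 10 seconds between requests; crawlers that do not recognize the directive simply ignore the line.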