Robots.txt Generator
Stable
Generate a robots.txt file for website SEO optimization
Tool Introduction
The Robots.txt Generator provides an intuitive interface to create custom robots.txt files with visual controls. Configure search engine permissions through color-coded buttons, set crawl delays, manage restricted directories, and generate standards-compliant robots.txt files instantly.
Usage Steps
Step 1: Configure Basic Settings
- Enter your sitemap URL (optional) in the Sitemap field
- Select default robot behavior: "Allowed" or "Refused"
- Choose crawl delay: 0s, 5s, 10s, 20s, 60s, or 120s
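As a rough sketch of how these basic settings map to directives (assuming a hypothetical site with its sitemap at https://example.com/sitemap.xml, the default behavior set to "Allowed", and a 10s crawl delay), the resulting rules would look something like this; the # comments are added here for explanation only, since the tool's output omits comments:

```
User-agent: *
# An empty Disallow value permits crawling of the whole site
Disallow:
# Crawl-delay is not part of the core standard and some crawlers ignore it
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
```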
Step 2: Set Search Engine Permissions
- Click search engine buttons to cycle through 3 states:
  - Transparent (default): Follow general rules
  - Green (allowed): Explicitly allow this search engine
  - Red (refused): Block this search engine
- Configure 15+ search engines, including Google, Baidu, Yahoo, and Bing
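To sketch what the green and red states translate to (Googlebot and Baiduspider are real crawler tokens used here as examples; the exact tokens the tool emits may differ), an explicitly allowed Google and a blocked Baidu would produce rules along these lines:

```
# Green (allowed): an empty Disallow explicitly permits this crawler
User-agent: Googlebot
Disallow:

# Red (refused): "Disallow: /" blocks this crawler from the whole site
User-agent: Baiduspider
Disallow: /
```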
Step 3: Add Restricted Directories
- Enter directory paths in the "Restricted Directories" field
- Click "Add" to create directory tags
- Remove unwanted directories by clicking the X on tags
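For example, adding the hypothetical directory tags /admin/ and /private/ would surface in the generated file as Disallow entries, typically under the default rule:

```
User-agent: *
Disallow: /admin/
Disallow: /private/
```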
Step 4: Generate and Download
- Click "Generate robots.txt" to create your file
- Preview the generated content in the display box
- Use "Copy" or "Download" buttons to save your robots.txt
Feature Highlights
- Visual three-state button controls for search engines
- Support for 15+ major search engines (Google, Baidu, Yahoo, etc.)
- Interactive restricted directory management with tags
- Configurable crawl delay settings (0s to 120s)
- Optional sitemap URL integration
- Real-time robots.txt generation and preview
- One-click copy to clipboard and file download
- Clean, intuitive user interface
Output Rules
Generated robots.txt files follow the standard format and include:
- Specific user-agent rules for search engines set to "allowed" or "refused"
- Default rules (User-agent: *) based on your general settings
- Disallow entries for each restricted directory you add
- Crawl-delay values when set above 0 seconds
- Sitemap URL reference when provided
- Clean, properly formatted output without comments
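Putting these rules together, a hypothetical configuration (Google explicitly allowed, Yandex refused, all other robots allowed by default, /admin/ and /tmp/ restricted, a 10s crawl delay, and a sitemap at https://example.com/sitemap.xml) might produce a file along these lines; the # comments are added here for explanation and would not appear in the tool's output:

```
# Explicitly allowed search engine
User-agent: Googlebot
Disallow:

# Explicitly refused search engine
User-agent: Yandex
Disallow: /

# Default rule for all other crawlers, with restricted directories
User-agent: *
Disallow: /admin/
Disallow: /tmp/
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
```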
Best Practices
- Place robots.txt in your website's root directory
- Use specific paths rather than wildcards when possible (see the example after this list)
- Include your sitemap.xml URL in the robots.txt
- Test your robots.txt file regularly
- Keep rules simple and clear
- Avoid blocking important pages accidentally
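As an illustration of the advice on specific paths, a narrowly scoped Disallow is easier to reason about than a wildcard pattern; wildcards such as * and $ are honored by major crawlers like Googlebot and Bingbot but not by every bot, and they can unintentionally match pages you want indexed (the paths below are hypothetical):

```
User-agent: *
# Specific: blocks only this one directory
Disallow: /internal-reports/
# Broad wildcard (avoid unless needed): would block every URL containing "report"
# Disallow: /*report*
```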
Use Cases
- Block specific search engines while allowing others
- Protect admin and private directories from all crawlers
- Set different crawl delays for server load management
- Allow Google but block image search crawlers (see the example after this list)
- Control access to development or staging directories
- Create robots.txt for multi-language or regional sites
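For instance, the "allow Google but block image search crawlers" case maps to a pair of rules like the following (Googlebot-Image is Google's image crawler token; other engines use their own image-bot names):

```
# Google's web search crawler: allowed
User-agent: Googlebot
Disallow:

# Google's image crawler: blocked from the whole site
User-agent: Googlebot-Image
Disallow: /
```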
FAQs
What do the colored buttons mean?
Transparent buttons use default rules, green buttons explicitly allow that search engine, and red buttons block it. Click to cycle through states.
What if I don't set any search engines?
Only the default "User-agent: *" rule will be generated based on your "Default - All Robots are" setting (Allowed or Refused).
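For example, with no per-engine buttons set and the default set to "Refused", the entire generated file would typically be just:

```
User-agent: *
Disallow: /
```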
Can I add multiple restricted directories?
Yes, type each directory path and click "Add" to create tags. You can remove directories by clicking the X on each tag.
Do I need to include the sitemap?
No, the sitemap field is optional. If provided, it will be added to the end of your robots.txt file.
What happens if I set crawl delay to 0s?
No crawl-delay directive will be added to your robots.txt file when set to 0 seconds.