The robot exclusion standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention to prevent cooperating web spiders and other web robots from accessing all or part of a website which is otherwise publicly viewable. Robots are often used by search engines to categorize and archive web sites, or by webmasters to proofread source code. The standard complements Sitemaps, a robot inclusion standard for websites.
For websites with multiple sub-domains, each sub-domain must have its own robots.txt file. If example.com has a robots.txt file but a.example.com does not, the rules that apply to example.com will not apply to a.example.com.
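For example, the two files below would be fetched and applied independently, one per host:
    http://example.com/robots.txt
    http://a.example.com/robots.txt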
There is no official standards body or RFC for the robots.txt protocol. It was created by consensus in June 1994 by members of the robots mailing list (email@example.com). The parts of the site that should not be accessed are specified in a file called robots.txt in the top-level directory of the website. The robots.txt patterns are matched by simple substring comparisons, so care should be taken that patterns matching directories have the final '/' character appended; otherwise all files with names starting with that substring will match, rather than only those in the intended directory.
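For instance, assuming a hypothetical /private directory, the two patterns below behave differently:
    Disallow: /private     # also matches /private.html and /private-files/
    Disallow: /private/    # matches only URLs inside the /private/ directory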
This example keeps all robots out:
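    User-agent: *
    Disallow: /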
The next is an example that tells all crawlers not to enter four directories of a website:
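    User-agent: *
    Disallow: /cgi-bin/    # these four paths are placeholders
    Disallow: /images/
    Disallow: /tmp/
    Disallow: /private/
Any directory paths can be listed; each excluded path gets its own Disallow line.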
Example that tells a specific crawler not to enter one specific directory:
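    User-agent: BadBot    # 'BadBot' stands for the user-agent of the crawler in question
    Disallow: /private/   # the directory name is a placeholder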
Example that tells all crawlers not to enter one specific file:
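    User-agent: *
    Disallow: /directory/file.html    # the path is a placeholder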
Note that all other files in the specified directory will be processed.
Example demonstrating how comments can be used:
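    # Comments appear after the "#" symbol at the start of a line, or after a directive
    User-agent: *    # match all bots
    Disallow: /      # keep them out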
To prevent robots from accessing any page, do not use "Disallow: *", as this is not a stable standard extension. Instead, "Disallow: /" should be used.
Several major crawlers support a Crawl-delay parameter, set to the number of seconds to wait between successive requests to the same server:
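    User-agent: *
    Crawl-delay: 10    # the delay value is illustrative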
Some major crawlers support an Allow directive, which can counteract a following Disallow directive. This is useful when you disallow an entire directory but still want some HTML documents in that directory crawled and indexed. While by the standard the first matching robots.txt pattern always wins, Google's implementation differs in that it first evaluates all Allow patterns and only then all Disallow patterns. To be compatible with all robots, if you want to allow single files inside an otherwise disallowed directory, place the Allow directive(s) first, followed by the Disallow, for example:
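    User-agent: *
    Allow: /folder1/myfile.html
    Disallow: /folder1/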
This example disallows anything in /folder1/ except /folder1/myfile.html, since the Allow pattern matches first. In the case of Google, though, the order is not important.
The first version of the Robot Exclusion standard does not mention the "*" character in the Disallow: statement. Modern crawlers like Googlebot and Slurp recognize strings containing "*", while MSNbot and Teoma interpret it in different ways.
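For instance, a pattern such as the one below is treated as a wildcard by crawlers that expand "*", but may be read as a literal substring by others (the path is a placeholder):
    User-agent: *
    Disallow: /private*/    # intended to match /private1/, /private-files/, and similar paths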