  require WWW::RobotRules;
  my $robotsrules = WWW::RobotRules->new('MOMspider/1.0');

  use LWP::Simple qw(get);

  {
    my $url = "http://some.place/robots.txt";
    my $robots_txt = get $url;
    $robotsrules->parse($url, $robots_txt) if defined $robots_txt;
  }

  {
    my $url = "http://some.other.place/robots.txt";
    my $robots_txt = get $url;
    $robotsrules->parse($url, $robots_txt) if defined $robots_txt;
  }

  # Now we are able to check if a URL is valid for those servers
  # that we have obtained and parsed "robots.txt" files for.
  if ($robotsrules->allowed($url)) {
      $c = get $url;
      ...
  }
The parsed file is kept in the WWW::RobotRules object, and this object provides methods to check whether access to a given URL is prohibited. The same WWW::RobotRules object can be used to parse multiple robots.txt files.
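As a minimal sketch (the host names and rule contents below are invented for illustration), a single object can accumulate rules for several servers and answer queries about each:

  require WWW::RobotRules;

  my $rules = WWW::RobotRules->new('MOMspider/1.0');

  # Hypothetical robots.txt contents; in practice they would be
  # fetched from each server, as in the synopsis above.
  $rules->parse("http://example.com/robots.txt",
                "User-agent: *\nDisallow: /private/\n");
  $rules->parse("http://example.org/robots.txt",
                "User-agent: *\nDisallow: /\n");

  print $rules->allowed("http://example.com/page.html") ? "yes\n" : "no\n"; # yes
  print $rules->allowed("http://example.com/private/x") ? "yes\n" : "no\n"; # no
  print $rules->allowed("http://example.org/anything")  ? "yes\n" : "no\n"; # no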
new()

This is the constructor for WWW::RobotRules objects. The argument given to new() is the name of the robot.

parse()

The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the contents of the file.

allowed()

Returns TRUE if this robot is allowed to retrieve the given URL.
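For illustration, a sketch of a direct parse() call with a made-up URL and made-up contents:

  my $robots_url = "http://example.net/robots.txt";     # illustrative URL
  my $content    = "User-agent: *\nDisallow: /tmp/\n";  # illustrative contents

  # The rules are recorded under the host and port taken from $robots_url.
  $robotsrules->parse($robots_url, $content);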
The file consists of one or more records separated by one or more blank lines. Each record contains lines of the form
  <field-name>: <value>
The field name is case insensitive. Text after the '#' character on a line is ignored during parsing; this is used for comments. The following <field-names> can be used:

User-Agent

The value of this field is the name of the robot the record describes the access policy for. If the value is '*', the record describes the default access policy for any robot that has not matched any of the other records.

Disallow

The value of this field specifies a partial URL that is not to be visited. This can be a full path or a partial path; any URL that starts with this value will not be retrieved.
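As a sketch (the record and URL below are invented), case and trailing comments make no difference to the parser:

  my $txt = "USER-AGENT: *       # field names match case-insensitively\n"
          . "Disallow: /secret/  # text after '#' is ignored\n";
  $robotsrules->parse("http://example.com/robots.txt", $txt);

  $robotsrules->allowed("http://example.com/secret/page"); # FALSE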
The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/" or "/tmp/":

  # robots.txt for http://www.site.com/

  User-agent: *
  Disallow: /cyberworld/map/ # This is an infinite virtual URL space
  Disallow: /tmp/ # these will soon disappear
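Assuming $robots_txt holds the file above and has been fed to parse(), a sketch of the resulting policy for a robot matched by the '*' record (the paths are illustrative):

  $robotsrules->parse("http://www.site.com/robots.txt", $robots_txt);

  $robotsrules->allowed("http://www.site.com/cyberworld/map/area51"); # FALSE
  $robotsrules->allowed("http://www.site.com/tmp/scratch.html");      # FALSE
  $robotsrules->allowed("http://www.site.com/welcome.html");          # TRUE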
This example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/", except the robot called "cybermapper":
  # robots.txt for http://www.site.com/

  User-agent: *
  Disallow: /cyberworld/map/ # This is an infinite virtual URL space

  # Cybermapper knows where to go.
  User-agent: cybermapper
  Disallow:
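Assuming $robots_txt holds this file, a sketch of how the policy differs by robot name:

  # Any robot other than cybermapper is kept out of /cyberworld/map/.
  my $any = WWW::RobotRules->new('MOMspider/1.0');
  $any->parse("http://www.site.com/robots.txt", $robots_txt);
  $any->allowed("http://www.site.com/cyberworld/map/x");    # FALSE

  # The robot named cybermapper matches the second record, whose
  # empty Disallow permits everything.
  my $mapper = WWW::RobotRules->new('cybermapper');
  $mapper->parse("http://www.site.com/robots.txt", $robots_txt);
  $mapper->allowed("http://www.site.com/cyberworld/map/x"); # TRUE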
This example indicates that no robots should visit this site further:
  # go away
  User-agent: *
  Disallow: /
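Assuming the file above was retrieved from http://www.site.com/robots.txt and parsed, every URL on that host is then refused:

  $robotsrules->parse("http://www.site.com/robots.txt", $robots_txt);

  $robotsrules->allowed("http://www.site.com/");         # FALSE
  $robotsrules->allowed("http://www.site.com/any/path"); # FALSE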