*  Doing a HEAD on an ISMAP link causes a 500 server error. A GET
   would probably work better for this.

*  Add a mechanism for entering authentication data, so that
   password-protected areas can be checked as well. Possibly allow
   multiple username/password pairs. Preferably the passwords should
   not be entered on the command line.

*  Obey robot rules. My current idea is to obey robot rules by default
   on all 'external' requests, and to ignore them on 'local'
   requests. Both defaults should be changeable.
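The default policy above could be sketched as follows (an illustrative
Python fragment, not Checkbot's actual Perl code; the set of local
hosts is a made-up stand-in for however Checkbot decides which servers
are 'local'):

```python
from urllib.parse import urlparse

# Hypothetical set of hosts treated as 'local' for this sketch.
LOCAL_HOSTS = {"www.example.org"}

def must_obey_robots(url, obey_external=True, obey_local=False):
    """Default policy: obey robot rules on external requests only.

    Both defaults are keyword arguments so they can be changed."""
    host = urlparse(url).hostname
    is_local = host in LOCAL_HOSTS
    return obey_local if is_local else obey_external
```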

*  Make it possible to convert http:// references to file://
   references, so that a local server can be checked without going
   through the WWW server.
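The rewrite itself is a simple prefix substitution. A minimal sketch
(illustrative Python; the host name and document root are assumptions,
since the real mapping would have to be configurable):

```python
from urllib.parse import urlparse

def http_to_file(url, server_host, doc_root="/var/www/html"):
    """Rewrite an http:// URL on our own server to a file:// URL,
    so the file can be checked without going through the WWW server."""
    p = urlparse(url)
    if p.scheme == "http" and p.hostname == server_host:
        return "file://" + doc_root + p.path
    return url  # leave URLs on other servers alone
```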

*  Keep state between runs, but make sure we are still able to run
   Checkbot on several areas concurrently. Uses for state information:
   a list of consistently bad hosts, remembering previously bad links
   so that a `quick' option can recheck just those, and reporting on
   hosts which keep timing out.

*  Retry time-out problem links after checking all other links to
   better deal with transient problems. See above as well.

*  Parse client-side (and server-side) image maps if possible.

*  Maybe use a Netscape feature to open problem links in a new
   browser window, so that the problem links page remains visible and
   available. Frames? (*shudder*)

*  Include (or link to) a page which contains explanations for the
   different error messages. (But watch out for server-specific
   messages, if any)

*  The external link count is way off. Write code to build the
   external queue first, and then run through it to actually check the
   links.
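The two-pass idea could look roughly like this (an illustrative Python
sketch, assuming `check` is some callable that tests one link; dedupe
the queue first so the count matches the links actually checked):

```python
def check_external(queue, check):
    """Pass 1: dedupe the collected external queue.
    Pass 2: check each unique link exactly once.
    The accurate count is simply len() of the result."""
    unique = sorted(set(queue))
    return {url: check(url) for url in unique}
```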

*  Add an option to indicate which error codes can be ignored by
   Checkbot's reports.

*  Keep an internal list of hosts to which we cannot connect, so that
   we avoid being stalled a while for each link to that host.
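Such a list might look like this minimal sketch (illustrative Python;
the failure threshold is an arbitrary choice for the example):

```python
from collections import Counter

class DeadHostList:
    """Remember hosts we cannot connect to, so that further links to
    them can be skipped instead of stalling on each one."""

    def __init__(self, max_failures=3):  # threshold is arbitrary
        self.max_failures = max_failures
        self.failures = Counter()

    def record_failure(self, host):
        self.failures[host] += 1

    def should_skip(self, host):
        return self.failures[host] >= self.max_failures
```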

*  The exclude option is somewhat confusing, in that links matching
   this option will still be checked (only the links on those pages
   are really excluded). Maybe add a new option to ignore matching
   links entirely? Or just rework the current options.

*  Add an option to count hops instead of using match, and only
   follow links that many hops away? Suggested for single-page
   checking, but might be useful on a larger scale as well, for
   instance against servers that create recursive symlinks by
   accident.
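Hop counting amounts to a depth-limited breadth-first walk, which also
bounds the damage from accidentally recursive link structures. A
minimal sketch (illustrative Python; `links_of` stands in for whatever
extracts the links from a page):

```python
from collections import deque

def crawl_by_hops(start, links_of, max_hops):
    """Visit pages at most max_hops links away from the start page.
    Returns a map of each visited URL to its hop count."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        url = queue.popleft()
        hops = seen[url]
        if hops == max_hops:
            continue  # do not follow links beyond the hop limit
        for nxt in links_of(url):
            if nxt not in seen:
                seen[nxt] = hops + 1
                queue.append(nxt)
    return seen
```

Even against a server whose links recurse forever (like a recursive
symlink), the walk terminates once the hop limit is reached.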

*  Sort problems on the server page in a different order
   (e.g. critical errors first).

*  Improve the reporting pages to be more clear about what is going
   on, etc.

*  Using IP addresses in reports can conflict with HTTP 1.1 virtual
   hosting. Also, in the current situation several reports will be
   generated when a server has several names for the same IP address,
   or when the same content is served through two web servers. The
   latter situation could be solved by adding an option indicating
   that these hosts are actually the same:
   www1.domain=www2.domain=www3.domain.
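Parsing that option value and canonicalizing host names could be as
simple as this sketch (illustrative Python; the choice of the first
listed host as the canonical one is an assumption):

```python
def parse_aliases(spec):
    """Turn 'www1.domain=www2.domain=www3.domain' into a map from each
    host to one canonical host (here: the first one listed)."""
    hosts = spec.split("=")
    canonical = hosts[0]
    return {h: canonical for h in hosts}

def canonical_host(host, aliases):
    """Report under the canonical name; unknown hosts pass through."""
    return aliases.get(host, host)
```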

*  Retry pages that generate mysterious 500-series errors (notably
   501 Not Implemented) with a GET request.
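The retry logic itself is a small HEAD-then-GET fallback. A sketch
(illustrative Python with the requests abstracted away as callables,
so nothing touches the network; the set of retryable codes is an
assumption):

```python
# Assumption for this sketch: which status codes trigger a GET retry.
RETRY_WITH_GET = {500, 501, 502, 503}

def check_link(url, head, get):
    """Try a cheap HEAD first; fall back to a full GET when the server
    answers with a 500-series error. head/get are callables returning
    an HTTP status code."""
    status = head(url)
    if status in RETRY_WITH_GET:
        status = get(url)
    return status
```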

*  Get Checkbot registered as a module, so that it can be installed
   via CPAN?
