Spider trap
For trapping real spiders, see Insect trap.
A spider trap (or crawler trap) is a set of web pages that may intentionally or unintentionally be used to cause a web crawler or search bot to make an infinite number of requests or cause a poorly constructed crawler to crash. Web crawlers are also called web spiders, from which the name is derived. Spider traps may be created to "catch" spambots or other crawlers that waste a website's bandwidth. They may also be created unintentionally by calendars that use dynamic pages with links that continually point to the next day or year.
Common techniques used are:
- Creation of indefinitely deep directory structures, such as
http://example.com/bar/foo/bar/foo/bar/foo/bar/...
- Dynamic pages that produce an unbounded number of documents for a web crawler to follow. Examples include calendars[1] and algorithmically generated language poetry.[2] A calendar-style example is sketched after this list.
- Documents filled with a very large number of characters, crashing the lexical analyzer that parses the document.
- Documents with session IDs based on required cookies.
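The calendar case shows how such a trap can arise unintentionally. The following minimal sketch, built around a hypothetical page-rendering function calendar_page and Python's standard datetime module, produces a page for any requested date that always links to the following day, so a crawler that blindly follows links never runs out of new URLs to request.

```python
from datetime import date, timedelta

def calendar_page(requested: date) -> str:
    """Render a calendar page that always links to the next day.

    A naive crawler that follows every link will request
    /calendar/2024-01-01, then /calendar/2024-01-02, and so on,
    never reaching a final page.
    """
    next_day = requested + timedelta(days=1)
    return (
        f"<html><body>"
        f"<h1>Events on {requested.isoformat()}</h1>"
        f'<a href="/calendar/{next_day.isoformat()}">Next day</a>'
        f"</body></html>"
    )
```

Because every generated page is syntactically valid and contains a fresh link, nothing on the page itself signals to the crawler that the chain is endless.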
There is no algorithm to detect all spider traps. Some classes of traps can be detected automatically, but new, unrecognized traps arise quickly.
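As one illustration of partial, heuristic detection, a crawler might cap URL depth, reject paths with many repeated segments, and enforce a per-host page budget. The sketch below shows such checks; the function name, thresholds, and structure are illustrative assumptions rather than the behavior of any particular crawler, and the checks catch only some classes of traps.

```python
from urllib.parse import urlparse
from collections import Counter

MAX_DEPTH = 16           # reject URLs nested deeper than this
MAX_SEGMENT_REPEATS = 3  # reject paths like /bar/foo/bar/foo/bar/...
MAX_PAGES_PER_HOST = 10_000

def looks_like_trap(url: str, pages_fetched_per_host: dict[str, int]) -> bool:
    """Heuristically flag URLs that are likely spider traps.

    These checks cover only some trap patterns (deep or repetitive
    paths, hosts serving an unbounded number of pages); they cannot
    detect every trap.
    """
    parsed = urlparse(url)
    segments = [s for s in parsed.path.split("/") if s]

    # Indefinitely deep directory structures.
    if len(segments) > MAX_DEPTH:
        return True

    # Repetitive paths such as /bar/foo/bar/foo/... suggest a loop.
    most_common = Counter(segments).most_common(1)
    if most_common and most_common[0][1] > MAX_SEGMENT_REPEATS:
        return True

    # A per-host page budget bounds damage from dynamic-page traps.
    if pages_fetched_per_host.get(parsed.netloc, 0) >= MAX_PAGES_PER_HOST:
        return True

    return False
```

A crawler would run such a check before enqueueing each discovered link and increment the per-host counter after every successful fetch.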