Web crawler
Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing (web spidering)
A web crawler or spider is a computer program that automatically fetches the contents of web pages, typically following hyperlinks from one page to the next. The program then analyses each page's content, for example to index it by certain search terms. Search engines commonly use web crawlers to build their indexes.[1]
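To make the fetch-analyse-follow cycle concrete, the following is a minimal sketch of a breadth-first crawler using only the Python standard library. The seed URL, the max_pages limit, and the LinkParser helper are illustrative choices, not part of any particular search engine's implementation; a real crawler would also honour robots.txt, rate-limit its requests, and store the page text for indexing.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects href values from <a> tags on a page (illustrative helper)."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl starting from seed_url.

    Returns a dict mapping each visited URL to the list of
    hyperlinks found on that page.
    """
    visited = {}
    frontier = deque([seed_url])
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue  # do not fetch the same page twice
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that fail to load
        parser = LinkParser()
        parser.feed(html)
        # Resolve relative links against the current page's URL
        # and keep only HTTP(S) links the crawler can fetch.
        links = [urljoin(url, href) for href in parser.links]
        links = [l for l in links if l.startswith(("http://", "https://"))]
        visited[url] = links
        frontier.extend(links)  # follow the discovered links next
    return visited


if __name__ == "__main__":
    for page, links in crawl("https://example.com").items():
        print(page, "->", len(links), "links")
```

Breadth-first order is one common choice here because it visits pages close to the seed before wandering deep into any single site; a production crawler would add per-host politeness delays and persistent storage on top of this loop.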
Related pages
- HTTrack – a web crawler released in 1998
References