Web crawler

A web crawler, also called a spider, is a computer program that automatically fetches the contents of web pages. The program then analyses the fetched content, for example to index it by search terms. Search engines commonly use web crawlers.[1]
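
The fetch-and-follow loop described above can be shown with a short sketch. The following Python program is a minimal illustration, not any particular search engine's implementation; the seed URL, page limit, and timeout are arbitrary assumptions. It fetches a page, extracts its links, and repeats breadth-first:

# A minimal sketch of a web crawler using only the Python standard
# library. The seed URL, page limit, and timeout are illustrative
# assumptions; a real crawler would also honour robots.txt and
# rate-limit its requests.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags in an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: fetch a page, extract its links, repeat."""
    frontier = deque([seed_url])   # URLs waiting to be fetched
    visited = set()                # URLs already fetched
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            with urlopen(url, timeout=5) as response:
                html = response.read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue               # skip unreachable or malformed URLs
        visited.add(url)
        parser = LinkExtractor()
        parser.feed(html)
        # Resolve relative links against the page they were found on.
        frontier.extend(urljoin(url, link) for link in parser.links)
    return visited


if __name__ == "__main__":
    for page in crawl("https://example.com"):
        print(page)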

Examples

  • HTTrack – a web crawler released in 1998

References
