Answer:
crawler
Step-by-step explanation:
A crawler (also known as a spider or robot) is a component of a search engine that indexes websites automatically. Its main purpose is to systematically browse the World Wide Web, typically for the purpose of Web indexing.
It does this by starting from a list of links to visit; as it fetches each page, it identifies all the hyperlinks found there and copies them back into the list of links to visit, so the crawl keeps expanding.
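As a rough illustration, here is a minimal sketch of that loop in Python, using only the standard library. The seed URL and the max_pages limit are illustrative assumptions, not part of any real search engine's implementation:

```python
# Minimal crawl-loop sketch (assumptions: hypothetical seed URL,
# small page limit; real crawlers also respect robots.txt, politeness
# delays, and much more).
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag found in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    to_visit = [seed_url]   # the list of links to visit
    visited = set()         # pages already fetched/indexed
    while to_visit and len(visited) < max_pages:
        url = to_visit.pop(0)
        if url in visited:
            continue
        visited.add(url)
        try:
            page = urlopen(url).read().decode("utf-8", errors="ignore")
        except Exception:
            continue        # skip unreachable pages
        parser = LinkExtractor()
        parser.feed(page)
        # Copy every hyperlink found on the page back into the list.
        for link in parser.links:
            to_visit.append(urljoin(url, link))
    return visited

# Example usage (hypothetical seed URL):
# crawl("https://example.com")
```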