Answer:
d. crawlers
Step-by-step explanation:
Crawlers (also called web spiders or bots)
A crawler is an automated program that systematically visits and scans web pages so that a search engine can index and rank them.
The web spider starts from a few initial URLs, called seed URLs, and then discovers the pages linked from them and follows those links in turn.
It collects the URLs it finds and adds them to a list (the crawl frontier) so they can be fetched and indexed later.
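To make the seed/frontier idea concrete, here is a minimal sketch in Python: the crawler keeps a queue of URLs waiting to be visited and a set of URLs already seen. The seed URL, the page limit, and the extract_links() stub are illustrative assumptions, not a real implementation.
```python
from collections import deque

def extract_links(url):
    # Stub: a real crawler would download the page and parse its <a> tags.
    return []

def crawl(seed_urls, max_pages=100):
    frontier = deque(seed_urls)   # URLs waiting to be processed
    visited = set()               # URLs already fetched and "indexed"
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)          # record the page as processed
        for link in extract_links(url):
            if link not in visited:
                frontier.append(link)   # newly discovered pages join the list
    return visited

crawl(["https://example.com"])    # hypothetical seed URL
```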
The crawler also reads the site's robots.txt file and robots meta tags to collect the instructions the site owner has left, such as which pages should be ignored or excluded from the index.
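As a sketch of how a crawler can honor robots.txt, Python's standard library includes urllib.robotparser; the site URL and user-agent name below are assumptions for illustration.
```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()                                  # download and parse the rules

if rp.can_fetch("MyCrawler", "https://example.com/private/page.html"):
    print("Allowed to crawl this page")
else:
    print("Disallowed by robots.txt")
```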
Finally, it scans the content and hyperlinks of each page and, with this information, builds a map of the website covering all the pages it can reach.
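Below is a small sketch of that last step: extracting the hyperlinks from a downloaded page and recording them as a simple site map (page to outgoing links), using only the standard library. The page URL and sample HTML are assumptions for the example.
```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags found in a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page_url = "https://example.com/index.html"
html = '<a href="/about.html">About</a> <a href="contact.html">Contact</a>'

parser = LinkExtractor()
parser.feed(html)

# Site map: each crawled page maps to the absolute URLs it links to.
site_map = {page_url: [urljoin(page_url, link) for link in parser.links]}
print(site_map)
```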