Answer:
Crawler search engines rely on sophisticated computer programs called spiders or crawlers that browse webpages, follow links, and collect other online content, which is then stored in the search engine's page repository.
Step-by-step explanation:
A crawler is a program that visits websites and scans their content and other information to build a list of terms for a search engine's index. The most popular search engines online use such a program, commonly known as a spider or robot. Programmers usually start a crawler from a list of known pages (seed URLs) and let it follow the links it discovers to reach new ones.
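The process described above (start from known pages, scan each page, follow its links, store the result in a repository) can be sketched in Python. This is a minimal illustration, not a real search-engine crawler: the `fetch` callable stands in for an HTTP client, and the names `LinkExtractor` and `crawl` are made up for this example.

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags found while scanning a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, fetch, max_pages=100):
    """Breadth-first crawl starting from a list of seed URLs.

    `fetch` is any callable that returns a page's HTML (a stand-in
    for a real HTTP client). Returns the page repository as a
    dict mapping each visited URL to its content.
    """
    repository = {}               # the "page repository" from the answer
    frontier = deque(seed_urls)   # pages waiting to be visited
    while frontier and len(repository) < max_pages:
        url = frontier.popleft()
        if url in repository:     # skip pages already crawled
            continue
        html = fetch(url)
        repository[url] = html    # store the page
        extractor = LinkExtractor()
        extractor.feed(html)
        frontier.extend(extractor.links)  # follow discovered links
    return repository
```

A real crawler would add things this sketch omits: respecting `robots.txt`, rate limiting, and deduplicating URLs that differ only in formatting.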