Final answer:
The Web Application Scanner's Crawling Hints setting can use a site's sitemap.xml and robots.txt files to find links and directories to crawl. The sitemap.xml lists the URLs available for crawling, while robots.txt can point to sitemap locations and tell crawlers which paths to visit or avoid.
Step-by-step explanation:
When the Crawling Hints setting is configured in a Web Application Scanner (WAS), the scanner can crawl all links and directories found in both the sitemap.xml and the robots.txt file. The sitemap.xml is primarily used to inform search engines which URLs on a website are available for crawling.
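As a rough sketch of what a crawler does with this hint, the Python example below parses a minimal sitemap and collects the listed URLs. The sitemap content, the example.com URLs, and the sitemap_urls helper are all hypothetical; a real scanner would fetch the file over HTTP before parsing it.

    import xml.etree.ElementTree as ET

    # Hypothetical sitemap.xml content; a real scanner would fetch this
    # from something like https://example.com/sitemap.xml.
    SITEMAP_XML = """<?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/</loc>
        <lastmod>2024-01-15</lastmod>
      </url>
      <url>
        <loc>https://example.com/login</loc>
      </url>
    </urlset>"""

    # The sitemap schema lives in this XML namespace.
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    def sitemap_urls(xml_text):
        """Extract every <loc> URL from a sitemap document."""
        root = ET.fromstring(xml_text)
        return [loc.text for loc in root.findall("sm:url/sm:loc", NS)]

    for url in sitemap_urls(SITEMAP_XML):
        print(url)  # each URL becomes a seed for the scanner's crawl queue

Running this prints the two example URLs, each of which the scanner would queue as a starting point for its crawl.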
It is an XML file that lists the URLs for a site, along with optional metadata about each URL, such as when it was last modified. Robots.txt, on the other hand, is a text file webmasters create to instruct web robots how to crawl pages on their website. While it is primarily used to tell crawlers which paths to avoid (via Disallow rules), it can also include Sitemap: directives that point crawlers to the site's sitemap files, giving the scanner additional entry points.
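To illustrate the robots.txt side, here is a minimal sketch using Python's standard urllib.robotparser. The robots.txt rules and the example.com paths are hypothetical, and site_maps() requires Python 3.8 or later.

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt content; a real scanner would fetch this
    # from something like https://example.com/robots.txt.
    robots_lines = [
        "User-agent: *",
        "Disallow: /admin/",
        "Allow: /",
        "",
        "Sitemap: https://example.com/sitemap.xml",
    ]

    rp = RobotFileParser()
    rp.parse(robots_lines)

    # Sitemap: directives declared in robots.txt act as extra crawling hints.
    # site_maps() returns None when no Sitemap lines are present.
    print(rp.site_maps())  # ['https://example.com/sitemap.xml']

    # Disallow/Allow rules tell a well-behaved crawler what to skip or fetch.
    print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False
    print(rp.can_fetch("*", "https://example.com/products"))        # True

Together, the two files give the scanner both a list of known URLs to seed its crawl and the rules a polite crawler is expected to respect.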