crawls a number of websites with its indexing agent, VisionNetBot, also known as a spider. A spider works much like a person surfing the web: it moves from page to page, indexing the content it finds along the way.

VisionNetBot obeys the robots.txt standard. robots.txt is a file placed in your web server's root HTML directory that tells search engine crawlers which pages you do not want indexed. This page [external link] explains the standard in more detail.
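As an illustration, a minimal robots.txt served from the site root (for example, at /robots.txt) might look like the sketch below. The user-agent token "VisionNetBot" is an assumption here, matching the crawler's name; the exact token a crawler responds to is defined by the crawler's operator.

```
# Tell VisionNetBot not to index anything under /private/
# (the user-agent token is assumed to match the crawler's name)
User-agent: VisionNetBot
Disallow: /private/

# All other crawlers may index the whole site
User-agent: *
Disallow:
```

Crawlers that honor the standard fetch this file before crawling a site and skip any path matched by a Disallow rule in the section for their user-agent.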