Mozilla/5.0 (compatible; LinksIndexerBot/1.0; +http://linksindexer.com/bot)
4 Parallel Requests at a Time
A bot (also called a web crawler or spider) is a computer program that browses the Web in a methodical, automated manner to gather information. Bots are part of how many services work, for example search engines like Google.
LinksIndexerBot is our web crawler and a central tool in our sitemap campaigns. Since our service provides website URL indexing and crawling, we need to automatically parse third-party sites to verify their URLs and status. The crawler aggregates site data into a brief URL profile for every site; building that profile usually involves querying your site for metadata and favicons, taking a screenshot of its homepage, and a few other actions. These site profiles are what our service is built on. To address any concerns up front: LinksIndexerBot never harvests e-mail addresses or any content unrelated to sitemap campaigns.
We want our crawler to be as 'polite' as possible, sending a minimal number of requests to your site. If it nevertheless causes you any issues, please let us know via our contact form and include any information that might be helpful.
Our crawler obeys any standard-conforming rule you provide in your robots.txt file. To prevent LinksIndexerBot from visiting and parsing your site, add the following lines to your robots.txt file:
User-agent: LinksIndexerBot
Disallow: /
With these rules in place, our crawler will visit your site only to fetch this one file and apply your robots policy, and will not return anytime soon. LinksIndexerBot generally obeys the following robots.txt directives: Allow, Disallow, Crawl-delay, and Host. Specifications for these directives are available at http://www.robotstxt.org.
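As an illustration of how these directives behave in practice (not our crawler's actual implementation), the following sketch uses Python's standard-library robots.txt parser. The rule set and URLs are hypothetical examples:

```python
# Minimal sketch: checking robots.txt rules the way a polite crawler might.
# The rules and example.com URLs below are hypothetical, for illustration only.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: LinksIndexerBot",
    "Disallow: /private/",
    "Crawl-delay: 10",
]

parser = RobotFileParser()
parser.parse(rules)

# A Disallow rule blocks matching paths for the named user agent...
print(parser.can_fetch("LinksIndexerBot", "https://example.com/private/page"))  # False
# ...while other paths remain crawlable.
print(parser.can_fetch("LinksIndexerBot", "https://example.com/public/page"))   # True
# Crawl-delay tells the bot to wait between requests (in seconds).
print(parser.crawl_delay("LinksIndexerBot"))  # 10
```

A crawler that honors these rules would skip disallowed URLs entirely and sleep for the crawl-delay interval between consecutive requests to the same host.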
If LinksIndexerBot visits your website too frequently or ignores your robots.txt directives, please contact us.