The reason: almost all crawlers ignore the directives in robots.txt.
To stop unwanted spam, we give all crawler bots the opportunity to identify themselves by attempting to crawl the pages here. Once we have trapped enough bots, selected ones will be denied access in the future. This phase takes some time, because some crawlers change their address patterns and we need to analyze this very carefully.
Our list currently contains 9,071 entries for distinct crawlers, collected since 31.01.2021.
Sorry, the page you are looking for is currently unavailable to you.
Please try again later.
Faithfully yours, LB-lab.