Trapping crawling bots

Access error type: NOT CRITICAL

The reason: almost all crawlers ignore the directives in robots.txt.

To stop unwanted spam, we give every crawler bot the opportunity to identify itself by attempting to crawl the trap pages here. Once enough bots have been trapped, selected ones lose their access rights. This phase takes some time, because some crawlers change their address patterns and we need to analyze this very carefully.
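A trap of this kind can be as simple as a URL that robots.txt explicitly disallows: a well-behaved crawler never requests it, so any client that does has ignored the directives and can be recorded. The Python sketch below illustrates the idea only; the /trap/ path, the in-memory blacklist, and the use of Flask are assumptions, not LB-lab's actual implementation.

```python
# Minimal robots.txt honeypot sketch (illustrative assumption, not the
# implementation behind this page).
from flask import Flask, Response, abort, request

app = Flask(__name__)
blacklist = set()  # a real deployment would persist this, e.g. to a database

@app.route("/robots.txt")
def robots():
    # Well-behaved crawlers read this and never visit /trap/.
    return Response("User-agent: *\nDisallow: /trap/\n", mimetype="text/plain")

@app.route("/trap/")
def trap():
    # Only clients that ignore robots.txt ever reach this URL.
    blacklist.add(request.remote_addr)
    abort(451)  # matches the "Error 451 access rights filtering" on this page

@app.before_request
def filter_blacklisted():
    # Deny every further request from a trapped client.
    if request.remote_addr in blacklist:
        abort(451)

if __name__ == "__main__":
    app.run()
```

A real deployment would likely also group trapped clients by network rather than by single address, since, as noted above, crawlers change their address patterns.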

Currently our list contains 9072 entries for distinct crawlers, collected since 31.01.2021.

Sorry, the page you are looking for is currently unavailable to you.
Please try again later.

Faithfully yours, LB-lab.

If you are the system administrator of this resource, check the event log for details.
Your client [ec2-34-229-223-223.compute-1.amazonaws.com] is not blacklisted.

Error 451: access rights filtering phase