AI bots strain Wikimedia as bandwidth surges 50%
Crawlers that evade detection
Making the situation more difficult, many AI-focused crawlers do not play by established rules. Some ignore robots.txt directives. Others spoof browser user agents to disguise themselves as human visitors. Some even rotate through residential IP addresses to avoid blocking. These tactics have become common enough to force individual developers like Xe Iaso to adopt drastic protective measures for their code repositories.
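To see why user-agent spoofing is so effective, consider a minimal Python sketch of a naive server-side filter. The token list and function names here are hypothetical, for illustration only, and are not drawn from Wikimedia's infrastructure; the point is simply that a crawler sending a browser string passes this kind of check untouched.

```python
# Hypothetical, minimal sketch of a naive user-agent filter (not Wikimedia's code).
# A crawler that spoofs a browser user agent sails past a check like this,
# which is why operators fall back to behavioral or IP-based heuristics.

KNOWN_BOT_TOKENS = ("GPTBot", "CCBot", "Bytespider", "python-requests")

def is_declared_bot(user_agent: str) -> bool:
    """Return True only if the client honestly identifies itself as a bot."""
    return any(token.lower() in user_agent.lower() for token in KNOWN_BOT_TOKENS)

# A spoofed crawler sends a browser string, so the filter sees an ordinary visitor:
spoofed = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/120.0 Safari/537.36"
print(is_declared_bot(spoofed))  # False -- indistinguishable from a human browser at this layer
```

Because the request looks identical to ordinary browser traffic at the HTTP layer, distinguishing such crawlers requires costlier signals, such as request patterns or IP reputation, and residential IP rotation undercuts the latter as well.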
This leaves Wikimedia’s Site Reliability team ...
Read more at arstechnica.com