

But with the rise of AI, the dynamic is changing: We are observing a significant increase in request volume, with most of this traffic being driven by scraping bots collecting training data for large language models (LLMs) and other use cases. Automated requests for our content have grown exponentially, alongside the broader technology economy, via mechanisms including scraping, APIs, and bulk downloads. This expansion happened largely without sufficient attribution, which is key to drive new users to participate in the movement, and is causing a significant load on the underlying infrastructure that keeps our sites available for everyone.
- https://diff.wikimedia.org/2025/04/01/how-crawlers-impact-the-operations-of-the-wikimedia-projects/
Why can’t the world have more of this and less of… everything else that’s going on right now? 😕
I don’t really have a point; it’s just sad that humans have the capacity to do such cool, fun, creative things, and instead we’re burning the world down so computers can churn out garbage, and blowing each other up because we’re different from each other.