AI companies are violating a basic social contract of the web and ignoring robots.txt
(www.theverge.com)
Put something in robots.txt that isn't supposed to be hit and is hard to hit by non-robots. Log and ban all IPs that hit it.
Imperfect, but can't think of a better solution.
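Something along these lines, maybe (the path is just a placeholder; pick anything no real page ever links to):

User-agent: *
Disallow: /do-not-crawl/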
Good old honeytrap. I'm not sure, but I think that it's doable.
Have a honeytrap page somewhere in your website. Make sure that legit users won't access it. Disallow crawling the honeytrap page through robots.txt.
Then if some crawler still accesses it, you could record+ban it as you said... or you could be even nastier and let it do so. Fill the honeytrap page with poison - nonsensical text that would look like something that humans would write.
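A toy sketch of that nastier option in Python, just to illustrate the idea (the trap path, port, and filler words are all made up here):

import random
from http.server import BaseHTTPRequestHandler, HTTPServer

TRAP_PATH = "/honeytrap/"   # the path robots.txt disallows
WORDS = "lorem ipsum dolor sit amet consectetur adipiscing elit sed do".split()

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith(TRAP_PATH):
            # A disallowed path was requested: serve a wall of nonsense.
            # (Could also record self.client_address[0] on a ban list here.)
            body = " ".join(random.choices(WORDS, k=2000)).encode()
        else:
            body = b"<html><body>normal page</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), TrapHandler).serve_forever()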
I think I used to do something similar with email spam traps. Not sure if it's still around, but basically you could help build NaCL lists by posting an email address on your website that was visible in the source code but not to normal users, like in a div positioned way off the left side of the screen.
Anyway, spammers that do regular expression searches for email addresses would email it and get their IPs added to naughty lists.
I'd love to see something similar with robots.
Yup, it's the same approach as email spam traps, minus the naughty list. But... holy fuck, a shareable bot IP list would be an amazing addition; it would increase the damage to those web-crawling businesses.
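If such a shared list existed, consuming it could be as simple as this sketch (the URL and file locations are purely hypothetical):

import urllib.request

SHARED_LIST_URL = "https://example.org/bad-bot-ips.txt"   # hypothetical shared list
LOCAL_BANS = "local-bans.txt"                             # IPs caught by your own trap
OUTPUT = "bad-bots.conf"                                   # nginx include, one deny per line

def load_lines(text):
    return {line.strip() for line in text.splitlines() if line.strip()}

with urllib.request.urlopen(SHARED_LIST_URL) as resp:
    shared = load_lines(resp.read().decode())

try:
    with open(LOCAL_BANS) as fh:
        local = load_lines(fh.read())
except FileNotFoundError:
    local = set()

with open(OUTPUT, "w") as out:
    for ip in sorted(shared | local):
        out.write(f"deny {ip};\n")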
But with all of the cloud resources now, you can rotate through IP addresses without any trouble. Hell, you could just browse over IPv6 and not even worry, given how cheap those addresses are!
Yeah, that throws a monkey wrench into the idea. That's a shame, because "either respect robots.txt or you're denied access to a lot of websites!" is appealing.
That's when Google's browser DRM thing starts sounding like a good idea 😭
Even better. Build a WordPress plugin to do this.
I’m the idiot human that digs through robots.txt and the site map to see things that aren’t normally accessible by an end user.
For banning: I'm not sure, but I don't think so. It seems to me that prefetching behaviour is dictated by the page linking to another, so to avoid any issue all the site owner needs to do is not prefetch links to the honeytrap.
For poisoning: I'm fairly certain that it doesn't. At most you'd prefetch a page full of rubbish.
"Help, my website no longer shows up in Google!"
Yeah, this is a pretty classic honeypot method. Basically make something available but inaccessible to the normal user. Then you know anyone who accesses it is not a normal user.
I’ve even seen this done with Steam achievements before; There was a hidden game achievement which was only available via hacking. So anyone who used hacks immediately outed themselves with a rare achievement that was visible on their profile.
That’s a bit annoying as it means you can’t 100% the game as there will always be one achievement you can’t get.
perhaps not every game is meant to be 100% completed
There are tools that just flag you as having gotten an achievement on Steam, you don't even have to have the game open to do it. I'd hardly call that 'hacking'.
Better yet, point the crawler to a massive text file of almost-but-not-quite grammatically correct garbage to poison the model. Something it will recognize as language and internalize, but that will severely degrade the quality of its output.
Maybe one of the lorem ipsum generators could help.
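Or a crude home-grown generator along these lines (the seed text is just a stand-in; any large plain-text corpus would do), keeping real words and plausible sentence lengths but scrambling the order:

import random

SEED_TEXT = """The quick brown fox jumps over the lazy dog.
A journey of a thousand miles begins with a single step."""

def poison(seed=SEED_TEXT, sentences=200):
    # Reuse real vocabulary but shuffle word order, so the output looks
    # like fluent text while being grammatical nonsense.
    words = seed.split()
    out = []
    for _ in range(sentences):
        chunk = random.choices(words, k=random.randint(8, 20))
        out.append(" ".join(chunk).capitalize() + ".")
    return " ".join(out)

if __name__ == "__main__":
    print(poison())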
a bad-bot .htaccess trap.
robots.txt is purely textual; you can't run JavaScript from it or log anything. Plus, a crawler that doesn't intend to follow robots.txt wouldn't query it in the first place.
If it doesn't get queried, that's the webscraper's problem. You don't need JS built into the robots.txt file either. Just add some line like:

Disallow: /honeytrap/

Any client that hits that page (and maybe doesn't pass a CAPTCHA check) gets banned. Or even better, they get a long stream of nonsense.
server {
    listen 80;
    server_name herebedragons.example.com;
    root /dev/random;
}
Nice idea! Better use /dev/urandom though, as that is non-blocking. See here.
That was really interesting. I always used urandom out of habit and wondered what the difference was.
I wonder if Nginx would just keep reading /dev/random into memory until the kernel OOM-kills it.
I actually love the data-poisoning approach. I think that sort of strategy is going to be an unfortunately necessary part of the future of the web.
Your second point is a good one, but you absolutely can log the IP that requested robots.txt. That's just a standard part of any HTTP server ever, no JavaScript needed.
You'd probably have to go out of your way to avoid logging this. I've always seen such logs enabled by default when setting up web servers.
People not intending to follow it is the real reason not to bother, but it's trivial to track who downloaded the file and then hit something they were asked not to.
Like, 10 minutes' work to do right. You don't need JS to do it at all.
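Roughly this much, assuming a combined-format access log (the log location and trap path below are guesses about your setup):

import re

LOG_FILE = "/var/log/nginx/access.log"   # assumed combined-format access log
TRAP_PATH = "/honeytrap/"                # the path robots.txt disallows

line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|HEAD|POST) (\S+)')
fetched_robots, hit_trap = set(), set()

with open(LOG_FILE) as fh:
    for line in fh:
        m = line_re.match(line)
        if not m:
            continue
        ip, path = m.groups()
        if path == "/robots.txt":
            fetched_robots.add(ip)
        elif path.startswith(TRAP_PATH):
            hit_trap.add(ip)

# Clients that downloaded the rules and then broke them anyway.
for ip in sorted(fetched_robots & hit_trap):
    print(ip)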