this post was submitted on 14 Feb 2024
1059 points (98.6% liked)

Technology
[–] i_have_no_enemies@lemmy.world 9 points 9 months ago (2 children)
[–] wise_pancake@lemmy.ca 57 points 9 months ago* (last edited 9 months ago)

robots.txt is a file served from a standard location on a web server (example.com/robots.txt) which sets guidelines for how scrapers should behave.

That can range from saying "don't bother indexing the login page" to "Googlebot go away".
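For example, a hypothetical robots.txt covering both of those cases might look like this (the paths are made up for illustration):

```
User-agent: Googlebot
Disallow: /

User-agent: *
Disallow: /login
```

An empty `Disallow:` line, conversely, tells that bot it's welcome everywhere.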

It's also in the first paragraph of the article.

[–] mrnarwall@lemmy.world 17 points 9 months ago (1 children)

Robots.txt is a file that is accessible with an ordinary HTTP request. It's a server-side configuration file that sets rules for what automated web crawlers are allowed to do, and it can specify both who is and who isn't allowed. Googlebot is usually the most widely allowed bot, simply because Google's crawler is how websites get found for search results. But it's basically the honor system. You could write a scraper today that goes to a website, gets told it doesn't have permission to view a page, ignores that, and still gets the information.
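Python's standard library can parse these rules, which makes the honor-system point easy to see. A minimal sketch (the rules string and bot names here are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules: Googlebot may go anywhere, everyone else
# must stay out of /private/.
rules = """
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Googlebot", "https://example.com/private/page"))     # True
print(parser.can_fetch("SomeOtherBot", "https://example.com/private/page"))  # False
```

Nothing stops a scraper from skipping the `can_fetch` call entirely; the file only describes what the site would like to happen.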

[–] echodot@feddit.uk 5 points 9 months ago* (last edited 9 months ago)

I do not think it is even part of the HTTP protocol; I think it's just a pseudo add-on. It's barely even a protocol, really: it's basically just a page that bots can look at, with no real pre-agreed syntax.

If you want to make a bot that doesn't respect robots.txt, you don't even need to do anything complicated: you just leave out the code that looks at the page. It's not enforceable at all.
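A sketch of that point, using a toy in-memory "site" standing in for a real server (all names and URLs here are invented): the only difference between a polite crawler and a rude one is a check the crawler itself chooses to run.

```python
from urllib.robotparser import RobotFileParser

# Toy "server": the pages it will return for any request that arrives.
SITE = {
    "https://example.com/robots.txt": "User-agent: *\nDisallow: /secret/",
    "https://example.com/secret/data": "the data",
}

def rude_fetch(url):
    # Never looks at robots.txt; the server serves the page anyway.
    return SITE[url]

def polite_fetch(url, agent="example-bot"):
    # Voluntarily checks robots.txt first and backs off if disallowed.
    parser = RobotFileParser()
    parser.parse(SITE["https://example.com/robots.txt"].splitlines())
    if not parser.can_fetch(agent, url):
        return None
    return SITE[url]

print(rude_fetch("https://example.com/secret/data"))    # the data
print(polite_fetch("https://example.com/secret/data"))  # None
```

The server has no way to tell which of the two functions made the request.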