this post was submitted on 24 Oct 2024
85 points (100.0% liked)

Asklemmy


With everything that's happening there, I was wondering if it was possible. Obviously their size is massive, but I'm sure there's a ton of duplicated stuff. Also some things are more important to preserve than others, and some things are preserved elsewhere (Anna's Archive, Libgen, and Z-Lib come to mind that could preserve books if the IA disappeared).

But how could things get archived from the IA (assuming it's possible) on both a personal level (aka I want to grab a copy of that wayback snapshot) and on a more wide scale community level? Are there people already working on it? If not, what would be the best theoretical way to get started?

And what are the most important things in your opinion that should be prioritized if the IA was about to disappear and we only had so much time and storage to utilize?

top 20 comments
[–] loppwn@sh.itjust.works 37 points 6 days ago (1 children)

The "Archive Team" tried this, but it failed; you can read about it here:

https://wiki.archiveteam.org/index.php/INTERNETARCHIVE.BAK

[–] nitefox@sh.itjust.works 14 points 6 days ago (1 children)

What are the conclusions of the research? Why was it shut down?

[–] bizarroland@fedia.io 13 points 6 days ago (2 children)

I mean, unless you're sitting on an exabyte of spare storage you don't know what to do with, it's a pretty hefty undertaking.

[–] Anivia@feddit.org 6 points 5 days ago

Divide it up into torrent files of a reasonable size and have the community seed them, everyone helping as much as they can or want. You could even make a custom torrent client that automatically chooses the least healthy torrents on the network to download and seed.
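
A minimal sketch of what that "least healthy first" selection could look like, assuming the client has some way to query seeder counts (e.g. a tracker scrape); the torrent list, sizes, and seeder numbers below are made up for illustration:

```python
# Hypothetical sketch: pick the least-seeded torrents to mirror first,
# within a volunteer's storage budget. The data here is invented.

def pick_torrents_to_seed(torrents, capacity_bytes):
    """Greedily choose the least-healthy torrents that fit in capacity_bytes."""
    chosen = []
    used = 0
    # Fewest seeders first, so effort goes where the swarm is weakest.
    for t in sorted(torrents, key=lambda t: t["seeders"]):
        if used + t["size_bytes"] <= capacity_bytes:
            chosen.append(t)
            used += t["size_bytes"]
    return chosen

torrents = [
    {"name": "item-a", "seeders": 1, "size_bytes": 40 * 10**9},
    {"name": "item-b", "seeders": 12, "size_bytes": 5 * 10**9},
    {"name": "item-c", "seeders": 0, "size_bytes": 20 * 10**9},
]

for t in pick_torrents_to_seed(torrents, capacity_bytes=50 * 10**9):
    print("seed", t["name"])
```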

[–] nitefox@sh.itjust.works 3 points 5 days ago

Obviously, but so if the current storage gets corrupted or destroyed, there is no way to restore all that data?

[–] SubArcticTundra@lemmy.ml 30 points 6 days ago (1 children)

I think that something like the internet archive – where the body of data is too large and important to store in one place – is where using a federated framework similar to Lemmy might make a lot of sense. What’s more, there are many different organisations which have the incentive to archive their own little slice of the internet (but not those of others), and a federated model would help in linking these up into one easily navigable, and inherently crowd-funded, whole.

[–] Takumidesh@lemmy.world 11 points 6 days ago (1 children)

Why federated and not just regular p2p?

Internet archive already supports torrents.

[–] JusticeForPorygon@lemmy.world 7 points 6 days ago (2 children)

Forgive me because I'm not very familiar with the technology, but 99 petabytes (estimated size of the Internet Archive) seems like a little much for even a large network of home computers.

Don't get me wrong, decentralizing would be great, but I just don't understand how it would be done at this level, especially when, in the grand scheme of things, I don't think there's a whole lot of people who would pitch in.

[–] Takumidesh@lemmy.world 1 points 2 days ago

Each person doesn't need to host everything.

The Internet Archive already creates torrents automatically. You can go download and seed torrents for some items right now, and you're immediately doing your part in decentralizing the Internet Archive.
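
As a rough illustration (not an official workflow): many IA items expose their auto-generated torrent at a predictable download URL, so fetching one programmatically could look something like this. The identifier below is only a placeholder.

```python
# Rough sketch: fetch the torrent file the Internet Archive generates for an
# item. Items generally expose it at .../download/<id>/<id>_archive.torrent;
# "some-item-identifier" is a placeholder, not a real item.
import urllib.request

def fetch_item_torrent(identifier: str, dest: str) -> None:
    url = f"https://archive.org/download/{identifier}/{identifier}_archive.torrent"
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())

fetch_item_torrent("some-item-identifier", "some-item-identifier.torrent")
# Open the .torrent in any BitTorrent client and leave it seeding.
```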

[–] Anivia@feddit.org 5 points 5 days ago

99 petabytes is not that much, really; my NAS has a quarter petabyte of storage, some of which I can spare. This is something that just a few thousand volunteers could realistically manage.
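
Back-of-the-envelope math for that claim, with assumed (not sourced) numbers for redundancy and volunteer count:

```python
# Rough arithmetic for the "few thousand volunteers" claim; the redundancy
# factor and volunteer count are assumptions for illustration.
total_pb = 99        # estimated corpus size, in petabytes
copies = 3           # assumed redundancy so no single volunteer is critical
volunteers = 4000    # "a few thousand"

per_volunteer_tb = total_pb * 1000 * copies / volunteers
print(f"~{per_volunteer_tb:.0f} TB per volunteer")   # roughly 74 TB each
```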

[–] JusticeForPorygon@lemmy.world 21 points 6 days ago (1 children)

The Internet Archive is supposedly over 99 petabytes in size. That's an unfathomable amount of data.

[–] twei@discuss.tchncs.de 12 points 6 days ago (1 children)

I think it's actually about 150 PB of data, stored georedundantly in the US and the Netherlands. That sounds like a lot, but I think it would be possible to distribute that amount of data.

[–] Xiisadaddy@lemmygrad.ml 3 points 5 days ago

It's possible, but it would require funding, and lots of it, to maintain.

[–] chaos@beehaw.org 11 points 6 days ago

Archive Team looked at this about 10 years ago and found it basically impossible. It was around 14 petabytes of information to fetch, organize, and distribute at the time.

https://wiki.archiveteam.org/index.php/INTERNETARCHIVE.BAK

[–] PoorPocketsMcNewHold@lemmy.ml 4 points 5 days ago

https://github.com/internetarchive/dweb-mirror They've been supporting dweb solutions for years, even if they haven't turned their public dweb.archive.org portal back on.

[–] davel@lemmy.ml 10 points 6 days ago

As others have and will say, it’s an enormous body of content. And this has sparked a shower thought.

What about not trying to be a full, perfect backup, but instead a “best effort”/“better than no backup at all” shoestring budget backup? What about triage backup? What about stripped-down markup? What about lossy text compression?
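
For instance, a "text only, best effort" pass could strip pages down to their visible text and compress the result. A toy sketch using only Python's standard library, with an invented page; real pages would need much more careful extraction:

```python
# Toy sketch of a "better than no backup at all" pass: keep only the visible
# text of an HTML page (markup, scripts, and styles are dropped by design)
# and compress what's left.
import zlib
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def strip_and_compress(html: str) -> bytes:
    parser = TextOnly()
    parser.feed(html)
    return zlib.compress(" ".join(parser.chunks).encode("utf-8"), level=9)

html = "<html><body><h1>Example</h1><p>Some page worth keeping.</p></body></html>"
blob = strip_and_compress(html)
print(len(html), "bytes of HTML ->", len(blob), "bytes stored")
```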

[–] ganymede@lemmy.ml 7 points 6 days ago* (last edited 6 days ago)

it's BIG. could be great to see some different teams tackle different issues.

for example, a transcode team to tag and convert different media to the latest efficient formats might save a lot of space.

and e.g. voice-only recordings could be encoded appropriately vs music etc.

also some methods for diffing snapshots, or some kind of compromise storing snapshots with minimal changes? not ideal, but might be enough to get across the line maybe? (a rough sketch of the diffing idea is at the end of this comment)

re. the "most important", aside from specific items or archives, imo a crucial role might be text-only snapshots of most of the web. it would help increase accountability among modern media outlets, journalists etc.
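
A rough sketch of that snapshot-diffing idea, assuming successive captures of the same URL are mostly identical: keep the first capture in full and store only a diff for later ones. The snapshots below are invented, and re-applying the diff is left to a patch tool.

```python
# Store the first capture in full, then only a unified diff per later capture.
import difflib

snapshot_2023 = "<p>Headline</p>\n<p>Old paragraph.</p>\n" * 200
snapshot_2024 = snapshot_2023.replace("Old paragraph.", "Updated paragraph.", 1)

delta = "\n".join(difflib.unified_diff(
    snapshot_2023.splitlines(),
    snapshot_2024.splitlines(),
    fromfile="2023", tofile="2024", lineterm="",
))

print("full snapshot:", len(snapshot_2024), "bytes")
print("stored diff:  ", len(delta), "bytes")
```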

[–] Dubois_arache@lemmy.blahaj.zone 4 points 6 days ago (1 children)

just torrent everything and create little p2p servers :P

[–] PoorPocketsMcNewHold@lemmy.ml 3 points 5 days ago

The archiving group The-Eye actually did make a backup of the Archive's torrents. https://the-eye.eu/public/Random/archive.org_dumps/torrents/ They have a text file listing the contents of all the collected torrents. It's a text file. That they had to compress. And it's still around 800 MB, just for that one list.

[–] acabjones@lemmygrad.ml 3 points 6 days ago

Imo it's probably not a good idea for another single entity to hold a copy of IA's corpus. IA already operates on a shoestring, but it is still expensive and labor-intensive to run, which requires an endowment or a constant source of funding, both of which come with political entanglements. I just don't think one org can be the indefinite custodian of something so valuable.

A distributed technical solution may eventually be developed which enables regular people to participate in storing and maintaining the corpus. I think IPFS was supposed to be this kind of solution, but it seems the tech isn't capable or mature enough (Anna's Archive abandoned IPFS for technical reasons, and that's a far smaller corpus). BTW, IA has engagement with the dweb community and is interested in finding distributed solutions for storing IA's corpus.