this post was submitted on 14 Jun 2024
33 points (94.6% liked)

Selfhosted


Hello everyone,

In a day or two, I am getting a motherboard with an integrated N100 CPU as a replacement for my Raspberry Pi 4 (2 GB model). I want to run Jellyfin, the *arr stack, and Immich on it. However, I have a lot of photos (for Immich) and movies (for Jellyfin), about 400 GB in total, that I want to back up in case something happens. I have two 1 TB drives: one will hold the original files, and the second will be my boot drive and hold the backup files.

How can I do that? Just copy the files? Do I need to compress them first? What tools do I need to use, and how would you do it?

Thanks in advance.

EDIT: I forgot to mention that I would prefer the backups to be local.

top 17 comments
[–] scrubbles@poptalk.scrubbles.tech 9 points 5 months ago

rclone is my go-to for backups I run regularly. It's very nice and scriptable.

rsync might be what you want for a big one-off job like that: a bit more verbose and... determined?
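For reference, a scripted local rclone run can be as small as this; a minimal sketch, with placeholder paths:

$ rclone sync /srv/media /mnt/backup/media --progress

Adding --dry-run first shows what would be copied or deleted without touching anything.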

[–] pe1uca@lemmy.pe1uca.dev 6 points 5 months ago* (last edited 5 months ago) (1 children)

For local backups I use this command

$ rsync --update -ahr --no-i-r --info=progress2 /source /dest
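(Here -a preserves permissions and timestamps and already implies -r, --update skips files that are newer on the destination, and --no-i-r with --info=progress2 turns off incremental recursion so you get a single accurate overall progress bar.)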

You could first compress them, but since I have the space for the important stuff, this is the only command I need.

Recently I also made a migration similar to yours.

I've read Jellyfin is hard to migrate, so I just reinstalled it and manually recreated the libraries; I didn't mind losing the watch history and other stuff.
IIRC there's a post or GitHub repo with a script that tries to migrate Jellyfin.

For Immich you just have to copy its database files with the same command above and that's it (with the stack down, of course; you don't want to copy DB files while the database is running).
For the library I already had it on an external drive with a symlink, so I just had to mount it on the new machine and create a similar symlink.
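As a minimal sketch of that, assuming the stack lives in ~/immich with its Postgres data in a ./postgres bind mount (both paths are assumptions):

$ cd ~/immich
$ docker compose down     # stop the stack so the DB files are quiescent
$ rsync --update -ahr --info=progress2 ./postgres /mnt/backup/immich-db/
$ docker compose up -d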

I don't run any *arr so I don't know how they'd be handled.
But I did do the migration of Syncthing and Duplicati.
For Syncthing I just had to find the config path and copy it with the same command above.
(You might need to run chown on the new machine.)

For Duplicati it was easier, since it provides a way to export and import the configurations.

So depending on how the *arr programs handle their files, it can be as easy as finding their root directory and rsyncing it (see the sketch below).
Maybe this could also be done for Jellyfin.
Of course, be sure to look for all the config folders they need; some programs split them between their working directory, ~/.config, ~/.local, /etc, or some other custom path.
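A rough sketch of that kind of config sweep; the paths here are hypothetical examples, so verify where each service actually keeps its data before relying on it:

for d in ~/.config/syncthing /var/lib/jellyfin /var/lib/sonarr; do
    rsync -ah "$d" /mnt/backup/configs/
done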

EDIT: for the Jellyfin data, evaluate how hard it would be to recreate. Migrating it might be difficult, but it doesn't need the same level of backups as your Immich data, because Immich normally holds data you created that can't be found anywhere else.

Most of my series live only on the main Jellyfin drive.
But Immich is backed up 3-2-1: 3 copies of the data (I actually have 4), on at least 2 types of media (HDD and SSD), with 1 copy offsite (rclone encrypted into an e2 drive).

[–] VitabytesDev@feddit.nl 1 points 5 months ago (1 children)

Thanks for responding. I actually don't have Immich on the Raspberry Pi yet, so this will be the first time I install it and then import the photos. I don't care much about the migration, since I can just reconfigure the services. I want to ensure that if a drive fails, I can restore the data. I would try RAID, but I've read that "RAID is not backup". Or I could just run the command you provided in a cron job.
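A cron job for that is a one-liner in the crontab (crontab -e); the schedule and paths below are placeholders:

# nightly at 03:00; --info=progress2 dropped since nobody watches cron output
0 3 * * * rsync --update -ah --no-i-r /srv/media /mnt/backup/media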

[–] pe1uca@lemmy.pe1uca.dev 2 points 5 months ago

In that case I'd recommend you use immich-go to upload them, and still back up only Immich instead of your original folder: if something happens to your Immich library you'd have to recreate it manually, because Immich doesn't update its DB from the file system.
There was a discussion on GitHub about worries that Immich compresses uploads, but it was clarified that the uploaded files are saved as they are and only generated copies (thumbnails, transcodes) are modified, so you can safely back up its library.
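A hedged sketch of such an upload; the server URL and API key are placeholders, and immich-go's subcommand and flag names have changed between releases, so check immich-go --help for your version:

$ immich-go upload from-folder --server=http://192.168.1.10:2283 --api-key=YOUR_API_KEY /path/to/photos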

I'm not familiar with RAID, but yeah, I've also read it's mostly about uptime.

I'd also recommend you look at restic and Duplicati.
Both are backup tools; restic is a CLI and Duplicati is a service with a UI.
So if you want to create the cron jobs yourself, go for restic.
Though if you want to be able to read your backups manually, check how the data is stored first: I'm using Duplicati and it saves everything in files that have to be read back through Duplicati. I'm not sure I could just go and open them, unlike the data copied with rsync.
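For example, a minimal local restic setup could look like this (the repository path and password file are placeholders):

$ restic init --repo /mnt/backup/restic            # one-time repository setup
$ export RESTIC_PASSWORD_FILE=~/.restic-pass       # lets cron run without a prompt
$ restic -r /mnt/backup/restic backup /srv/photos /srv/media
$ restic -r /mnt/backup/restic forget --keep-daily 7 --keep-weekly 4 --prune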

[–] mlaga97@lemmy.mlaga97.space 5 points 5 months ago (1 children)

Restic and Borg are both sorta considered the 'standard' for incremental backups beyond filesystem snapshotting.

I use restic and it automatically handles stuff like snapshotting, compression, deduplication, and encryption for you.
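Listing and restoring are single commands too; a sketch, assuming a repository at /mnt/backup/restic:

$ restic -r /mnt/backup/restic snapshots                  # list point-in-time snapshots
$ restic -r /mnt/backup/restic restore latest --target /tmp/restore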

[–] kylian0087@lemmy.dbzer0.com 2 points 5 months ago (2 children)

For automated backups, definitely. For a one-time job I often just use rsync; it's the simplest thing to reach for quickly.

[–] mlaga97@lemmy.mlaga97.space 3 points 5 months ago

I do find rclone to be a bit more comprehensible for that purpose; rsync always makes me feel like I'm in https://xkcd.com/1168/

[–] lemmyvore@feddit.nl 1 points 5 months ago

If you literally mean one time, then rsync is fine-ish... provided you combine it with a checksum tool so you can verify it copied everything properly.

If you need to back up regularly, then you need something that can do deduplication, error checking, compression, and probably encryption too. Rsync won't cut it, unless you mean to cover each of those points with a different tool. But there are tools like Borg that can do all of them.
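A sketch of that kind of post-copy verification with standard tools (paths are placeholders):

$ cd /source && find . -type f -exec sha256sum {} + > /tmp/source.sha256
$ cd /dest && sha256sum --quiet -c /tmp/source.sha256    # prints only mismatches

rsync's own -c/--checksum flag can also be used on a second pass to force checksum comparison instead of the default size-and-mtime check.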

[–] r0ertel@lemmy.world 5 points 5 months ago

Restic and Borg seem to be the current favorites, but I really like the power and flexibility of Duplicity. I like that I can push to a wide variety of backends (I'm using the rsync one), it can do symmetric or asymmetric encryption, and I like that it can do incrementals with timed full backups. I don't like that it keeps a local cache of index files.

I back up to a Pi Zero with a big local disk and rsync the whole disk to another Pi at a relative's house over Tailscale. I've never needed the remote copy, but it's there.

I've had to do a single-directory restore once and it was pretty easy. I was able to restore to a new directory and move back only the files I had clobbered.
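A hedged sketch of that arrangement; the host, paths, and schedule are placeholders, and the restore flag is spelled --file-to-restore or --path-to-restore depending on the Duplicity version:

$ duplicity --full-if-older-than 1M /srv/data rsync://backupuser@pi0//mnt/disk/backup
$ duplicity restore --file-to-restore photos rsync://backupuser@pi0//mnt/disk/backup /tmp/photos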

[–] themachine@lemmy.world 4 points 5 months ago

I prefer restic for my backups. There's nothing inherently wrong with just making a copy if that's sufficient for you, though. Restic creates small point-in-time snapshots, as compared to a plain file copy, so in the event that you make a mistake (say, you accidentally delete something from the "live" copy and manage to propagate that to your backup) it's a non-issue: you simply restore from a previous snapshot.

These snapshots can also be compressed and deduplicated making them extremely space efficient.

[–] CaptDust@sh.itjust.works 3 points 5 months ago

I use rsync with a systemd timer. When I first installed the backup drive it took a while to populate the file system, but now it runs every Monday, finds the difference between the source and target drives, and pulls just the changes down for backup. It's pretty quick; it doesn't do any compression or anything like that.
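A minimal sketch of such a unit pair (names, schedule, and paths are placeholders):

# /etc/systemd/system/media-backup.service
[Unit]
Description=rsync media backup

[Service]
Type=oneshot
ExecStart=/usr/bin/rsync --update -ah --no-i-r /srv/media /mnt/backup/media

# /etc/systemd/system/media-backup.timer
[Unit]
Description=Run media-backup.service every Monday

[Timer]
OnCalendar=Mon 03:00
Persistent=true

[Install]
WantedBy=timers.target

Enable it once with systemctl enable --now media-backup.timer.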

[–] colifloro@lemmy.world 2 points 5 months ago

I'd try to stick to the official recommendations to back up Immich: https://immich.app/docs/administration/backup-and-restore/
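From memory, the heart of those docs is dumping the database while the stack is up and separately copying the upload folder; the container name and paths below may differ in your compose setup, so treat this as a sketch and defer to the linked page:

$ docker exec -t immich_postgres pg_dumpall --clean --if-exists --username=postgres | gzip > /mnt/backup/immich-db-$(date +%F).sql.gz
$ rsync -ah /srv/immich/library /mnt/backup/immich-library/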

[–] Andrzej@lemmy.myserv.one 1 points 5 months ago (1 children)

If you have the RAM for it, I would recommend going the Proxmox route. I made the switch this year, and now running daily container image backups is a doddle.

[–] VitabytesDev@feddit.nl 1 points 5 months ago (1 children)

Can you explain the setup a bit more? What VMs would I need to run?

[–] Andrzej@lemmy.myserv.one 2 points 5 months ago

Sure. The hardware is a cheap little Beelink with an N100 and 16 GB of RAM. Proxmox can do VMs, but it's primarily focused on LXCs, which are Linux containers. They share the kernel with the host, so they're very lightweight; you can spin up basically as many (say) Debian systems as you want. So I have Jellyfin in one container, Sonarr/Radarr in another (though you could put them in separate containers if you wanted), Transmission has a container, SABnzbd has a co- ... you get the idea lol.

The cool thing is that it's easy to mount drives/directories from the host, and have your containers share them that way.
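For example, a host directory can be bind-mounted into a container with Proxmox's pct tool; the container ID and paths here are placeholders:

$ pct set 101 -mp0 /srv/media,mp=/mnt/media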

Wrt backups, Proxmox has some built-in functionality you can run from the web UI. So I back up images of the LXCs to the external hard drive daily, then have a Borg container that backs up the backup directory to cloud storage.

It's also very convenient to make a quick backup before making any changes to a container — you can restore to a previous image with the click of a button.
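That built-in backup can also be driven from the shell with vzdump; a sketch, where 101 is a placeholder container ID and external-hdd a placeholder storage name:

$ vzdump 101 --storage external-hdd --mode snapshot --compress zstd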

[–] Decronym@lemmy.decronym.xyz 1 points 5 months ago* (last edited 5 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters   More Letters
LXC             Linux Containers
RAID            Redundant Array of Independent Disks (mass storage)
SSD             Solid State Drive (mass storage)

3 acronyms in this thread; the most compressed thread commented on today has 12 acronyms.

[Thread #804 for this sub, first seen 15th Jun 2024, 12:25] [FAQ] [Full list] [Contact] [Source code]

[–] geography082@lemm.ee 0 points 5 months ago

Rclone crypt, cron, and whatever cheap cloud service, or even a free one.
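A minimal sketch of that combo, assuming a crypt remote named "secret" was already set up via rclone config; the remote name, schedule, and paths are placeholders:

# crontab entry: nightly encrypted sync to the crypt remote
0 2 * * * rclone sync /srv/photos secret:photos --log-file /var/log/rclone-backup.log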