this post was submitted on 19 Nov 2023
24 points (96.2% liked)

Selfhosted


Black Friday is almost upon us and I'm itching to get some good deals on missing hardware for my setup.

My boot drive will also be VM storage and reside on two 1TB NVMe drives in a ZFS mirror. I plan on adding another SATA SSD for data storage. I can't add more storage right now, as my M90q can't be expanded easily.

Now, how would I best set up my storage? I have two ideas and could use some guidance. I want some NAS storage for documents, files, videos, backups, etc. I also need storage for my VMs, namely Nextcloud and Jellyfin. I don't want to waste NVMe space, so this would go on the SATA SSD as well.

  1. Pass the SSD to a VM running some NAS OS (OpenMediaVault, TrueNAS, plain Samba). I'd then set up different NFS/Samba shares for my needs. Jellyfin or Nextcloud would rely on the NFS share for their storage needs. Is that even possible, and if so, a good idea? I could easily access all files if needed. I don't know if there would be a problem with permissions or diminished read/write speeds, especially since there are a lot of small files on my Nextcloud.

  2. Split the SSD: pass one partition to my NAS VM and let Proxmox use the other to store virtual disks for my VMs. This is probably the cleanest option, but I can't easily resize the partitions later.

What do you think? I'd love to hear your thoughts on this!

all 22 comments
[–] tvcvt@lemmy.ml 10 points 1 year ago (1 children)

How about option 3: let Proxmox manage the storage and don’t set up anything that requires drive pass through.

TrueNAS and OMV are great, and I went that same VM NAS route when I first started setting things up many years ago. It’s totally robust and doable, but it also is a pretty inefficient way to use storage.

Here's how I'd do it in this situation: make your zpools in Proxmox, create one dataset for the stuff you'll use for VMs and one for the stuff you'll use for file sharing, and then make an LXC container that runs Cockpit with 45Drives' file sharing plugin. Bind mount the file-sharing dataset into it and you have the best of both worlds: incredibly flexible storage and a great UI for managing Samba shares.
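Off the top of my head, the host-side bits look something like this (pool, dataset, and container names here are just placeholders):

# pool on the data SSD (skip if you already created one during setup)
zpool create tank /dev/disk/by-id/ata-EXAMPLE_SSD

# one dataset for VM disks, one for file shares
zfs create tank/vmdata
zfs create tank/share

# register the VM dataset as storage in Proxmox
pvesm add zfspool vmdata --pool tank/vmdata --content images,rootdir

# bind mount the share dataset into the Cockpit LXC (ID 101 in this example)
pct set 101 -mp0 /tank/share,mp=/srv/share

Inside the container it's then just apt install cockpit plus 45Drives' cockpit-file-sharing plugin from their repo.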

[–] Pete90@feddit.de 1 points 1 year ago (2 children)

That's also something I was briefly considering. While waiting for hardware, I did basically that, or at least I think I did. I didn't use a bind mount, though, because I only have one drive for testing, so I created a virtual disk instead.

What exactly do you mean by bind mount? Mounting the dataset into the container? I didn't even know that this was possible. And what is a dataset? Sorry, I'm quite new to all this. Thanks!

[–] daftwerder@lemm.ee 1 points 1 year ago

If you create an LXC and then go to Resources --> Add --> Mount point, you can basically just mount a Proxmox drive or folder as a folder within the LXC environment.

[–] tvcvt@lemmy.ml 1 points 1 year ago (1 children)

A bind mount kind of shares a directory on the host with the container. To do it, unless something’s changed in the UI that I don’t remember, you have to edit the LXC config file and add something like:

mp0: /path/on/host,mp=/path/in/container

I usually make a sharing dataset and use that as the target.
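If you'd rather not edit the file by hand, the same thing should work from the CLI with pct (container ID and paths here are just examples):

pct set 101 -mp0 /tank/share,mp=/mnt/share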

[–] Pete90@feddit.de 2 points 1 year ago

Ah, thank you for clearing that up, much appreciated!

[–] asbestos@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

Definitely option 2 due to its simplicity and speed gains, but take some time to consider your needs and size the partitions accordingly.

[–] Pete90@feddit.de 2 points 1 year ago (1 children)

Yeah, that is the hardest part. I don't know exactly how much space will be needed for each use case. But in the end, I can just copy all my data somewhere else, delete the partitions, and resize them to accommodate my needs.

[–] wittless@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (1 children)

I personally created the ZFS zpool within Proxmox so I had all the space available to give to any of the containers I needed. Then when you create a container, you add a mount point, select the pool as the source, and specify the size you want to start with. As your needs grow, you can add space to that mount point within Proxmox.

Say you have a 6 TB zpool and you create a dataset that is allocated 1 TB. Within that container, you will see a mount point with a size of 1 TB, but in Proxmox you will see that you still have 6 TB free because that space isn't used yet. Your containers are basically just quota'd directories inside the Proxmox host's filesystem when you use a zpool, and you are free to go into that container's settings and add space to that quota as your needs grow.
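To make that concrete (storage name, container ID, and sizes are just examples): a 1 TB mount point backed by ZFS storage shows up in the container config roughly as

mp0: local-zfs:subvol-101-disk-0,mp=/data,size=1T

and when you need more room later you can grow it from the host with

pct resize 101 mp0 2T

which, if I remember right, just bumps the refquota on the underlying dataset.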

[–] Pete90@feddit.de 1 points 1 year ago (1 children)

Ah, very good to know. Then it makes sense to use this approach. Now I only need to figure out whether I can give my NAS access to the drives of other VMs, as I might want to download a copy of that data easily. I guess there might be a problem with permissions and file locking, but I'm not sure. I'll look into this option, thanks!

[–] wittless@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

I have two containers pointing to the same bind mount. I just had to manually edit the config files in /etc/pve/lxc so that both pointed to the same dataset. I have not had any issues, but you do have to pay attention to file permissions, etc. I have one container that writes and the other is read-only for the most part, so I don't worry about file locking there. I did it this way because, if I recall correctly, you can't serve NFS shares from within a container without giving it privileged status, and I didn't want to do that.
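In case it helps, the two entries end up looking something like this (IDs and paths are made up), with ro=1 keeping the second container read-only:

# /etc/pve/lxc/101.conf (the container that writes)
mp0: /tank/share,mp=/mnt/share

# /etc/pve/lxc/102.conf (read-only consumer)
mp0: /tank/share,mp=/mnt/share,ro=1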

[–] Pete90@feddit.de 1 points 1 year ago

Excellent, I'll probably do that then. If I think about it, only one container needs write access, so I should be good to go. Users/permissions will be the same, since it's Docker and I have one user for it. Awesome!

[–] Carunga@feddit.de 1 points 1 year ago (1 children)

My setup is pretty much option 1 and I have no issues with it. You can easily mount NFS shares as Docker volumes (I'm doing that for Jellyfin and Nextcloud), but you need to get the permissions right. But I am no expert, just a hobbyist not smart enough for a better solution :)
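Something like this, for example (the server address and export path are just placeholders):

docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,nfsvers=4 \
  --opt device=:/export/media \
  jellyfin-media

docker run -d --name jellyfin -v jellyfin-media:/media jellyfin/jellyfin

As long as the UID/GID on the NFS export match what the container runs as, it behaves like any other volume.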

[–] Pete90@feddit.de 1 points 1 year ago

It's good to know that it works. I will probably play around for a bit once I get my hardware. Thanks for letting me know!

[–] krolden@lemmy.ml 1 points 1 year ago (1 children)

https://forum.proxmox.com/threads/virtiofsd-in-pve-8-0-x.130531/

I haven't tried it yet, but it is definitely something I want to do when I rebuild some of my services. Apparently it's on its way to being implemented in the Proxmox web UI, but for now you have to hack it in. Or at least that was the case when I read the thread a few months ago.

[–] Pete90@feddit.de 1 points 1 year ago

That sounds very interesting and I'll definitely look into it. Thank you!

[–] StopSpazzing@lemmy.world 1 points 1 year ago (1 children)

I could have sworn I read you shouldn't use ZFS on drives smaller than 2 TB. IDK, maybe I'm going crazy.

[–] Trainguyrom@reddthat.com 1 points 1 year ago (2 children)

I think you're thinking of the rule of thumb for RAID5, or its ZFS equivalent, raidz1.

[–] Pete90@feddit.de 1 points 1 year ago (1 children)

I'm curious. What's the problem with small drives in RAID5? Too many writes for such a small drive?

[–] Trainguyrom@reddthat.com 1 points 1 year ago (1 children)

It's actually the opposite: with only a single drive of parity, once your hard drives are larger than ~2 TB, the resilver time for the array is long enough that there's an uncomfortable chance of an additional drive failure while it's resilvering.
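As a rough illustration (assuming ~150 MB/s of sustained rebuild throughput): a 2 TB drive resilvers in about 2,000,000 MB / 150 MB/s ≈ 3.7 hours, while a 12 TB drive is closer to 22 hours, all of it spent hammering the surviving drives while the array has no redundancy left.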

[–] Pete90@feddit.de 2 points 1 year ago

That makes sense, especially when the drives are equally old. Thanks for explaining it!

[–] StopSpazzing@lemmy.world 1 points 1 year ago

Ah you are right! My bad. Thanks for clearing that up!