this post was submitted on 07 Nov 2024
31 points (89.7% liked)

Selfhosted


Hi folks,

You all have been instrumental to my self-hosting journey, both as inspiration and as a knowledge base when I'm stumped despite my research.

I'm finding a variety of opinions on this, and I'm curious what folks here have to say.

I'm running a Debian server, accessible only from within my home network, with a number of Docker containers like paperless-ngx, Jellyfin, Focalboard, etc. Most of the data actually resides on my NAS, accessed via NFS.

  1. Is /mnt or /media the correct place to mount the directories? And is mounting the share on the host and mapping the mount point into Docker with a bind mount (roughly like the sketch after this list) the best path here?

  2. Additionally, where is the best place to keep my docker-compose? I understand that things will work even if I pick weird locations, but I also believe in the importance of convention. Should this be in the home directory of the server user? I've seen a number of locations mentioned in search results.

  3. Do I have to change the file permissions in the locations where I store the docker-compose files, or for any config files that don't sit on the other end of the NFS mount?
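
For reference, here's roughly what I'm picturing for question 1; the service name and paths are just placeholders for my setup:

services:
  someapp:
    image: example/someapp                      # placeholder image
    volumes:
      # the NFS share is mounted on the host (e.g. via fstab) and bind-mounted into the container
      - /mnt/nas/someapp:/data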

Any other resources you'd like to share are welcome. I appreciate the helpfulness of this community.

top 15 comments
[–] scrubbles@poptalk.scrubbles.tech 19 points 3 weeks ago (3 children)

I wouldn't worry about mounting your NFS shares directly on the host unless you need to. Compose has an operator similar to k8s that lets Docker itself manage the shares, which is insanely useful if you lose your host. Then you don't have to maintain piles of scripts to mount them.

https://stackoverflow.com/questions/45282608/how-to-directly-mount-nfs-share-volume-in-container-using-docker-compose-v3

version: "3.2"

services:
  rsyslog:
    image: jumanjiman/rsyslog
    ports:
      - "514:514"
      - "514:514/udp"
    volumes:
      - type: volume
        source: example
        target: /nfs
        volume:
          nocopy: true
volumes:
  example:
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,nolock,soft,rw"
      device: ":/docker/example"
[–] Lem453@lemmy.ca 8 points 3 weeks ago (1 children)

This is better as well because it prevents the container from starting if the mount doesn't work. Some apps will freak out if they lose their data, and apps that index files, like Jellyfin, might start deleting files from the index because the library now appears empty.

NFS mount via docker compose is the best way to go

[–] Bakkoda@sh.itjust.works 4 points 3 weeks ago

And some start fine, find /mnt/thefolder unmounted, and fill it up with shit. Don't ask me how I know.

[–] bezoar@lemmy.world 4 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Keep in mind that if you change your NFS server IP in the compose file, you will also need to delete the associated docker volume with "docker volume rm" before restarting. This is a potential issue if your old NFS server is still active: you'll still be accessing the old one. If you have a lot of services and occasionally switch NFS servers (I do this for redundancy; they are kept in sync), it might be easier just to mount NFS on the host and do path:path bind mounts.

DNS is of course the preferred approach
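
For example (hostname and export path here are made up), the volume definition from the example above can point at a DNS name instead of a hard-coded IP, and if the address ever does change you remove the stale volume before bringing the stack back up:

volumes:
  example:
    driver_opts:
      type: "nfs"
      o: "addr=nas.home.lan,nolock,soft,rw"   # DNS name instead of a fixed IP
      device: ":/docker/example"
# if the server address changes later, remove the stale volume before restarting:
#   docker compose down
#   docker volume rm <project>_example        # compose prefixes volume names with the project name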

[–] KyuubiNoKitsune@lemmy.blahaj.zone 1 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I found this to perform extremely poorly. If you plan on doing anything that requires high throughput, don't use the docker NFS operator.

[–] mbirth@lemmy.ml 2 points 3 weeks ago (1 children)

There’s no difference between using a volume in Compose to mount a share and putting it in your server’s fstab file. Both do the same kind of mount.

[–] KyuubiNoKitsune@lemmy.blahaj.zone 1 points 3 weeks ago (1 children)

It's doing something different. I was using it to mount an AWS FSx for ZFS share on a beefy machine (1.2 GB/s network throughput) and was getting less than 50 MB/s throughput using Docker to mount it, but the full 1.2 GB/s when it was mounted outside and mapped into the container.

[–] mbirth@lemmy.ml 1 points 3 weeks ago (1 children)

How did you mount it outside the cluster? Did you have a look at the mtab and use the exact same options in the compose file?

It's been a few months, but as far as I remember I used all the same mounting options.
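
For anyone trying to reproduce the comparison: copy whatever the working host mount shows in /proc/mounts (or mtab) into the volume options. Something like this, where the NFS version and buffer sizes are illustrative rather than recommendations:

volumes:
  media:
    driver_opts:
      type: "nfs"
      # mirror the options shown for the working host mount in /proc/mounts, for example:
      o: "addr=10.40.0.199,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600"
      device: ":/export/media"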

[–] joshhsoj1902@lemmy.ca 8 points 3 weeks ago* (last edited 3 weeks ago)

One thing to consider with NFS is how stable your network is.

I've moved away from storing application files on my NAS and instead I store them locally where I run the application.

Things like Jellyfin media or Paperless documents can stay on the NAS and be accessed via NFS, but for the config, database, and other files the apps create as part of their operation, things can get into a bad state if the network drops at an unexpected time.

Instead, I set up backup cronjobs that back up those files to the NAS nightly.

I agree with the other commenters about mounting the NFS share right in docker compose. It does work great once you get it set up.
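
As a rough sketch of that split (image, paths, and the NFS address are placeholders), config and the database stay on local disk while only the bulk media comes in over NFS:

services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /opt/docker/jellyfin/config:/config     # config/db on local disk
      - media:/media                            # bulk files over NFS

volumes:
  media:
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,nolock,soft,rw"
      device: ":/export/media"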

[–] mhzawadi@lemmy.horwood.cloud 5 points 3 weeks ago

I have all external mounts under /mnt; if a container needs one, the compose file points it at the local mount.
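
For example (paths here are just illustrative), the host-level mount shows up as an ordinary bind mount in the compose file:

services:
  someservice:
    image: example/someservice                  # placeholder image
    volumes:
      - /mnt/media:/media                       # /mnt/media is the NFS share already mounted on the host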

All my compose files and stacks are in a git repo; the repo lives in my home dir and is pulled from my Gogs server, which only I can access.

[–] ramielrowe@lemmy.world 2 points 3 weeks ago* (last edited 3 weeks ago)

In general, on bare-metal, I mount below /mnt. For a long time, I just mounted in from pre-configured host mounts. But I use Kubernetes, and you can directly specify an NFS mount there, so I eventually migrated everything to that as I made other updates. I don't think it's horrible to mount from the host, but if docker-compose supports directly defining an NFS volume, that's one less thing to set up if you need to re-provision your docker host.

(quick edit) I don't think docker compose reads and re-reads compose files. They're read when you invoke docker compose but that's it. So...

If you're simply invoking docker compose to interact with things, then I'd say store the compose files wherever makes the most sense for your process. Maybe think about setting up a specific directory on your NFS share and mounting that on your docker host(s). I would also consider version-controlling your compose files. If you're concerned about secrets, store them in encrypted env files. Something like SOPS can help with this.

As long as the user invoking docker compose can read the compose files, you're good. When it comes to mounting data into containers from NFS: yes, permissions will matter, and it can be a pain depending on how flexible the container you're using is about user and filesystem permissions.
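
As one sketch of what that alignment can look like (the UID/GID, image, and export path are placeholders; some images instead take PUID/PGID environment variables for the same purpose):

services:
  app:
    image: example/app                          # placeholder image
    user: "1000:1000"                           # match the UID:GID that owns the files on the NFS export
    volumes:
      - data:/data

volumes:
  data:
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,rw"
      device: ":/export/appdata"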

[–] mbirth@lemmy.ml 2 points 3 weeks ago

I’d suggest /opt/docker/_compose/ for all the compose files. Or, if you keep all the config files for your containers on your NAS, maybe create a share there and put all yml files in it, then mount it on the host. This way everything is on your NAS and nothing is lost if the host freaks out.

And I’d add the NFS mounts to the compose files as well. When specifying volumes, you can use anything the host OS has a mount.xxx command for. Docker will take care of mounting everything.
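
For instance, the same pattern works for an SMB/CIFS share as long as the host has the mount.cifs helper installed (server, share name, and credentials below are placeholders):

volumes:
  smbshare:
    driver_opts:
      type: "cifs"
      o: "addr=nas.home.lan,username=myuser,password=mypass,vers=3.0"
      device: "//nas.home.lan/share"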

[–] Strit@lemmy.linuxuserspace.show 1 points 3 weeks ago

1: I have been using subfolders of /mnt for different things when self-hosting. Different external drives go in different subfolders of /mnt. For example, media drives are mounted at /mnt/media, data drives at /mnt/data, etc.

2: I'm lazy. Mine are located in my server user's home folder. I then use scripts to sync them between desktop and server.

3: Just make sure that your server user, the docker user, and the root user can all read (and maybe write to) them.