this post was submitted on 27 Jun 2024
19 points (95.2% liked)

Selfhosted


Hi! I'm starting out with self-hosting. I was setting up Grafana for system monitoring of my mini-PC. However, I ran into the issue of keeping credentials secure in my Docker Compose file. I ended up using Docker Swarm since it was the path of least resistance. I've managed to set up a Grafana/Prometheus/Node Exporter stack and it's working well.

However, before continuing with Docker Swarm, I want to check whether this is a good idea or whether I'm potentially digging myself into a hole. Some of the options I've found while searching:

  • Continue with Docker Swarm and look into automation of stack/swarm in future

    • Ansible playbook has plugins for Docker Swarm.
  • Self-hosted vault: I want to avoid hosting my own secret/password manager at the moment.

  • Kubernetes (k8s / k3s) - I don't wanna 😭

    • More seriously, I'm actually learning this for work but don't see the point of implementing it at home. The extra overhead doesn't seem worth it for a single node cluster.
  • ~~Live dangerously - Store credentials in plaintext. Also use admin as the password for everything~~

Edit: Most of the services I'm planning on hosting will likely run as single-replica services.
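
For reference, a rough sketch of the kind of stack file I ended up with - service and secret names are placeholders rather than my exact config, and it assumes the secret was created beforehand with `docker secret create`:

```yaml
# Sketch of a Swarm stack that reads the Grafana admin password from a
# Docker secret. Names are placeholders; deploy with `docker stack deploy`.
version: "3.8"

services:
  grafana:
    image: grafana/grafana-oss:latest
    environment:
      # Grafana's __FILE convention reads the value from the mounted secret.
      GF_SECURITY_ADMIN_PASSWORD__FILE: /run/secrets/grafana_admin_password
    secrets:
      - grafana_admin_password
    ports:
      - "3000:3000"

secrets:
  grafana_admin_password:
    external: true  # created earlier with `docker secret create`
```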

all 12 comments
[–] farcaller@fstab.sh 9 points 4 months ago (1 children)

I run k3s in my homelab as a single node cluster. I’m very familiar with kubernetes in general, so it's just easier for me to reason with a control plane.

Some of the benefits I find useful:

  • ArgoCD set to fire and forget will automatically update software versions as they happen. I use nix to lower the burden of maintaining my chart forks. Sometimes they break, but
  • VictoriaMetrics easily collects all the metrics from everything in the cluster with very little manual tinkering, so I am notified when things break, and
  • zfs-localpv provides in-cluster management for data snapshots, so when things do break I can easily roll back to a known good state.

k3s is, of course, a memory hog; I'd estimate it and Cilium (my CNI of choice) eat up about 2 GB of RAM and a bit under one core. It's something you can tune to some extent, though. But then, I can easily do pod routing via VPN and create services that will automatically get a public IP from my endless IPv6 pool and get that address assigned a DNS name in like 10 lines of YAML.
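
Roughly that shape of YAML, for illustration - the hostname, selector, and ports below are placeholders, and the exact annotations depend on your load balancer and external-dns setup:

```yaml
# Sketch of a Service that gets a public address and a DNS record.
# Placeholder names; assumes a LoadBalancer implementation plus external-dns.
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.org
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 443
      targetPort: 8443
```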

[–] mhzawadi@lemmy.horwood.cloud 5 points 4 months ago

I use swarm in my home lab. I don't have any docker things at work, so Kubernetes is way more than I want to manage.

All my stacks are in a git repo, and I have an Ansible playbook to update them if needed. I also have most things tracked on NewReleases (https://newreleases.io/) so I know when something needs an update; then I can either update the git repo by hand or use Ansible.

Also have a look at Docker contexts; you can manage your swarm from a remote location.
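
If it helps, the core of such a playbook can be a single task - the stack name and path here are placeholders, and it relies on the community.docker collection:

```yaml
# Illustrative Ansible task: (re)deploy a Swarm stack from a compose file
# tracked in the git repo. Stack name and path are placeholders.
- name: Deploy monitoring stack to the swarm
  community.docker.docker_stack:
    name: monitoring
    state: present
    compose:
      - /opt/stacks/monitoring/docker-compose.yml
```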

[–] Decronym@lemmy.decronym.xyz 4 points 4 months ago* (last edited 4 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

| Fewer Letters | More Letters |
| --- | --- |
| DNS | Domain Name Service/System |
| Git | Popular version control system, primarily for code |
| IP | Internet Protocol |
| NAS | Network-Attached Storage |
| NFS | Network File System, a Unix-based file-sharing protocol known for performance and efficiency |
| SMB | Server Message Block protocol for file and printer sharing; Windows-native |
| VPN | Virtual Private Network |
| k8s | Kubernetes container management package |

8 acronyms in this thread; the most compressed thread commented on today has 13 acronyms.


[–] MigratingtoLemmy@lemmy.world 3 points 4 months ago

I'm using k8s at work and am planning to set up k3s at home, because even though PVCs and Ingresses are not the easiest to grasp and write in templates, I think the way I want to do storage is beyond the capabilities of Podman, which I used earlier. Also, it's Kubernetes on either end, so the knowledge transfers directly.
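
For anyone curious, a bare PVC itself is short - the verbosity comes from wiring it through StorageClasses and Helm templates. A minimal sketch, with placeholder name, class, and size (k3s ships a local-path provisioner by default):

```yaml
# Minimal PersistentVolumeClaim sketch; name, class, and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path  # k3s's bundled local-path provisioner
  resources:
    requests:
      storage: 5Gi
```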

[–] emhl@feddit.org 3 points 4 months ago (1 children)

What are your reasons for using Docker Swarm instead of plain Docker if you don't want to do replication?

[–] UncommonBagOfLoot@lemmy.world 3 points 4 months ago (2 children)

To use Docker secrets so that the secrets are encrypted on the host. Using Docker Swarm was the path of least resistance to set up my system monitoring stack.

Docker Compose can use secrets without Swarm, but my understanding is that those are stored in plaintext on the host.

Docker Swarm encryption doesn't work for your use case. The documentation says that the secret is stored encrypted but can be decrypted by the swarm manager nodes and by nodes running services that use the secret, and both of those apply to your single node. If you're not having to unlock the swarm on startup, that means the encrypted value and the decryption key live next to each other on the same computer, and anyone who has access to the encrypted secrets can also decrypt them.
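
For comparison, this is roughly what file-based secrets look like in plain Compose with no Swarm involved - the service and paths are placeholders. The source is just an ordinary file on the host, which is where the plaintext caveat comes from:

```yaml
# Sketch: file-based secrets in plain Docker Compose (no Swarm).
# The secret source is a regular file on the host; anyone who can read
# that path can read the secret.
services:
  db:
    image: postgres:16
    environment:
      # The official postgres image reads *_FILE variants from a file.
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
```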

[–] Lem453@lemmy.ca 4 points 4 months ago* (last edited 4 months ago)

When I was starting out I almost went down the same pathway. In the end, docker secrets are mainly useful when the same key needs to be distributed around multiple nodes.

Storing the keys locally in an env file that is only accessible to the docker user is close enough to the same thing for home use and greatly simplifies your setup.

I would suggest using a folder for each stack that contains one Docker Compose file and one env file. The env file contains the passwords; the rest of the env variables are defined in the compose file itself. Exclude the env files from your git repo (if you use one for version control) so you never check a secret into git (in practice I have one folder for compose files that is in git, and my env files are stored in a different folder that isn't).
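
A sketch of what that can look like - file names and the variable are placeholders:

```yaml
# Illustrative layout: one folder per stack, env file kept out of git.
#   grafana/
#   ├── docker-compose.yml   (tracked in git)
#   └── .env                 (not tracked; e.g. GRAFANA_ADMIN_PASSWORD=...)
#
# docker-compose.yml: non-secret variables inline, secrets pulled from
# the .env file via ${...} substitution at deploy time.
services:
  grafana:
    image: grafana/grafana-oss:latest
    environment:
      GF_USERS_ALLOW_SIGN_UP: "false"
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_ADMIN_PASSWORD}
    ports:
      - "3000:3000"
```

Then just make sure the .env file is only readable by the docker user.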

I do this all via Portainer; it will set up the above folder structure for you. Each stack is a compose file that Portainer pulls from my self-hosted Gitea (on another machine). Portainer creates an env file itself when you add the env variables from the GUI.

If someone gets access to your system and is able to access the env file, they already have high-level access and your system is compromised regardless of whether you have the secrets encrypted via swarm or not.

[–] phillipgreenii@lemmy.world 2 points 4 months ago

My personal experience with Swarm has been terrible, and I would not recommend it to anyone. For me it is full of foot-guns, and I found it difficult to debug when things went wacky. The last time I checked, the project wasn't officially dead, but it feels like it. There don't seem to be many people using it, because I find it difficult to find answers. In addition, there was originally a standalone Docker Swarm project, but then it was kinda/sorta implemented into Docker itself as swarm mode. The two work similarly, but not the same, and I often got hung up following directions for the wrong one. I'm in the middle of migrating to k3s and Nix.

Others have talked about having a good experience with it, but that wasn't my story. If Docker and Docker Compose work for you, then stick with them. If you want something more, I would recommend looking at k3s before jumping into Docker Swarm.

One additional note: I have multiple nodes, which is why I went to Docker Swarm instead of sticking with Docker Compose. Having only one node might hide some of the issues I had/have with Docker Swarm.

[–] RegalPotoo@lemmy.world 2 points 4 months ago

I was in the same place as you a few years ago - I liked swarm, and was a bit intimidated by kubernetes - so I'd encourage you to take a stab at kubernetes. Everything you like about swarm, kubernetes does better, and tools like k3s make it super simple to get set up. There *is* a learning curve, but I'd say it's worth it. Swarm is more or less a dead-end tech at this point, and there are a lot more resources about kubernetes out there.

[–] lal309@lemmy.world 2 points 4 months ago

I don't have an answer for you, but I have a question instead. When I attempted to do Swarm, my biggest challenge was shared storage. I was attempting to run a swarm with shared storage on a NAS. I literally could not run apps and ran into a ton of problems running stacks (for the NAS share I tried both SMB and NFS). How did you get around this problem?