this post was submitted on 07 Sep 2025
83 points (97.7% liked)

Selfhosted


Hello everyone,

I finally managed to get my hands on a Beelink EQ 14 to upgrade from the RPi running DietPi that I have been using for many years to host my services.

I have always been interested in using Proxmox, and today is the day. The only problem is I am not sure where to start. For example, do you spin up a VM for every service you intend to run? Do you set up storage as ext4, btrfs, or ZFS? Do you attach external HDDs/SSDs to expand your storage (beyond the 2 PCIe slots in the Beelink, in this example)?

I’ve only started reading up on Proxmox today, so I am by no means knowledgeable on the topic.

I hope to hear how you set up yours and how you use it for hosting all your services (Nextcloud, Vaultwarden, cgit, Pi-hole, Unbound, etc.) and your “Dos and Don’ts”.

Thank you 😊

[–] sj_zero@lotide.fbxl.net 3 points 4 hours ago

I moved to Proxmox a while back and it was a big upgrade for my setup.

I do not use VMs for most of my services. Instead, I run LXC containers. They are lighter and perfect for individual services. To set one up, you need to download a template for an operating system. You can do this right from the Proxmox web interface. Go to the storage that supports LXC templates and click the Download Templates button in the top right corner. Pick something like Debian or Ubuntu. Once the template is downloaded, you can create a new container using it.
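The same template download and container creation can also be done from the Proxmox host shell; the template filename, VMID, and settings below are examples and will differ on your system:

```shell
# Refresh the list of available container templates
pveam update
pveam available --section system

# Download a Debian template to the 'local' storage
# (exact filename/version will differ; check `pveam available`)
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

# Create an unprivileged container from it
# (VMID 201, hostname, storage, and network settings are examples)
pct create 201 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname myservice \
  --memory 512 --cores 1 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 201
```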

The difference between VMs and LXC containers is important. A VM emulates an entire computer, including its own virtual hardware and kernel. This gives you full isolation and lets you run completely different operating systems such as Windows or BSD, but it comes with a heavier resource load. An LXC container just isolates a Linux environment while running on the host system’s kernel. This makes containers much faster and more efficient, but they can only run Linux. Each container can also have its own IP address and act like a separate machine on your network.

I tend to keep all my services in LXC containers, and I run one VM which I use as a jump box I can hop into if need be. It's a pain getting X11 working in a container, so the VM makes more sense.

Before you start creating containers, you will probably need to create a storage pool. I named mine AIDS because I am an edgelord, but you can use a sensible name like pool0 or data.

Make sure you check the Start at boot option for any container or VM you want to come online automatically after a reboot or power outage. If you forget this step, your services will stay offline until you manually start them.
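If you prefer the CLI, the same start-at-boot flag can be set like this (the VMIDs are examples):

```shell
# Autostart container 201 and VM 100 after the host boots
pct set 201 --onboot 1
qm set 100 --onboot 1
```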

Expanding your storage with an external SSD works well for smaller setups. Longer term, you may want to use a NAS with fast network access. That lets you store your drive images centrally and, if you ever run multiple Proxmox servers, configure hot standby so one server can take over if another fails.
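As a sketch of hooking up a NAS this way, an NFS export can be registered as Proxmox storage roughly like so (the server address, export path, and storage name are made up):

```shell
# Register an NFS share on the NAS as shared Proxmox storage
pvesm add nfs nas-nfs \
  --server 192.168.1.50 \
  --export /mnt/tank/proxmox \
  --content images,backup

# Verify the storage is active
pvesm status
```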

I do not use hot standby myself. My approach is to keep files stored locally, then back them up to my NAS. The NAS in turn performs routine backups to an external drive. This gives me three copies of all my important files, which is a solid backup strategy.

[–] Ron@zegheteens.nl 1 points 7 hours ago

It depends a bit on your needs. My Proxmox setup has multiple nodes (computers), each with local storage (two drives in a ZFS mirror), and they all use a TrueNAS server as an NFS host for data storage. For some things I use containers (LXC); for other things I use VMs.

[–] possiblylinux127@lemmy.zip 1 points 15 hours ago* (last edited 6 hours ago) (1 children)

Install Proxmox with ZFS

Next, configure the no-subscription repository or buy a subscription
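On Proxmox VE 8 (Debian bookworm) that roughly means disabling the enterprise lists and adding the no-subscription one; the file paths and suite names below assume PVE 8 and may differ on other versions:

```shell
# Disable the enterprise repos (they return 401 without a subscription)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list

# Add the no-subscription repo
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

apt update && apt full-upgrade
```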

[–] interdimensionalmeme@lemmy.ml -1 points 9 hours ago* (last edited 9 hours ago) (2 children)

Is there any way to remove ZFS and Ceph? They cause errors and taint the kernel.

https://itsfoss.com/linus-torvalds-zfs/

[–] possiblylinux127@lemmy.zip 4 points 6 hours ago (1 children)

There isn't anything better than ZFS at the moment. Having a tainted kernel doesn't really mean much.

[–] interdimensionalmeme@lemmy.ml 1 points 12 minutes ago

Except for slightly better deduplication, I don't see what justifies the extra complexity and living under the bad aura of Oracle. LVM does almost everything ZFS does; it's just less abstracted, which I actually like, because I want to know what hard drive my stuff is on, not some mushy file cloud that either all works or is all gone.

[–] 3dcadmin@lemmy.relayeasy.com 4 points 7 hours ago

Whilst I respect Linus on many things, he is always opinionated to the max. I seem to remember him doing the same over hardware RAID and then software RAID over the years. So, Linus, what would you like us to use for some data security, eh? I should also point out that this article is from 2023, a long time ago, and a whole host of the issues he raises are simply no longer true.

[–] drkt@scribe.disroot.org 16 points 1 day ago (4 children)

I recommend you use containers instead of VMs when possible, as VMs have a huge overhead by comparison, but yes: each service gets its own container, unless two services need to share data. My music container, for example, hosts Gonic, slskd, and Samba.

[–] possiblylinux127@lemmy.zip 1 points 15 hours ago

I wouldn't do that as it complicates things unnecessarily. I would just run a container runtime inside LXC or VM.

[–] MangoPenguin@lemmy.blahaj.zone 5 points 1 day ago* (last edited 1 day ago)

There is barely any overhead with a Linux VM; a Debian minimal install only uses about 30MB of RAM! As an end user, I find performance to be very similar with either setup.

[–] modeh@piefed.social 6 points 1 day ago (1 children)
[–] drkt@scribe.disroot.org 11 points 1 day ago (2 children)

Correct.

Side note: people will tell you not to put Docker in an LXC, but fuck 'em. I don't want to pollute my hypervisor with Docker's bullshit, and the performance impact is negligible.
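For anyone trying this, Docker inside an LXC usually needs nesting (and often keyctl) enabled on the container; the VMID here is an example:

```shell
# Enable nesting/keyctl so Docker can run inside container 201
pct set 201 --features nesting=1,keyctl=1
pct reboot 201
```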

[–] felbane@lemmy.world 3 points 21 hours ago* (last edited 21 hours ago)

I wouldn't recommend running Docker/Podman in LXC, but that's just because it seems to run better as a full VM in my experience.

No sense running it in the hypervisor, agreed.

LXC is great for everything else.

[–] Hominine@lemmy.world 6 points 1 day ago

There are dozens of us!

[–] zingo@sh.itjust.works 0 points 1 day ago* (last edited 1 day ago) (1 children)

as VMs have a huge overhead by comparison.

Not at all. The benefits outweigh the slightly increased RAM usage by a huge margin.

I have Urbackup running in a DietPi VM. I have it set for 256MB of RAM, and that includes the OS and the Urbackup service. It works perfectly fine.

I have an Alpine VM that runs 32 Docker containers using about 3.5GB of RAM. I wouldn't call that bloat by any means.

[–] drkt@scribe.disroot.org 3 points 1 day ago* (last edited 1 day ago) (1 children)

A fresh Debian container uses 22 MiB of RAM. A fresh Debian VM uses 200+ MiB of RAM.
A VM has to translate every single hardware interaction; a container doesn't.

I don't want to fuck flies about the definition of 'huge' with you, but that's kind of a huge difference.

[–] zingo@sh.itjust.works -4 points 1 day ago* (last edited 1 day ago)

Translate? You know that a CPU sits idle most of the time, right?

What kind of potato are you running? Also, how many hundred services do you run on it anyway, complaining about 200MB? You'd be better off running Docker on bare metal if you're that worried.

Do you know how much RAM Windows 11 uses on idle?

WTF

[–] jubilationtcornpone@sh.itjust.works 13 points 1 day ago (2 children)

I use one VM per service. WAN facing services, of which I only have a couple, are on a separate DMZ subnet and are firewalled off from the LAN.

It's probably a little overkill for a self-hosted setup, but I have enough server resources, experience, and paranoia to support it.

[–] anamethatisnt@sopuli.xyz 9 points 1 day ago (1 children)

I prefer running true VMs too, but it is resource intensive.
Playing with LXCs and Docker could allow one to run more services on a little Beelink.

[–] jubilationtcornpone@sh.itjust.works 3 points 1 day ago* (last edited 1 day ago) (1 children)

Yeah, with something that size you're pretty much limited to containers.

Edit: Which is totally fine, OP. Self hosting is an opportunity to learn and your setup can be easily changed as your needs change over time.

[–] lucas@startrek.website 2 points 9 hours ago (1 children)

Am I looking at the wrong device? The Beelink EQ15 looks like it has an N150 and 16GB of RAM? That's plenty for quite a few VMs. I run an N100 mini-PC with only 8GB of RAM and about half a dozen VMs and a similar number of LXC containers. As long as you're careful about only provisioning what each VM actually needs, it can be plenty.

In this situation it's not necessarily that it's the "right" or "wrong" device. The better question is, "does it meet your needs?" There are pros and cons to running each service in its own VM. One of the cons is the overhead consumed by the VM OS. Sometimes that's a necessary sacrifice.

Some of the advantages of running a system like Proxmox are that it's easily scalable and you're not locked into specific hardware. If your current Beelink doesn't prove to be enough, you can just add another one to the cluster or add a different host and Proxmox doesn't care what it is.

TLDR: it's adequate until it's not. When it's not, it's an easy fix.

[–] modeh@piefed.social 1 points 1 day ago (2 children)

I have a couple of publicly accessible services (vaultwarden, git, and searxng). Do you place them on a separate subnet via proxmox or through the router?

My understanding of networking is solid enough to properly set up OpenWrt with inbound and outbound VPN tunnels along with policy-based routing, and that's where my networking knowledge ends.

[–] anamethatisnt@sopuli.xyz 2 points 1 day ago (1 children)

Unless you want to expose services to others, my recommendation is always to hide your services behind a VPN connection.

[–] modeh@piefed.social 3 points 1 day ago (1 children)

I travel internationally, and some of the countries I've been to have been blocking my WireGuard tunnel back home, preventing me from accessing my vault. I tried setting it up with Shadowsocks and broke my entire setup, so I ended up resetting it.

Any suggestions that are not Tailscale?

[–] anamethatisnt@sopuli.xyz 1 points 1 day ago

I find that setting up an OpenVPN server with self-signed certificates plus username and password login works well. You can even run it on tcp/443 instead of tcp/1194 if you want to make it less likely to be blocked.
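A minimal sketch of the relevant server.conf lines for that setup, assuming certificates are already generated and the PAM plugin path matches your distro:

```
# /etc/openvpn/server/server.conf (fragment)
port 443
proto tcp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
# Require username/password on top of certificates (plugin path may vary)
plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so login
```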

@modeh We should talk. I am using Proxmox and #openwrt, and I am setting up a DMZ for public services with external ports exposed (but failing).

[–] hobbsc@lemmy.sdf.org 4 points 1 day ago

i have very few services and tend to lean into virtual machines instead of containers out of habit. i have proxmox running on an old mini-pc that needs to be replaced at some point. 16GB of RAM in it, 4 cores on the CPU (it's an i3 at 2ghz), and a 100GB SSD.

VMs and services are as follows:

  • ubuntu vm
    • runs my omada controller in docker
    • used to run all of my containers in docker but i migrated them to podman
  • fedora vm
    • runs several containers via podman
      • alexandrite, where i'm composing this now!
      • uptime kuma
      • redlib for browsing reddit
      • kanboard for organizing my contracting work
  • dietpi in a vm to run pi-hole (migrated here when my pi zero-w cooked itself)
    • this also handles internal dns for each server so i don't have to type out IP addresses
  • home assistant HAOS vm

home assistant backs itself up to my craptastic nas and the rest of the stuff doesn't really have any backups. i wouldn't be upset if they died, except for my kanboard instance. i can rebuild that from scratch if needed.

i'll be investing in a new mini-pc and some more disks soon, though.

[–] Lyra_Lycan@lemmy.blahaj.zone 5 points 1 day ago (1 children)

For inspiration, here's my list of services:

Name            Type    ID No.  Primary Use
heart           Node    –       ProxMox
guard           CT      202     AdGuard Home
management      CT      203     NginX Proxy Manager
smarthome       VM      804     Home Assistant
HEIMDALLR       CT      205     Samba/Nextcloud
authentication  VM      806     BitWarden
mail            VM      807     Mailcow
notes           CT      208     CouchDB
messaging       CT      209     Prosody
media           CT      211     Emby
music           CT      212     Navidrome
books           CT      213     AudioBookShelf
security        CT      214     AgentDVR
realms          CT      216     Minecraft Server
blog            CT      217     Ghost
ourtube         CT      218     ytdl-sub YouTube Archive
cloud           CT      219     NextCloud
remote          CT      221     Rustdesk Server
Here is the overhead for everything. CPU is an i3 6100 and RAM is 2133MHz.

Quick note about my setup: some things threw a permissions hissy fit when in separate containers, so Media actually hosts Emby, Sonarr, Radarr, Prowlarr, and two instances of qBittorrent. A few of my containers do have supplementary programs.

[–] modeh@piefed.social 1 points 5 hours ago (1 children)

Thank you, that’s actually quite informative. Gives me a good idea of what could go where in terms of my setup.

So far I've recreated my RPi DietPi setup in a VM, but for some reason the Pi-hole + Unbound combo is now fucking with my internet connectivity. It's so weird: I assigned it a static lease for the old RPi's IP address in OpenWrt and left all the rules intact, and you would think it would be a “drop-in replacement”, but it isn't. Not sure if Proxmox has some weird firewall situation going on. Definitely need to fuck around with it more to understand it better.

[–] lemming741@lemmy.world 1 points 1 hour ago

To piggyback on the permissions hissy fit-

My Arr stack, OpenMediaVault, and Transmission stack have different usernames mapped to the same UID, and it is a pain in the ass. I "fixed it" by making a NAS group that catches them all, but by "fixed it" I really mean "got it working".

So be aware of which UID will own a file, and maybe change it to a UID in the 1100+ range to make NFS easier in the future.
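The catch-all group workaround described above looks roughly like this; the group name, GID, usernames, and path are all illustrative:

```shell
# Create a shared group with a high GID and add the service users to it
groupadd -g 1100 nas
usermod -aG nas sonarr
usermod -aG nas transmission

# Group-own the shared data and make it group-writable
chown -R :nas /srv/media
chmod -R g+rwX /srv/media
```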

[–] BOFH666@lemmy.world 7 points 1 day ago (1 children)

Replace cgit with Forgejo. I really like the software from Jason, but Forgejo makes a huge difference.

[–] modeh@piefed.social 5 points 1 day ago

Only reason I am thinking cgit is because I want a simple interface to show repos and commit history, not interested in doing pull requests, opening issues, etc…

I feel Forgejo would be “killing an ant with a sledgehammer” kinda situation for my needs.

Nonetheless, thank you for your suggestion.

@modeh I'd love to meet others who are just starting out with Proxmox and do some casual video calls/chats (European timezones) to learn together / try stuff out.

[–] MangoPenguin@piefed.social 4 points 1 day ago

I have a single container for Docker that runs 95% of my services, and a few other containers and VMs for things that aren't Docker, or are Windows/macOS.

ext4 is the simple, easy option. I tend to pick it on systems with lower amounts of RAM, since ZFS needs some RAM for itself.
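If you do go with ZFS on a small box, its ARC cache can be capped so it doesn't compete with your VMs for RAM; the 3 GiB value here is just an example:

```shell
# Limit the ZFS ARC to 3 GiB (value is in bytes), then rebuild the initramfs
echo "options zfs zfs_arc_max=3221225472" > /etc/modprobe.d/zfs.conf
update-initramfs -u

# Takes effect after a reboot; check the current value with:
cat /sys/module/zfs/parameters/zfs_arc_max
```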

I do have an external USB HDD for backups to be stored on.

[–] anamethatisnt@sopuli.xyz 6 points 1 day ago (2 children)

I would start with one VM running Portainer, and once that is up and running I would recommend learning how to back up and restore the VM. If you have enough disks, I would look into ZFS RAID 1 for redundancy.
https://pve.proxmox.com/wiki/ZFS_on_Linux
Learning the redundancy and backup systems before having too many services active allows you to screw up and redo.
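Backup and restore can also be exercised from the CLI; the VMIDs, storage name, and dump filenames below are placeholders that will differ on your system:

```shell
# Snapshot-mode backup of VM 100 to the 'local' storage
vzdump 100 --storage local --mode snapshot --compress zstd

# Restore a VM dump to a new VMID (the dump filename will differ)
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2025_09_07-00_00_00.vma.zst 101

# Containers use pct restore instead
pct restore 202 /var/lib/vz/dump/vzdump-lxc-202-2025_09_07-00_00_00.tar.zst
```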

[–] modeh@piefed.social 3 points 1 day ago (1 children)

The Beelink comes with two PCIe slots, so I have two internal drives for now. Is it acceptable to attach external HDDs and set them up in a RAID configuration with the internal ones? I do plan on the Beelink being a NAS too (limited budget, can’t afford a separate dedicated NAS at the moment)

[–] anamethatisnt@sopuli.xyz 3 points 1 day ago

I wouldn't use RAID on USB.
If you've only got 2x M.2 slots, then I would probably prioritize disk space over RAID1 and make sure you have a backup up and running. There are M.2-to-SATA adapters, but your Beelink doesn't have a suitable PSU for that.

[–] SidewaysHighways@lemmy.world 3 points 1 day ago (1 children)

portainer is cool. dockge is 😎

[–] anamethatisnt@sopuli.xyz 2 points 1 day ago

I remember trying both back when my server was new but missing something in dockge, can't remember what right now.

[–] catrass@lemmy.zip 5 points 1 day ago (1 children)

As with most things homelab-related, there is no real "right" or "wrong" way, because it's about learning and playing around with cool new stuff! If you want to learn about different file systems, architectures, and software, do some reading, spin up a test VM (or LXC, my preference), and go nuts!

That being said, my architecture is built up of general purpose LXCs (one for my Arr stack, one for my game servers, one for my web stuff, etc). Each LXC runs the related services in docker, which all connect to a central Portainer instance for management.

Some things are exceptions though, such as Open Media Vault and HomeAssistant, which seem to work better as standalone VMs.

The services I run are usually things that are useful to me and that I want to keep off public clouds: Vaultwarden for passwords and passkeys, DoneTick for my to-do list, etc. If I have a gap in my digital toolkit, I always look for something I can host myself to fill that gap. There's also a lot of stuff I just want to learn about, such as the Grafana stack for observability at the moment.

[–] modeh@piefed.social 1 points 1 day ago (2 children)

Thank you.

I guess I have more reading to do on Portainer and LXC. Using an RPi with DietPi, I didn't have the need to learn any of this. Now is as good a time as ever.

But generally speaking, how is a Linux container different (or worse) than a VM?

[–] Lyra_Lycan@lemmy.blahaj.zone 4 points 1 day ago* (last edited 1 day ago)

An LXC is isolated, system-wise, by default (unprivileged) and has very low resource requirements.

  • Storage also expands as needed, i.e. you can allocate 40GB but it will only use as much as it needs, and nothing bad will happen if your allocated storage exceeds your actual storage, until total usage approaches 100%. So there's some flexibility. With a VM, the storage allocation is fixed.
  • Usually a Debian 12 container image takes up ~1.5GB.
  • LXCs are perfectly good for most use cases. VMs, for me, only come in when necessary, when the desired program has more needs like root privileges, in which case a VM is much safer than giving an LXC access to the Proxmox system. Or when the program is a full OS, in the case of Home Assistant.

Separating each service ensures that if something breaks, there are zero collateral casualties.

[–] anamethatisnt@sopuli.xyz 5 points 1 day ago

A VM is properly isolated and has its own OS and kernel. This improves security at the cost of overhead.
If you are starved for hardware resources then running lxcs instead of vms could give you more bang for the buck.

[–] Zwuzelmaus@feddit.org 4 points 1 day ago* (last edited 1 day ago) (1 children)

You have that new machine to play with. So do it.

Install it and play around. If you do nothing that should "last forever" in these first days, you can tear it down and do it again in different ways.

I recently played in the same way with the Proxmox unattended install feature, and it was a lot of fun. One text file and a bootable image on a stick.

[–] modeh@piefed.social 1 points 1 day ago (1 children)

Oh yeah, absolutely will do. Was simply hoping to get an idea of how self-hosters who’ve been using it for a while now set it up to get a rough picture of where I want to be once I am done screwing around with it.

[–] nis@feddit.dk 1 points 1 day ago* (last edited 1 day ago)

I've been doing it for a couple of years. I don't think I'll ever be done screwing around with it.

Embrace the flux :)

[–] abeorch@friendica.ginestes.es 2 points 1 day ago (1 children)

@modeh Certainly no expert, but would starting with setting up some cloud-init image templates be somewhere in there?

[–] modeh@piefed.social 1 points 1 day ago (1 children)

Not even sure what that is, so most likely a no for me.

[–] incentive@lemmy.ml 2 points 1 day ago

It's a template for setting up your new VMs. After setting up your first template, it's a few clicks and a deploy for each new VM.
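The usual cloud-init template flow, sketched with example VMIDs and a Debian generic cloud image (the exact image name, storage, and IDs are assumptions):

```shell
# Create an empty VM and import a downloaded cloud image as its disk
qm create 9000 --name debian-tpl --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 debian-12-generic-amd64.qcow2 local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0

# Attach a cloud-init drive and make the imported disk bootable
qm set 9000 --ide2 local-lvm:cloudinit --boot order=scsi0 --serial0 socket

# Freeze it as a template, then clone per service
qm template 9000
qm clone 9000 140 --name web01 --full
qm set 140 --sshkeys ~/.ssh/id_ed25519.pub --ipconfig0 ip=dhcp
qm start 140
```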