this post was submitted on 24 Sep 2025
154 points (95.3% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!


Curious to hear about the experiences of those who are sticking to bare metal. I'd like to better understand what keeps such admins from migrating to containers (Docker, Podman), virtual machines, etc. What keeps you on bare metal in 2025?

(page 2) 50 comments
[–] lka1988@lemmy.dbzer0.com 4 points 1 month ago (3 children)

I run my NAS and Home Assistant on bare metal.

  • NAS: OMV on a Mac mini with a separate drive case
  • Home Assistant: HAOS on a Lenovo M710q, since 1) it has a USB zigbee adapter and 2) HAOS on bare metal is more flexible

Both of those are much easier to manage on bare metal. Everything else runs virtualized on my Proxmox cluster, whether it's Docker stacks on a dedicated VM, an application that I want to run separately in an LXC, or something heavier in its own VM.

load more comments (3 replies)
[–] medem@lemmy.wtf 4 points 1 month ago

The fact that I bought all my machines used (and mostly on sale), and that not one of them is general purpose - that is, I bought each piece of hardware with a (more or less) concrete idea of its use case. For example, the machine acting as my file server is way bigger and faster than my desktop, and I have a 20-year-old machine with very modest specs whose only purpose is being a dumb client for all the bigger servers. I develop programs on one machine and surf the internet and watch videos on the other. I have no use case for VMs besides the Logical Domains I set up on one of my SPARC hosts.

[–] brucethemoose@lemmy.world 4 points 1 month ago* (last edited 1 month ago) (2 children)

In my case it’s performance and sheer RAM need.

GLM 4.5 needs something like 112GB of RAM and absolutely every megabyte of VRAM the GPU has, at least without quantizing it so heavily that it becomes unusable. I'm already swapping a tiny bit and simply cannot afford the overhead.

I think containers may slow down CPU<->GPU transfers slightly, but don’t quote me on that.

load more comments (2 replies)
[–] Evotech@lemmy.world 4 points 1 month ago

It's just another system to maintain, another link in the chain that can fail.

I run all my services on my personal gaming pc.

[–] nuggie_ss@lemmings.world 4 points 1 month ago

Warms me heart to see people in this thread thinking for themselves and not doing something just because other people are.

[–] 9tr6gyp3@lemmy.world 3 points 1 month ago (3 children)

I thought about running something like Proxmox, but everything is too pooled or too specialized, or Proxmox doesn't provide the packages I want to use.

I just went with Arch as the host OS and use firejail or LXC for any processes I want contained.
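
For a rough idea of what the firejail side of that can look like, a minimal sketch (the binary path is a placeholder, not anyone's actual config):

    # run a service with no network access and a throwaway private home directory
    firejail --net=none --private /usr/bin/someservice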

load more comments (3 replies)
[–] LifeInMultipleChoice@lemmy.world 3 points 1 month ago (1 children)

For me it's usually a lack of understanding. I haven't sat down and really learned what Docker is and does. When I tried to use it once I ended up with errors (thankfully they all seemed contained to the container), but I just haven't gotten around to looking into it more than seeing suggestions to install, say, Pi-hole in it. Pretty sure I installed Pi-hole outside of one. Jellyfin outside, copyparty outside, and something else I'm forgetting at the moment.

I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it's not something I normally use.

I guess I just haven't been forced to see the upsides yet. But I'm always wanting to learn.

[–] slazer2au@lemmy.world 3 points 1 month ago (5 children)

Containerisation is to applications as virtual machines are to hardware.

VMs share the same physical CPU, memory, and storage on a host, each running its own guest OS.
Containers share the host OS's kernel and binaries.
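
A quick way to see that sharing in practice (a minimal sketch, assuming Docker and the alpine image are available):

    # the host's kernel version
    uname -r

    # a container reports the same kernel version, because containers share the host kernel
    docker run --rm alpine uname -r

    # a VM, by contrast, boots and reports its own guest kernel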

load more comments (5 replies)
[–] DarkMetatron@feddit.org 3 points 1 month ago

My servers and NAS were created long before Docker was a thing, and as I am running them on a rolling-release distribution, there never was a reason to change anything. It works perfectly fine the way it is, and it will most likely run perfectly fine for the next 10+ years too.

Well, I am planning, when I find the time to research a good successor, to replace my aging HPE ProLiant MicroServer Gen8 that I use as a home server/NAS. Maybe I will then set everything up cleanly and migrate the services to Docker/Podman/whatever is fancy then. But most likely I will just transfer all the disks and keep the old system running on newer hardware. Life is short...

[–] hperrin@lemmy.ca 3 points 1 month ago

There’s one thing I’m hosting on bare metal, a WebDAV server. I’m running it on the host because it uses PAM for authentication, and that doesn’t work in a container.

[–] tofu@lemmy.nocturnal.garden 3 points 1 month ago (2 children)

TrueNAS is on bare metal, as I have a dedicated NAS machine that's not doing anything else, and it's also not recommended to virtualize it. Not sure if that counts.

Same for the firewall (OPNsense), since it is its own machine.

load more comments (2 replies)
[–] corsicanguppy@lemmy.ca 3 points 1 month ago (1 children)

I don't host in containers because I did OS security for a while.

load more comments (1 replies)
[–] towerful@programming.dev 3 points 1 month ago (2 children)

I used to always run Proxmox and set up Docker VMs on it.

Then I found Talos Linux, a dedicated distro for Kubernetes, which aligned with my desire to learn k8s.
It was great. I ran it bare metal on a 3-node cluster. I learned a lot, I got my project done, everything went fine.
I will use Talos Linux again.
However, next time I'm running Proxmox with 2 VMs per node - 3 Talos control-plane VMs and 3 Talos worker VMs.
I imagine running 6 servers with Talos is the way to go. Running them hyperconverged was a massive pain. Separating the control plane from the data/worker plane (or whatever it's called) makes sense - it's the way k8s is designed.
It wasn't the hardware that had issues, but various workloads. Being able to restart or wipe a control node or a worker node would've made things so much easier.

Also, why wouldn't I run Proxmox?
The overhead is minimal, I get a nice overview, a nice UI, and I get snapshots and backups.

load more comments (2 replies)
[–] Kurious84@lemmings.world 3 points 1 month ago

Anything you want dedicated performance from, or that requires fine-tuning for a specific performance use case. They're out there.

[–] Surp@lemmy.world 3 points 1 month ago (1 children)

What are you doing running your vms on bare metal? Time is a flat circle.

load more comments (1 replies)
[–] jaemo@sh.itjust.works 3 points 1 month ago

I generally abstract to docker anything I don't want to bother with and just have it work.

If I'm working on something that requires lots of back and forth syncing between host and container, I'll run that on bare metal and have it talk to things in docker.

I.e.: I'm working on an app or a website or something in my language of choice on my framework of choice, but Postgres and Redis live in Docker. Just the app I'm messing with and its direct dependencies run outside.
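
A minimal sketch of that split (container names, the password, and version tags here are placeholders): the datastores run in Docker, published only on localhost, and the app on the host connects to them there.

    # postgres and redis in containers, bound to localhost only
    docker run -d --name dev-postgres -p 127.0.0.1:5432:5432 \
      -e POSTGRES_PASSWORD=devpass postgres:16
    docker run -d --name dev-redis -p 127.0.0.1:6379:6379 redis:7

    # the app and its direct dependencies run on the host and connect to
    # localhost:5432 and localhost:6379 as usual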

[–] Andres4NY@social.ridetrans.it 3 points 1 month ago (3 children)

@kiol I mean, I use both. If something has a Debian package and is well maintained, I'll happily use that. For example, prosody is packaged nicely, so there's no need for a container there. I also don't want to upgrade to the latest version all the time. Or Dovecot, which just had a nasty cache bug in the latest version that allows people to view other people's mailboxes. Since I'm still on Debian 12 on my mail server, I remain unaffected, and I can let the bugs be shaken out before I upgrade.

load more comments (3 replies)
[–] akincisor@sh.itjust.works 3 points 1 month ago (2 children)

I have a single micro-ITX HTPC/media server/NAS in my bedroom. Why use containers?

load more comments (2 replies)
[–] TheMightyCat@ani.social 2 points 1 month ago (1 children)

I'm self-hosting Forgejo and I don't really see the benefit of migrating to a container. I can easily install and update it via the package manager, so what benefit does containerization give?

load more comments (1 replies)
[–] 51dusty@lemmy.world 2 points 1 month ago (2 children)

My two bare-metal servers are the file server and the music server. I have other services on a Pi cluster.

The file server, because I can't think of why I would need to use a container.

The music software is proprietary and needs extra workarounds to get it working properly, or at all, in a container. It also does not like sharing resources and is CPU-heavy when playing to multiple sources.

If either of these machines dies, a temporary replacement can be sourced very easily (e.g. from the back of my server closet) and recreated from backups while I buy a new one or fix/rebuild the broken one.

IMO the only reliable way to run containers is in a cluster, because if you're running several containers on one device and it fails, you've lost several services.

load more comments (2 replies)
[–] Jerry@feddit.online 2 points 1 month ago (1 children)

Depends on the application for me. For Mastodon, I want to allow 12K-character posts, more than 4 poll choices, and custom themes. Can't do that with Docker containers. For PeerTube and Mobilizon, I use Docker containers.

[–] kiol@lemmy.world 2 points 1 month ago (1 children)

Why could you not have that Mastodon setup in containers? Sounds normal afaik

[–] farcaller@fstab.sh 3 points 1 month ago

I’ll chime in: simplicity. It's much easier to keep a few patches that apply to local OS builds: I use Nix, so my Mastodon microVM config just has an extra patch line. If there's a new Mastodon update, the patch most probably will work for it too.

Yes, I could build my own Docker container, but you can't easily build it with a patch (for Mastodon specifically, you need to patch the JS before it's minified). It's doable, but it's quite annoying. And then you need to keep track of upstream and update your Dockerfile with new versions.

[–] tychosmoose@lemmy.world 2 points 1 month ago

I'm doing this on a couple of machines. I'm only running NFS, Plex (looking at a Jellyfin migration soon), Home Assistant, LibreNMS and some other really small stuff. Not using VMs or LXC due to low-end hardware (a Pi and an older tiny PC). Not using containers due to lack of experience with them, and a little discomfort with the central daemon model of Docker and with running containers built by people I don't know.

The migration path I'm working on for myself is changing to Podman quadlets for rootless operation, more isolation between containers, and the benefits of management and updates via systemd. So far my testing for that migration has been slow due to other projects. I'll probably get it rolling on Debian 13 soon.

[–] bizarroland@lemmy.world 2 points 1 month ago (1 children)

I'm running a TrueNAS server on bare metal with a handful of hard drives. I have virtualized it in the past, but meh. I'm also using TrueNAS's built-in features to host a Jellyfin server and a couple of other easy-to-deploy containers.

[–] kiol@lemmy.world 2 points 1 month ago (1 children)

So TrueNAS itself is running your containers?

load more comments (1 replies)