this post was submitted on 24 Sep 2025
154 points (95.3% liked)

Selfhosted

52646 readers
986 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago

Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

(page 3) 50 comments
[–] Routhinator@startrek.website 2 points 1 month ago

I'm running Kubernetes on bare metal.

[–] eleitl@lemmy.zip 2 points 1 month ago

Obviously, you host your own hypervisor on your own or rented bare metal.

[–] frezik@lemmy.blahaj.zone 2 points 1 month ago (1 children)

My file server is also the container/VM host. It does NAS duties while containers/VMs do the other services.

OPNsense is its own box because I prefer to separate it for security reasons.

Pi-hole is on its own RPi because that was easier to set up. I might move that functionality to the AdGuard plugin on OPNsense.

[–] HiTekRedNek@lemmy.world 1 point 1 month ago

My reasons for keeping OPNsense on bare metal mirror yours. But additionally, I don't want my network to take a crap because my Proxmox box goes down.

I'm constantly tweaking that machine...

[–] SailorFuzz@lemmy.world 2 points 1 month ago (4 children)

Mainly that I don't understand how to use containers... or VMs that well... I have an old MyCloud NAS and a little puck PC that I wanted to run simple QoL services on... Home Assistant, Jellyfin, etc...

I got Proxmox installed on it, I can access it.... I don't know what the fuck I'm doing... There was a website that let you just run shell scripts to install a lot of things... but now none of those work because it says my version of Proxmox is wrong (when it's not?)... so those don't work....

And at least VMs are easy(ish) to understand. Fake computer with OS... easy. I've built PCs before, I get it..... Containers just never want to work, or I don't understand wtf to do to make them work.

I wanted to run Zulip or Rocket.Chat for internal messaging around the house (wife and I both work at home, kid does home/virtual school)... wanted to use a container because a service that simple doesn't feel like it needs a whole VM... but it won't work...
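For what it's worth, the chat-server-in-a-container idea is usually a two-service Compose file. A minimal, untested sketch for Rocket.Chat (image tags, the replica-set name, and the URL are assumptions, not a verified upstream config):

```yaml
# compose.yaml -- hypothetical minimal Rocket.Chat stack
services:
  rocketchat:
    image: rocketchat/rocket.chat:latest
    environment:
      # Rocket.Chat requires MongoDB running as a replica set
      MONGO_URL: mongodb://mongo:27017/rocketchat?replicaSet=rs0
      ROOT_URL: http://localhost:3000
    ports:
      - "3000:3000"
    depends_on:
      - mongo

  mongo:
    image: mongo:6
    command: mongod --replSet rs0 --oplogSize 128
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data:
```

One gotcha that makes "it won't work" common here: the Mongo replica set has to be initiated once (e.g. `rs.initiate()` inside `mongosh`) before Rocket.Chat can connect.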

[–] bhamlin@lemmy.world 2 points 1 month ago

It depends on the service and the desired level of the stack.

I generally will run services directly on things like a Raspberry Pi because VMs and containers add complexity that isn't really warranted for the task.

At work, I run services in docker in VMs because the benefits far outweigh the complexity.

[–] pineapplelover@lemmy.dbzer0.com 2 points 1 month ago

All I have is Minecraft and a Discord bot, so I don't think it justifies VMs.

[–] kossa@feddit.org 2 points 1 month ago

Well, that is how I started out. Docker was not around yet (or not mainstream enough, maybe). So it is basically a legacy thing.

My main machine is a Frankenstein monster by now, so I am gradually moving. But since the days when I started out, time has become a scarce resource, so the process is painfully slow.

[–] OnfireNFS@lemmy.world 2 points 1 month ago

This reminds me of a question I saw a couple years ago. It was basically: why would you stick with bare metal over running Proxmox with a single VM?

It kinda stuck with me, and since then I've reimaged some of my bare metal servers with exactly that. It just makes backup and restore/snapshots so much easier. It's also really convenient to have a web interface to manage the computer.

Probably doesn't work for everyone but it works for me

[–] erock@lemmy.ml 2 points 1 month ago

Here’s my homelab journey: https://bower.sh/homelab

Basically, containers and GPUs are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also do not support being split across guests. At the end of the day, it's a bunch of tinkering, which is valuable if that's your goal. I learned what I wanted; now I'm back to Arch running everything with systemd and quadlet.
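The systemd-plus-quadlet setup mentioned above boils down to dropping a `.container` unit where Podman's systemd generator finds it. A sketch (service name, paths, and the device node are hypothetical; `AddDevice` is the trick that hands the GPU's render node straight to the container, sidestepping VM passthrough):

```ini
; ~/.config/containers/systemd/jellyfin.container (hypothetical example)
[Unit]
Description=Jellyfin media server via Podman quadlet

[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
Volume=%h/jellyfin/config:/config
; GPU hardware transcoding without a VM: expose the render node directly
AddDevice=/dev/dri/renderD128

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, the generator turns this into a regular `jellyfin.service` you can start and enable like any other unit.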

[–] jet@hackertalks.com 1 point 1 month ago

KISS

The more complicated the machine the more chances for failure.

Remote management plus bare metal just works: it's very simple, and you get the maximum out of the hardware.

Depending on your use case, that could be very important.

Depends on the application. My NAS is bare metal. That box does exactly one thing and one thing only, and it's something that is trivial to set up and maintain.

Nextcloud is running in docker (AIO image) on bare metal (Proxmox OS) to balance performance with ease of maintenance. Backups go to the NAS.
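The AIO (all-in-one) image referenced above is typically launched from a small Compose file. A sketch along the lines of the upstream defaults (treat the port and volume names as assumptions rather than a verified config):

```yaml
# compose.yaml -- hypothetical Nextcloud AIO starter
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    container_name: nextcloud-aio-mastercontainer
    ports:
      - "8080:8080"          # initial AIO setup interface
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      # the master container manages the other Nextcloud containers itself
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: always

volumes:
  nextcloud_aio_mastercontainer:
```

The Docker socket mount is what lets the master container spin up the rest of the stack, which is also why some people prefer to keep it on its own host.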

Everything else is running in a VM, which makes backups and restores simpler for me.
