this post was submitted on 13 Dec 2023
232 points (97.9% liked)

Selfhosted


I'm a retired Unix admin. It was my job from the early '90s until the mid '10s. I've kept somewhat current ever since by running various machines at home. So far I've managed to avoid using Docker at home even though I have a decent understanding of how it works: although I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of "interesting" reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I'm thinking it's no longer a fad and I should invest some time getting comfortable with it?

[–] null@slrpnk.net 2 points 11 months ago (1 children)

It works well for 2-3 services, but as the number of services grew, they started to interfere with each other.

Can you expand on that? I use docker-compose and have around 10 services on the same box. I don't foresee any limitations beyond hardware if I wanted to keep adding more, but maybe I'm missing something.

[–] akash_rawal@lemmy.world 1 points 11 months ago* (last edited 11 months ago) (1 children)

If one service needs to connect to another, I have to add a shared network between them. In that case, the services essentially share a common DNS namespace. DNS resolution would routinely leak from one service to another and cause outages: e.g., if I connect the Gitlab and Redmine containers to the OpenLDAP container, sometimes Redmine's nginx container would reach the Gitlab container instead of the Redmine container, and the Gitlab container would access Redmine's DB instead of its own.
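To illustrate the kind of layout that causes this (a hypothetical sketch, not the poster's actual config — service names like `db` and network names like `ldap_net` are made up): when two compose projects join the same external network and both define a service with the same generic name, DNS lookups for that name on the shared network become ambiguous.

```yaml
# gitlab/docker-compose.yml (sketch)
services:
  gitlab:
    image: gitlab/gitlab-ce
    networks: [default, ldap_net]  # joins the shared LDAP network
  db:                              # generic name, also used by Redmine's stack
    image: postgres
networks:
  ldap_net:
    external: true                 # same external network as redmine's stack

# redmine/docker-compose.yml (sketch)
services:
  redmine:
    image: redmine
    networks: [default, ldap_net]
  db:                              # "db" now resolves on ldap_net from both stacks
    image: postgres
networks:
  ldap_net:
    external: true
```

With both stacks attached to `ldap_net`, a lookup for `db` from either stack can match a container in the other one.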

I maintained some workarounds: starting Gitlab after starting Redmine worked fine, but starting them the other way round triggered the issue. Switching to Kubernetes and replacing the cross-service connections with network policies solved it for me.
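For anyone curious what the network-policy approach looks like, here's a hedged sketch (namespace and label names are made up, not from the poster's cluster) that only lets pods in a `gitlab` namespace reach OpenLDAP on its LDAP port:

```yaml
# Sketch: restrict ingress to the OpenLDAP pods to the gitlab namespace only.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ldap-from-gitlab
  namespace: ldap
spec:
  podSelector:
    matchLabels:
      app: openldap          # applies to the OpenLDAP pods
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: gitlab
      ports:
        - protocol: TCP
          port: 389          # plain LDAP
```

Because each app lives in its own namespace, DNS names like `db` are scoped per namespace as well, which removes the ambiguity entirely.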

[–] FooBarrington@lemmy.world 1 points 11 months ago (1 children)

An easy fix for this is to create individual networks per connection, i.e., don't create one network with Gitlab, Redmine and OpenLDAP - create two: one with Gitlab and OpenLDAP, and one with Redmine and OpenLDAP.
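In compose terms that looks roughly like this (a sketch with made-up network names, assuming the networks are created externally with `docker network create`): only OpenLDAP joins both networks, so Gitlab and Redmine never share a DNS namespace with each other.

```yaml
# openldap/docker-compose.yml (sketch)
services:
  openldap:
    image: osixia/openldap
    networks: [gitlab_ldap, redmine_ldap]  # bridges to both app stacks
networks:
  gitlab_ldap:
    external: true   # Gitlab's stack joins only this one
  redmine_ldap:
    external: true   # Redmine's stack joins only this one
```

A lookup for `db` from Redmine's nginx then can only resolve within Redmine's own networks.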

[–] akash_rawal@lemmy.world 1 points 11 months ago (1 children)

don't create one network with Gitlab, Redmine and OpenLDAP - do two, one with Gitlab and OpenLDAP, and one with Redmine and OpenLDAP.

This was the setup I had, but now I am already using kubernetes with no intention to switch back.

[–] FooBarrington@lemmy.world 1 points 11 months ago

Very understandable :)