Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

1

I'm in desperate need of setting up borgmatic for borg backup. I would like to encrypt my backups. (I suppose an unencrypted backup is better than none in my case, so I should get it done today regardless.)

How do I save those keys? Is there a directory structure I should follow? Do you back up the keys as well? Are there keys I need to write down by hand? Should I use a cloud service like Bitwarden Secrets Manager? Could I host something myself?

I'm ignorant on this matter. The most I've done is add SSH keys to git forges and use ssh-copy-id. But I've always been able to access what I need without keeping those around (I log in to the web interface). Can you share best practices, or what you do to manage non-password secrets?
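
For borg specifically, the key material can be exported and stashed somewhere safe. A minimal sketch, assuming a hypothetical repo path:

    # Export the repo key so the backup can still be restored if the repo
    # config is damaged; store the export off-site or in a password manager:
    borg key export /path/to/repo ~/borg-key.backup
    # Printable variant, for the "write it down by hand" option:
    borg key export --paper /path/to/repo

With repokey encryption modes the key lives inside the repo (protected by your passphrase); with keyfile modes it sits in ~/.config/borg/keys and must be backed up separately.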

2

Hello! I have jellyfin+qbittorrent+radarr on my home server, but I can't make hardlinks work. When a download finishes, it just gets copied to the /movies folder, doubling the disk usage. At least, I think it's just a copy, because the disk usage doubles, and find ./downloads -samefile ./movies/path/to/file.mkv returns no result, meaning (if I understand correctly) that file.mkv is not hardlinked to any file in the downloads folder (but it should be).

This is the docker compose:

radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    network_mode: container:gluetun
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Rome
    volumes:
      - ./radarr-config:/config
      - /media/HDD1/movies:/movies
      - /media/HDD1/downloads:/downloads
    restart: unless-stopped

The HDD1 drive is formatted ext4, which supports hardlinks (in fact I can create them manually), and in the Radarr settings the "use hardlinks instead of copy" checkbox is checked.
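
One thing worth checking: with /movies and /downloads mounted as two separate volumes, the container sees two distinct mount points, and hardlinking across mount points fails with EXDEV even when both directories live on the same ext4 disk. That is why the common recommendation is a single parent mount (e.g. /media/HDD1:/data, with Radarr pointed at /data/movies and /data/downloads). A quick way to confirm, using a hypothetical file name:

    # If the two paths are separate bind mounts, a manual hardlink fails:
    docker exec radarr ln /downloads/some-file.mkv /movies/some-file.mkv
    # ln: ... Invalid cross-device link   <- EXDEV, so Radarr falls back to copying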

Ideally I'd prefer softlinks instead of hardlinks, but I don't think there's a way to do that natively; I think I'd need an external script.

Any tips? Thanks in advance!

3

I’ve been using the CarFAX Car Care app/website for a long time but I’m looking for something better.

It would be nice to have something I can enter my car make/model into and have it suggest maintenance, but also keep track of repairs. I like uploading PDF scans of receipts too; one thing that always bothered me about Car Care is the horrible, weird compression it applies to those files.

4

cross-posted from: https://lemm.ee/post/36285077

I have many ebooks, gathered from scouring the Internet, in two formats: EPUB and PDF. I want something server-like that lets me read them from any device on my local network, remembers where I left off in a book on one device, and lets me continue on another. I want the client app to have Android and Linux support, while the server should run on Linux. Is there anything out there? Bonus points if it automatically grabs metadata from the internet and organises the books by topic, author, DDC, etc.

TLDR: An ebook library running on a Linux server with Android and Linux client software.

5
submitted 2 weeks ago* (last edited 2 weeks ago) by Sibbo@sopuli.xyz to c/selfhosted@lemmy.world

Should be easy to use, remember what I bought before and propose things that are probably running out (based on my personal buying frequency), and allow sharing the list between multiple people. Ideally also allow adding recipes for meals that I cook often.

6
unattended upgrades with caddy (bookwormstory.social)
submitted 2 weeks ago* (last edited 2 weeks ago) by Deemo@bookwormstory.social to c/selfhosted@lemmy.world

Edit: credit to exu@feditown.com

Assuming you installed Caddy via the Debian/Ubuntu/Raspbian method:

https://caddyserver.com/docs/install#debian-ubuntu-raspbian

add "cloudsmith/caddy/stable:any-version"; to the Unattended-Upgrade::Allowed-Origins list in /etc/apt/apt.conf.d/50unattended-upgrades.

Example:

// Automatically upgrade packages from these (origin:archive) pairs
//
// Note that in Ubuntu security updates may pull in new dependencies
// from non-security sources (e.g. chromium). By allowing the release
// pocket these get automatically pulled in.
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}";
        "${distro_id}:${distro_codename}-security";
        // Extended Security Maintenance; doesn't necessarily exist for
        // every release and this system may not have it installed, but if
        // available, the policy for updates is such that unattended-upgrades
        // should also install from here by default.
        "${distro_id}ESMApps:${distro_codename}-apps-security";
        "${distro_id}ESM:${distro_codename}-infra-security";
        "${distro_id}:${distro_codename}-updates";
//      "${distro_id}:${distro_codename}-proposed";
//      "${distro_id}:${distro_codename}-backports";
        "cloudsmith/caddy/stable:any-version";
};
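
To verify the origin string and confirm that the rule matches, a dry run helps (assuming the stock unattended-upgrades tooling):

    # Show which repository/origin the caddy package comes from:
    apt-cache policy caddy
    # Simulate an upgrade run and watch the allowed-origins matching:
    sudo unattended-upgrade --dry-run --debug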

Link to comment chain (not sure how to add links in a federated way)

https://feditown.com/comment/1221458

https://bookwormstory.social/post/2100056/4136035

Original post:

Hi guys, anyone know how to use unattended-upgrades with Caddy?

I have Ubuntu Server 22.04.

The part that stumps me is that Caddy uses an external repository (Cloudsmith), making it difficult to set up.

I installed Caddy via the Debian/Ubuntu/Raspbian method:

https://caddyserver.com/docs/install#debian-ubuntu-raspbian

The closest example I could find of unattended upgrades with an external repo was this example using Docker:

/etc/apt/apt.conf.d/50unattended-upgrades

"Docker:${distro_codename}";

https://blog.coffeebeans.at/archives/1299

I'm not sure if it's as simple as

/etc/apt/apt.conf.d/50unattended-upgrades

"Caddy:${distro_codename}";

Edit:

One more question: what effect would adding

APT::Unattended-Upgrade::Package-Blacklist "";

to /etc/apt/apt.conf.d/20auto-upgrades have?

Edit 2:

I just removed this; I'd only found it via Google Gemini (which probably isn't the best source of info):

APT::Unattended-Upgrade::Package-Blacklist "";
7

cross-posted from: https://lemmy.ml/post/17628115

A really nice project that provides charts to display Linux server status, plus tools to manage the server.

I was using DaRemote, which is only available on the Google Play Store, for that. Recently an option was added to download it and pay the dev directly.

ServerBox is really awesome; it convinced me within 3 minutes: open source, secure access with biometrics, selectable fonts, etc.

8

I had everything working fine for a while, and suddenly all my indexers have stopped working. I get the error: "unable to connect to indexer. Connection refused (127.0.0.1:8080)".

127.0.0.1:8080 is not the address where my CasaOS instance lives; I don't know why it wants to connect to that, or whether it has something to do with the error. As I said, it was working for about 2 months before, and I didn't change anything in the settings.
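
A first diagnostic, assuming shell access to the host: check whether anything is listening on that port at all, and what it answers.

    # Is anything bound to 8080 on the machine making the request?
    ss -tlnp | grep 8080
    # What responds there, if anything?
    curl -v http://127.0.0.1:8080/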

9

I want to host a search engine, but I don't know which one. I know about SearXNG and 4get, for example, but there are a lot of other search engines. Here is my question: how do I pick one, and by what criteria?

10

I've been ripping my anime Blu-ray collection and wanted an easier way to sort it for Jellyfin, so I wanted to try Shoko Server, but it's not recognizing any of my anime. It sees the actual files but categorizes them all as Unrecognized, making the entire idea of using it for automated sorting pointless. I'm struggling to find guides on this, and the documentation is quite lacking. I don't know what I'm doing wrong. Are there certain rules I need to follow in order for Shoko to hash correctly? Does it hash the name? The actual ripped files?

My folder structure is setup in a way that Jellyfin properly recognizes it (without using the Shoko plugin yet), so like so for example:

- Fate/stay night: ubw (2014)
---- Season 01
---------- <episode> S01E01
- Fate/stay night: ubw (2015)
---- Season 01
---------- you get the idea

Since multi-season anime are often separate entries, each season is usually its own main folder (which is one of the reasons I wanted to try Shoko: to see if I could combine them into one, so that I don't have multiple entries for what is really only one anime series).

Anyone here who uses Shoko and has some tips?

11

Hello! I was wondering whether periodically running a script to automatically pull new images for all my containers is a good or a bad idea. I'd run it every day at 5:00 AM to avoid interruptions. Any tips?
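
For reference, the naive version would be a cron entry along these lines (path, schedule, and project layout are assumptions):

    # crontab -e: pull fresh images at 05:00, recreate changed containers,
    # and clean up superseded image layers
    0 5 * * * cd /opt/mystack && docker compose pull && docker compose up -d && docker image prune -f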

EDIT: Thanks to everyone for the help! I'll install Watchtower to manage the updates

12

Hi,

I've been playing with a Dell mini PC (OptiPlex 7070) that I set up with Proxmox and a single Debian virtual machine that hosts a bunch of containers (mostly an *arr stack).

All the data resides on the single SSD that came with the machine, but I'm now satisfied with the whole ordeal and would like to migrate my storage from my PC to this solution.

What's the best approach on the software side? I have a bunch of HDDs of varying size and age (and therefore expected reliability), and I'd initially dedicate this storage to data I can 100% afford to lose (basically media).

I've read that I should avoid USB (even though my mini PC exposes a USB-C port) for reliability, but on the other hand I'm not sure what other options I have that don't force me to buy a NAS or properly sized HDDs to install inside the machine...

Also, what's a good filesystem for my use case?

Thanks for any tips.

13

Currently, I have two VPN clients on most of my devices:

  • One for connecting to a LAN
  • One commercial VPN for privacy reasons

I usually stay connected to the commercial VPN on all my devices, unless I need to access something on that LAN.

This setup has a few drawbacks:

  • Most commercial VPN providers have a limit on the number of simultaneously connected clients
  • I can either obfuscate my IP or access resources on that LAN (including my Pi-hole for custom DNS-based blocking), but not both at once

One possible solution would be to route all internet traffic through a VPN client on the LAN's router and figure out how to still keep at least one port open for the VPN docker container that allows access to the LAN. But then split tunneling around that would be pretty hard to achieve.

I want to be able to connect to a VPN host container on the LAN, which in turn routes all internet traffic through another VPN client container while allowing LAN traffic, but still be able to split tunnel specific applications on my Android/Linux/iOS devices.

Basically this:

   +---------------------+ internet traffic   +--------------------+           
   |                     | remote LAN traffic |                    |           
   | Client              |------------------->|VPN Host Container  |           
   | (Android/iOS/Linux) |                    |in remote LAN       |           
   |                     |                    |                    |           
   +---------------------+                    +--------------------+           
                      |                         |     |                        
                      |       remote LAN traffic|     | internet traffic       
split tunneled traffic|                 |--------     |                        
                      |                 |             v                        
                      v                 |         +---------------------------+
  +---------------------+               v         |                           |
  | regular LAN or      |     +-----------+       | VPN Client Container      |
  | internet connection |     |remote LAN |       | connects to commercial VPN|
  +---------------------+     +-----------+       |                           |
                                                  |                           |
                                                  +---------------------------+

Any recommendations on how to achieve this, especially considering client apps for Android and iOS with the ability to split tunnel per application?

Update:

Got it by following this guide.

14

This is a part from an IBM server dated 2008 that I want to reuse in my new computer. It essentially converts one SAS port to four SATA ports. I'll use the RAID card to connect to it via SAS, but I don't know what the power port is, nor what the connector on the top is.

15
submitted 2 weeks ago* (last edited 2 weeks ago) by tootnbuns@lemmy.dbzer0.com to c/selfhosted@lemmy.world

ServerBox GitHub link

Looking for a convenient overview of your servers?

Randomly found this app on F-Droid and I am blown away.

It fetches server stats, even drive usage, and makes it super easy to open an SFTP browser or even an SSH console if you quickly need one.

Highly recommended.

16

I've been using the Firefox Docker container through the gluetun Docker container (runs great with Proton and Mullvad), and it's been really good.

To me it's kind of like a less restricted Tor Browser, for when you need something faster or less IP-blocked. And maybe something more persistent.

And it always stays open even when you close your connection.
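
The wiring is roughly this; a sketch using docker run (images as published by gluetun and linuxserver.io, provider credentials elided):

    # VPN container: everything sharing its network namespace exits via the VPN
    docker run -d --name gluetun --cap-add=NET_ADMIN \
      -e VPN_SERVICE_PROVIDER=protonvpn \
      -e OPENVPN_USER=... -e OPENVPN_PASSWORD=... \
      -p 3000:3000 qmcgaw/gluetun
    # Firefox (browser UI served on port 3000) attached to gluetun's namespace
    docker run -d --name firefox --network container:gluetun \
      lscr.io/linuxserver/firefox

Note the port is published on the gluetun container, since the Firefox container has no network stack of its own.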

Some of my use cases are:

  • Anonymously downloading larger files through the clearnet.

  • Anonymous ChatGPT usage.

  • Manually looking for torrent magnet links (though I usually do that with the tor browser)

  • Accessing shadow libraries

17

Normally my *arr -> Plex setup is quite painless, but lately I've had a bunch of imports failing; they appear to be multi-part files with an .MKV file in a "sample" subdirectory.

Does anyone know how I can sort this out so these files import properly? Or how to filter them out before downloading? I'd rather have a fix, if possible, because certain torrents don't have many alternatives.

18
submitted 2 weeks ago* (last edited 2 weeks ago) by Hellmo_Luciferrari@lemm.ee to c/selfhosted@lemmy.world

Hi all!

So I want to get back into self-hosting, but every time I've stopped, it's because I lacked the documentation to fix things that break. So I pose a question: how do you all go about keeping your setup documented? What programs do you use?

I lean towards open-source software, so things like OneNote, or anything Microsoft, are out of the question.


Edit: I didn't want to add another post and annoy people, but had another inquiry:

What reverse proxy do you use? I plan to run a bunch of services from Docker, and would like to be able to map an IP:port to something like service.mylocaldomain.lan.

I already have Unbound setup on my PiHole, so I have the ability to set DNS records internally.

Bonus points if the reverse proxy setup can automate SSL certificates.
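
For scale, the Caddy version of this is only a few lines; a sketch (hostname and upstream are placeholders, and tls internal uses Caddy's local CA, since a public ACME CA can't issue certificates for a private .lan name):

    service.mylocaldomain.lan {
        # Self-signed via Caddy's internal CA; clients must trust its root cert
        tls internal
        reverse_proxy 127.0.0.1:8080
    }

Alternatively, a real domain with DNS-challenge ACME gets browser-trusted certs for internal-only hostnames.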

19

So, lemmiverse, my https://pxtl.ca domain has officially been booted off of Google Domains (welcome to the Google graveyard, Google Domains) and has now been moved to Squarespace, which is expensive.

Can anybody recommend a good, cheap host for a .ca domain? One with a decent API for dynamic DNS, so I can keep my home subdomain? I have a couple of Pi 4 servers in the house that could be tasked with pinging an API endpoint to notify the domain host of my IP.

Thanks in advance.

20

Pro: 1 Gbps upload and download speeds on free internet provided by the HOA. Con: as a self-hoster, I have zero control over it. No port forwarding, no DMZ, no bridge mode. It's Starbucks free WiFi with a wired connection.

Option A: buy Google Fiber and don't use the free internet. Option B: create some elaborate tunnel through a VPS.

My public self-hosted activities are fairly low-bandwidth (password manager, SSH). I have a vague idea that I could point my domain at a low-cost VPS with a VPN tunnel into my home network for any incoming connections. That may require me to fill in port forwards on both systems, but whatever. Tailscale serves most of my remote needs, but I still need a few ports. This doesn't fix the issue of online-gaming port forwards (Nintendo Switch Online requires a huge forwarded range for best performance), but oh well for now.
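
The simplest form of the VPS idea is a reverse SSH tunnel; a sketch with hypothetical hosts and ports (binding the VPS's public interface requires GatewayPorts yes in its sshd_config):

    # Run from a Pi at home: expose home SSH on the VPS's public port 2222
    ssh -N -R 0.0.0.0:2222:localhost:22 user@vps.example.com

A WireGuard tunnel plus a couple of DNAT rules on the VPS generalizes the same idea to arbitrary ports and protocols.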

UPDATE: I think they're using this system: https://www.cambiumnetworks.com/markets/multi-family-living/ The personal Wi-Fi overview makes it clear each AP is given its own VLAN, which sounds a whole lot like the entire building sharing one IP, and there's no way I'm going to get my own internet access. They even detail how you can roam the building and maintain your WiFi connection across your neighbors' APs and the common-area APs. This is the IPv4 future.

21
submitted 2 weeks ago* (last edited 2 weeks ago) by Cyber@feddit.uk to c/selfhosted@lemmy.world

As a long-term MythTV user, I read all the discussion about Plex vs Jellyfin, but I'm still here... recording Live TV, watching films, listening to "me choonz" all on free, open-source software. What am I missing? Any other MythTV users out there?

22

Setting up a Synology server, I made the mistake of just buying a UPS that had a USB plug on the back, thinking: oh, this is a solved problem, it must just work. No, no, far from it.

So the UPS I mistakenly purchased is not compatible with Synology. The SRV1KI-E wants to run this weird program called PowerChute.

Anyone have success marrying this into the Synology ecosystem?

It also has an RS-232 serial port; I wonder if there's an off-the-shelf device that would speak serial to it but report power state via the network or USB.

23

Check out our open-source, language-agnostic mutation testing tool using LLM agents here: https://github.com/codeintegrity-ai/mutahunter

Mutation testing is a way to verify the effectiveness of your test cases. It involves creating small changes, or "mutants," in the code and checking whether the test cases catch them. Unlike line coverage, which only tells you how much of the code has been executed, mutation testing tells you how well it's been tested. We all know line coverage is BS.
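
To make the mechanics concrete, here is the idea in its crudest hand-rolled form (file path and test command are hypothetical):

    # Flip one operator (a "mutant"), re-run the suite, then restore the file.
    # If the tests still pass, the mutant "survived" and the suite is weak.
    sed -i 's/>=/>/' src/pricing.py
    pytest && echo "mutant survived: no test exercises the boundary"
    git checkout -- src/pricing.py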

That's where Mutahunter comes in. We leverage LLMs to inject context-aware faults into your codebase. As the first AI-based mutation testing tool, Mutahunter builds a full contextual understanding of the entire codebase using the AST, enabling it to identify and inject mutations that closely resemble real vulnerabilities. This ensures comprehensive and effective testing, significantly enhancing software security and quality. We also make use of LiteLLM, so we support all major self-hosted LLM models.

We’ve added examples for JavaScript, Python, and Go (see /examples). It can theoretically work with any programming language that provides a coverage report in Cobertura XML format (more supported soon) and has a language grammar available in TreeSitter.

Here’s a YouTube video with an in-depth explanation: https://www.youtube.com/watch?v=8h4zpeK6LOA

Here’s our blog with more details: https://medium.com/codeintegrity-engineering/transforming-qa-mutahunter-and-the-power-of-llm-enhanced-mutation-testing-18c1ea19add8

Check it out and let us know what you think! We’re excited to get feedback from the community and help developers everywhere improve their code quality.

24

So this is an interesting one I can't figure out myself. I have Proxmox on a PowerEdge R730 with 5 NICs (4 + management). The management interface is doing its own thing so don't worry about that. Currently I have all 4 other interfaces bonded and bridged to a single IP. This IP is for my internal network (192.168.1.0/24, VLAN 1). This has been working great. I have no issues with any containers on this network. One of those containers happens to be one of two FreeIPA replicas, the other living in the cloud. I have had no issues using DNS or anything else for FreeIPA from this internal network nor from my cloud network or VPN networks.

Now, I finally have some stuff I want to toss in my DMZ network (192.168.5.0/24, VLAN 5), so I'll just use my nice R730 for that too, right? Nope! I can get internet, I can even use the DNS server normally, but the second I go near my FreeIPA domains it all falls apart. For instance, I can get the records for example.local just fine, but the second I request ipa.example.local or ds.ipa.example.local, I get EDE 22: No Reachable Authority. This is despite the server being queried being the authority for this zone. I can query the same internal DNS server from either the same internal network or a different network and it works handy-dandy, but not from the R730 on another network. I can't even see the NS glue records on my public DNS root server.

I'm honestly not sure why everything except these FreeIPA domains works. Yes, I have the firewall open for it, and I have added a trusted_networks ACL to BIND and allowed queries, recursion, and query cache for this ACL. The fact that it only breaks on these FreeIPA subdomains makes me think it's a forwarding issue, but shouldn't it see the NS records and keep going? It can ping all the addresses that might come up from DNS, and it shows the same SOA when I query the root domain; it just refuses to work for my IPA domains. Can someone provide any insight on this, please? I'm sick and tired of trying to debug it.
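
A couple of dig probes from the VLAN 5 host could narrow down whether delegation or forwarding is eating the answer (server IP and zone names below are placeholders):

    # Ask the internal DNS server directly, with recursion disabled, to see
    # what it holds authoritatively for the subzone:
    dig @192.168.1.10 ipa.example.local SOA +norecurse
    # Check that the parent zone actually serves the delegation (NS records):
    dig @192.168.1.10 ipa.example.local NS

If the NS records come back from the parent but the subzone query dies, the forwarding path is the suspect; if the NS records are missing, the delegation is.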

25
submitted 2 weeks ago* (last edited 2 weeks ago) by oranki@lemmy.world to c/selfhosted@lemmy.world

cross-posted from: https://lemmy.world/post/17087912

Protonmail relies solely on Firebase for receiving notifications on Android. While UnifiedPush support is probably in the works, it may take some time until users on ROMs without GSF get built-in notifications.

For those who already use ntfy.sh as a push provider for other apps, https://github.com/0ranki/hydroxide-push is a solution for getting push notifications when new mail arrives in the Inbox.

The service requires a Linux box to run on, and can be deployed as a container or by running the provided binary. Building from source is of course also an option.

The service is a stripped-down version of Hydroxide, the FOSS Protonmail Bridge alternative. No ports are exposed; all communication is outbound. Communication with Proton servers uses the Proton API. The service only receives events from Proton servers, and if an event is incoming mail, a notification is sent to the ntfy.sh server and topic of your choice. Other types of events are simply disregarded, and no other processing is done. The push event sent does not contain any detailed information.

EDIT: Starting from version v0.28.8-push7 the daemon supports HTTP basic auth for the push endpoint.

Disclaimer: I'm the author. All of the work is thanks to https://github.com/emersion/hydroxide; I've merely stripped the great upstream project of most of its features for a single purpose. Issues, comments and pull requests are welcome!

view more: next ›