this post was submitted on 11 Nov 2025
18 points (100.0% liked)

Linux
This is my first real dive into hosting a server beyond a few Docker containers in my NAS. I've been learning a lot over the past 5 days, first thing I learned is that Proxmox isn't for me:

https://sh.itjust.works/post/49441546
https://sh.itjust.works/post/49272492
https://sh.itjust.works/post/49264890

So now I'm running headless Ubuntu and having a much better time! I migrated all of my Docker stuff to my new server, keeping my media on the NAS. I originally set up an NFS share (NAS->Server) so my Jellyfin container could snag the data. This worked at first, quickly crumbled without warning, and HWA may or may not be working.

Enter the Jellyfin issue: transcoded playback (and direct, doesn't matter) either gives a "fatal player error" or **extremely** slow, stuttery playback (basically unusable). Many Discord exchanges later, I added an SMB share (same source folder, same destination folder) to troubleshoot, to no avail, and Jellyfin-specific problems have been ruled out.

After about 12 hours of `sudo nano /etc/fstab` and `dd if=/path/to/nfs_mount/testfile of=/dev/null bs=1M count=4096 status=progress`, I've found some weird results from transferring the same 65GB file between different drives:

NAS's HDD (designated media drive) to NAS's SSD = 160 MB/s
NAS's SSD to Ubuntu's SSD = 160 MB/s
NAS's HDD to Ubuntu's SSD = 0.5 MB/s
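Spelled out, that read benchmark looks like this (the testfile path is a placeholder for whatever sits on the NFS mount), with a small made-up helper to turn bytes-and-seconds into whole MiB/s:

```shell
# Sequential read over the NFS mount; the testfile path is a placeholder.
# Guarded so the sketch is harmless to run where the mount doesn't exist.
[ -f /mnt/hermes/testfile ] && \
  dd if=/mnt/hermes/testfile of=/dev/null bs=1M count=4096 status=progress || true

# Helper: bytes transferred and elapsed seconds -> whole MiB/s.
throughput_mib() {
  echo $(( $1 / $2 / 1048576 ))
}
throughput_mib 4294967296 27   # 4 GiB in 27 s -> prints 151
```

Anything in the low hundreds of MiB/s is plausible for a Gigabit link plus an HDD; fractions of a MiB/s point at the network path, not the drive.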

Both machines are connected with Cat 7a Ethernet straight to the router. I built the cables myself, tested them many times (including yesterday), and my cable tester says all cables involved are perfectly fine. I've rebooted both machines probably fifty times by now.

NAS (Synology DS923+):
- 32GB RAM
- Seagate EXOS X24
- Samsung SSD 990 EVO

Ubuntu:
- Intel i5-13500
- Crucial DDR5-4800 2x32GB
- WD SN850X NVMe

If you were tasked with troubleshooting a slow mount bind between these two machines, what would you do to improve the transfer speeds? Please note that I cannot SSH into the NAS, I just opened a ticket with Synology about it.

Here's the current /etc/fstab after extensive Q&A with different online communities:

NFS mount: 192.168.0.4:/volume1/data /mnt/hermes nfs4 rw,nosuid,relatime,vers=4.1,rsize=13>

SMB mount: //192.168.0.4/data /mnt/hermes cifs username=_____,password=_______,vers=3.>
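For comparison, complete fstab entries of this shape typically look like the following. The option values are illustrative examples only, not a reconstruction of the truncated lines above (and a cifs credentials file is shown instead of an inline password, which is the usual recommendation):

```
# illustrative only -- values are examples, not the truncated originals
192.168.0.4:/volume1/data  /mnt/hermes  nfs4  rw,nosuid,relatime,vers=4.1,rsize=1048576,wsize=1048576,hard  0  0
//192.168.0.4/data         /mnt/hermes  cifs  credentials=/etc/cifs-creds,vers=3.0,uid=1000,gid=1000  0  0
```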

[–] LazerDickMcCheese@sh.itjust.works 2 points 2 days ago (1 children)

Interesting. I've been using Tailscale for years, and this is the first I've heard of it causing LAN networking problems. I thought the purpose of Tailscale was to establish a low-maintenance VPN for people who won't/can't set up a reverse proxy, especially for beginners like myself. Later today I'll try to clear it out and report back.

[–] just_another_person@lemmy.world 6 points 2 days ago (1 children)

Tailscale is for point-to-point connections between locations, so yes, a VPN. That doesn't mean two machines on a local network should be using it to talk to each other. Let me explain a bit:

Say you have two machines on two different networks 100 miles apart. You put those two on Tailscale, that virtual interface sends traffic through their servers and figures out the routing, and then they can talk to each other...cool.

Now move those two machines to the same network and what happens? Tailscale sends their traffic out through that same virtual interface and THEN brings it back into the network. Sure, they can still sort of talk to each other, but you're bypassing your local network entirely. Doesn't make any sense.

This is because of "default routes". Whenever you plug a machine into a network with a router, that router sends along information on where the machine needs to send its traffic to get routed properly (usually whatever your home router is). This is the default route.

Once you bring up the Tailscale interface without proper routing for your local networks taken into account, it sets your default route for Tailscale endpoints, meaning all of your traffic first goes out through Tailscale, and you get what you're seeing here.
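One quick way to see whether that's happening (a sketch: `tailscale0` is Tailscale's usual interface name, 192.168.0.4 is the NAS IP from the post, and the helper function is made up):

```shell
# Check whether the default route, or the route to the NAS, goes through
# tailscale0 instead of the physical NIC. Guarded so the sketch runs even
# on machines without the `ip` tool or without that route.
command -v ip >/dev/null && ip route show || true
command -v ip >/dev/null && ip route get 192.168.0.4 || true

# Hypothetical helper: pull the device name out of `ip route get` output.
route_dev() {
  awk '{ for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1) }'
}
echo '192.168.0.4 dev enp2s0 src 192.168.0.10 uid 1000' | route_dev   # -> enp2s0
```

If `ip route get 192.168.0.4` names `tailscale0` rather than your Ethernet interface, LAN traffic to the NAS is taking the detour described above.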

Regardless of what you read around and on Reddit, Tailscale is not as simple as it seems, especially if you don't know networking basics. It's meant to be used with exit node endpoints that route to a larger number of machines to prevent issues like this, NOT as a client in every single machine you want to talk to each other. I see A LOT of foolish comments around here where people say they install it on all of their local machines, and they don't know what they are doing.

To my point: read this issue to see someone with similar problems, but make sure to read through the dupe issue linked for a longer discussion over the past number of years.

Extra thread here explaining some things.

This blog goes deeper into a possible solution for your setup.

The simplest solution for Linux is usually just making sure NOT to run tailscaled as root, just as your local user. This should prevent the global override of your machine's default routes in most cases, but not all.

The proper and more permanent solution is running Tailscale on your router and letting that single device act as an exit node and handle advertising the proper routes to clients through the DERP server translation.

Also, see the netcheck docs as it can help quickly debug if things are working properly or not.

[–] LazerDickMcCheese@sh.itjust.works 1 points 1 day ago (1 children)

Great answer, thank you. To your point, I tried to disable the Tailscale service on my Ubuntu machine and the consequences were bad enough that I'm going to try to avoid Tailscale as much as possible. Disabling it also shut down OpenSSH, so I had to go to the machine with a keyboard and monitor (gross). Re-ran iperf3...while still a bit lower than I'd expect, I don't think I have any room to complain here all things considered.
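For anyone following along, a minimal iperf3 check between the two machines looks like this (the NAS IP comes from the post; the conversion helper is made up), including the arithmetic for what a given link speed can actually deliver:

```shell
# Raw TCP throughput test: run the server side on one machine, the client
# on the other. Shown as comments since both ends must be live to run it.
command -v iperf3 >/dev/null && echo "iperf3 available" || true
#   on one machine:            iperf3 -s
#   on the other (NAS is .4):  iperf3 -c 192.168.0.4

# Rough conversion: link speed in Mbit/s -> max file throughput in MiB/s.
mbit_to_mib() {
  echo $(( $1 * 1000000 / 8 / 1048576 ))
}
mbit_to_mib 1000   # gigabit link -> prints 119
mbit_to_mib 100    # 100 Mbit link -> prints 11
```

So ~110 MiB/s on a Gigabit link is already near the ceiling; the drive itself is rarely the bottleneck here.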

[–] just_another_person@lemmy.world 1 points 23 hours ago* (last edited 23 hours ago) (1 children)

Now it looks correct. If you have a Gigabit-capable switch/router and 100Mbps seems wrong, you should check the negotiated link speed on your Ethernet interface with something like `ethtool <your_interface> | grep Speed`.

100Mbps is obviously low if you have a Gigabit router. Either way, you should have your Jellyfin setup working without issue now.
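A concrete version of that check (`enp2s0` is a placeholder interface name; find yours with `ip link`), plus a small made-up helper to pull the number out of ethtool's output:

```shell
# Show the negotiated link speed; guarded in case ethtool or the
# placeholder interface is absent on this machine.
command -v ethtool >/dev/null && ethtool enp2s0 2>/dev/null | grep Speed || true

# Hypothetical helper: extract the numeric Mb/s from a "Speed:" line.
link_mbps() {
  grep -o '[0-9]\+' | head -n1
}
printf 'Speed: 1000Mb/s\n' | link_mbps   # -> 1000
```

A reading of 100 here despite Gigabit hardware usually points at a bad cable pair or a port stuck in autonegotiation fallback.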

[–] LazerDickMcCheese@sh.itjust.works 1 points 23 hours ago (2 children)

"Speed: 1000Mb/s". I was under the impression that my HDD (Seagate EXOS) would be roughly double that with some to spare.

Sad to report it is not working. Instead I'm getting a different error every time I try to play media

[–] ryannathans@aussie.zone 1 points 20 hours ago (1 children)

Does the other end also say 1000Mb/s? Something is limiting it to 100Mb/s

[–] LazerDickMcCheese@sh.itjust.works 1 points 19 hours ago (1 children)

"The other end"? As in my NAS? Because I can't check that machine due to lack of SSH

[–] ryannathans@aussie.zone 1 points 15 hours ago (1 children)

Does the hardware specification note a 100Mb or Gigabit port?

[–] LazerDickMcCheese@sh.itjust.works 1 points 11 hours ago (1 children)
[–] ryannathans@aussie.zone 1 points 4 hours ago

Irrelevant here, it's speed that's cooked

[–] just_another_person@lemmy.world 1 points 20 hours ago (1 children)

Depending on how this all was built and configured while Tailscale was running, you may need to take some steps to "undo" some things, like re-mounting your network mounts with the proper IPs (auto discovery may have messed things up).

What are the errors you're getting?

Oh sorry, nothing network-related (as far as my novice ass can tell). I'm talking about my Jellyfin containers. Tons of excuses related to things that haven't changed