Posted to Lemmy.World Announcements on 01 Jul 2023.

Looks like it works.

Edit: still seeing some performance issues. Needs more troubleshooting.

Update: registrations re-opened. We encountered a bug where people could not log in (see https://github.com/LemmyNet/lemmy/issues/3422#issuecomment-1616112264). As a workaround, we opened registrations.

Thanks

First of all, I would like to thank the Lemmy.world team and two admins from other servers, @stanford@discuss.as200950.com and @sunaurus@lemm.ee, for their help! We did some thorough troubleshooting to get this working!

The upgrade

The upgrade itself isn't too hard: create a backup, then change the image names in docker-compose.yml and restart.
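
For reference, a minimal sketch of what that change can look like in docker-compose.yml. The service layout and the exact rc tag are assumptions, and the real file also contains the database, pictrs, and so on:

```yaml
# Sketch of the relevant docker-compose.yml services (the rest of the real file,
# such as postgres and pictrs, is omitted; the exact rc tag is an assumption).
services:
  lemmy:
    image: dessalines/lemmy:0.18.1-rc.9      # bump the backend image tag
    restart: always
  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.1-rc.9   # keep the UI on the matching tag
    restart: always
```

After editing, `docker compose pull` followed by `docker compose up -d` pulls the new images and recreates the containers.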

But, like the first two tries, after a few minutes the site started getting slow until it stopped responding. Then the troubleshooting started.

The solutions

What I had noticed previously is that the lemmy container could reach around 1500% CPU usage, and above that the site got slow. That is weird, because the server has 64 threads, so 6400% should be the maximum. So we tried what @sunaurus@lemm.ee had suggested before: we created extra lemmy containers (and extra lemmy-ui containers) to spread the load, and used nginx to load-balance between them.
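
A minimal sketch of what such an nginx setup could look like, assuming default Lemmy ports; the upstream names, container names (lemmy-1, lemmy-ui-1, and so on), and the simplified routing are invented for illustration, not the actual lemmy.world configuration:

```nginx
# Hypothetical load-balancing sketch; lemmy-1..3 and lemmy-ui-1..2 are
# invented container names, and 8536/1234 are the default Lemmy ports.
upstream lemmy-backend {
    server lemmy-1:8536;
    server lemmy-2:8536;
    server lemmy-3:8536;
}

upstream lemmy-frontend {
    server lemmy-ui-1:1234;
    server lemmy-ui-2:1234;
}

server {
    listen 80;
    server_name lemmy.world;

    # Simplified routing: API/federation-style paths go to the backend pool,
    # everything else to the UI pool.
    location ~ ^/(api|pictrs|feeds|nodeinfo|\.well-known) {
        proxy_pass http://lemmy-backend;
        proxy_set_header Host $host;
    }

    location / {
        proxy_pass http://lemmy-frontend;
        proxy_set_header Host $host;
    }
}
```

nginx's default round-robin is enough here; each request just needs to land on one of several backend processes instead of everything going through a single container.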

Et voilà. That seems to work.

Also, as he suggested, we start the lemmy containers with the scheduler disabled, and run one extra lemmy container with the scheduler enabled, which isn't used for anything else.
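
Roughly, that gives a layout like the sketch below. The service names are invented, and the post doesn't say how the scheduler is switched on or off, so that part is only described in comments rather than with a concrete flag or variable:

```yaml
# Hypothetical container layout (names are made up).
services:
  lemmy-worker-1:      # scheduler disabled; listed in the nginx backend upstream
    image: dessalines/lemmy:0.18.1-rc.9
  lemmy-worker-2:      # scheduler disabled; listed in the nginx backend upstream
    image: dessalines/lemmy:0.18.1-rc.9
  lemmy-scheduler:     # scheduler enabled; not in any upstream, handles no user traffic
    image: dessalines/lemmy:0.18.1-rc.9
```

The key point is that the scheduler instance is kept out of the request path entirely.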

There is still room for improvement, and probably new bugs, but we're very happy lemmy.world is now on 0.18.1-rc, which fixes a lot of bugs.

Comments
tmpod@lemmy.pt (1 point, 1 year ago):

Awesome work!

I'd like to know more about the exact container topology you have, since I may try something similar on my instance as well.
Is it something like this?

┌───┐       ┌───┐
│WEB│       │WEB│
└─┬─┘       └─┬─┘
┌─┴─┐ ┌───┐ ┌─┴─┐
│BE ├─┤IMG├─┤BE │
└─┬─┘ ├───┤ └─┬─┘
  └───┤DB ├───┘
      └───┘

Thank you! :3

BitOneZero@lemmy.world (1 point, 1 year ago):

> Edit: still seeing some performance issues. Needs more troubleshooting.

Federation overhead is putting a lot of load on servers: it creates one task for every single post, comment, and vote in a RAM-only queue. Pending changes: https://github.com/LemmyNet/lemmy/pull/3466