this post was submitted on 31 Aug 2025
44 points (90.7% liked)

Privacy

Yo yo!

I’ve been working on making my life more private and need some assistance picking suitable replacement options. Please let me know what you think of my list, or if there are any opportunities for improvement! Here’s where I’m at …

Apple Maps
- OsmAnd Maps. Seems like a good option, but it’s not ready out of the box; I need to do more tweaking with it.
- Magic Earth. Haven’t tested it yet; seems good, but I’m looking for free options first before I dabble with paid stuff.

AI (ChatGPT)
- Lumo. ChatGPT is really good, but I understand it’s good because they siphon data illegally, so I’m OK “downgrading” when switching AIs. Lumo seems pretty good so far. I can tell it’s not as advanced, but it will do me fine for what I need. Also, I assume once I pay for Lumo Pro it will be more “powerful”.
- Maple AI. Seems dope; I also like the pay model: pay for what you use over “x” amount of inquiries. Does anyone know how knowledgeable/powerful it is?
- Local AI or Ollama. These two are beyond my knowledge. I don’t understand how I’d run these on my own server. If you know anything about these, please ELI5.

Google Docs
- OnlyOffice. Seems like it does everything I want.
- CryptPad. Just heard of this today; need to explore more. Seems dope, but it doesn’t have an app? From what I’ve seen, definitely a strong contender.

Photo App (I haven’t looked into any of these yet)
- Proton Drive
- Ente Photos
- Immich

Google Drive
- Proton Drive

[–] mierdabird@lemmy.dbzer0.com 4 points 5 days ago (1 children)

Using Ollama depends a lot on the equipment you run it on; you should aim for at least 12 GB of VRAM/unified memory to run models. I have one copy running in a Docker container using the CPU on Linux and another running on the GPU of my Windows desktop, so I can give install advice for either OS if you'd like.
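For reference, the Docker route looks roughly like this (container name and model choice are just examples; the GPU variant assumes an NVIDIA card with the NVIDIA Container Toolkit installed):

```shell
# CPU-only Ollama container (works anywhere Docker runs)
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# NVIDIA GPU variant: same image, just pass the GPU through
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and chat with a model inside the running container (llama3.2 is an example)
docker exec -it ollama ollama run llama3.2
```

The named volume `ollama` keeps downloaded models around across container restarts, and port 11434 is the API other apps talk to.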

[–] BlackSnack@lemmy.zip 3 points 5 days ago (2 children)

I definitely need some advice for self hosting! I literally have no idea what I’m doing. I have a raspberry pi and another user said that may be enough to get started.

Could you share some videos or links or blogs that explain how to get started?

[–] irmadlad@lemmy.world 2 points 4 days ago* (last edited 4 days ago) (1 children)

I definitely need some advice for self hosting!

Great SelfHosting resource: https://lemmy.world/c/selfhosted

I self-host a lot of the services I use. It's cost-effective and educational at the same time. The RPi is a good starting point; when you outgrow it, repurpose it as a Pi-Hole. Personal VPS servers are quite affordable if you know where to look. Do some poking around and be sure to ask questions. We were all noobs at something at some point, and all knowledge and wisdom starts with a single question... so don't be afraid to ask it.

[–] BlackSnack@lemmy.zip 2 points 4 days ago (1 children)

Nice! Good sublemmy to follow! (Is sublemmy the right word)

Thanks for the tips! I just started playing around with ollama so I think the self hosting route is next.

[–] irmadlad@lemmy.world 1 points 3 days ago

(Is sublemmy the right word)

Never heard it before but it does sound appropriate.

[–] mierdabird@lemmy.dbzer0.com 1 points 4 days ago

So I googled it, and if you have a Pi 5 with 8 GB or 16 GB of RAM it is technically possible to run Ollama, but the speeds will be excruciatingly slow. My Nvidia 3060 12 GB will typically run 14b (billion-parameter) models at around 11 tokens per second; this website shows a Pi 5 only runs an 8b model at 2 tokens per second, so each query will literally take 5-10 minutes at that rate:
Pi 5 Deepseek
It also shows you can get a reasonable pace out of the 1.5b model but those are whittled down so much I don't believe they're really useful.
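To put those token rates in context, here's the back-of-envelope math (assuming a typical chat reply runs somewhere around 600-1200 tokens, which is just a ballpark):

```shell
# Minutes to generate a reply at the Pi 5's ~2 tokens/second:
echo $(( 600 / 2 / 60 ))    # short reply: 5 minutes
echo $(( 1200 / 2 / 60 ))   # long reply: 10 minutes

# Same replies on a 3060 at ~11 tokens/second:
echo $(( 600 / 11 ))        # short reply: ~54 seconds
echo $(( 1200 / 11 ))       # long reply: ~109 seconds
```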

There are lots of lighter-weight services you can host on a Pi though. I highly recommend an app called Cosmos Cloud; it's really an all-in-one solution for building your own self-hosted services:
- It has its own reverse proxy, like Nginx or Traefik, including Let's Encrypt security certificates, URL management, and incoming-traffic security features.
- It has an excellent UI for managing Docker containers and a large catalog of prepared docker-compose files to spin up services with the click of a button.
- It has more advanced features you can grow into, like an OpenID SSO manager, your own VPN, and disk management/backups.
It's still very important to read the documentation thoroughly and expect occasional troubleshooting will be necessary, but I found it far, far easier to get working than a previous Nginx/Docker/Portainer setup I used.
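If it helps, Cosmos itself installs as a single container. The command below is the general shape from memory (the image name and config path are assumptions; verify both against the official install docs before running):

```shell
# The docker.sock mount is what lets Cosmos manage your other containers;
# the /config volume holds its persistent settings (host path is an assumption).
docker run -d \
  --name cosmos-server \
  --restart unless-stopped \
  --network host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/cosmos:/config \
  azukaar/cosmos-server:latest
```

Once it's up, the setup wizard in the web UI walks you through the reverse proxy and certificate configuration.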