toxuin

joined 2 years ago
[–] toxuin@lemmy.ca 14 points 1 day ago (1 children)

How to be innocent (not really): Step 1. Insist that users absolutely MUST use your cloud server for every login to a self-hosted tool on their own hardware. Step 2. Have shit security. Step 3. Oh wow, now users’ data is all over your systems! Hackers clap their hands and do a happy dance. Step 4. Send out a “we’re sowwy” email.

[–] toxuin@lemmy.ca 53 points 1 day ago (1 children)

Keep in mind that the only reason they deny you the ability to log in to your own local service with your own local sign-in method is so they can upsell you on their cloud junk. If no cloud account were involved, your data would not have been at risk or leaked. They endangered your privacy for marketing purposes.

If you have not moved off of Plex yet, do it now. This company is fully rotten.

The email they sent out has a reply-to address that conveniently does not work…

[–] toxuin@lemmy.ca 25 points 9 months ago

It’s not the model that would be sending telemetry, it’s the runtime you load it into. Ollama is an open-source wrapper around llama.cpp, so (if you have enough patience) you could inspect the source code to be sure.

Regarding running it in a sandbox: you could, and it generally adds no tangible overhead to tokens-per-second performance, but keep in mind that giving the model runtime (ollama, vllm and the like) access to your GPU usually requires some sandbox concessions, like PCIe passthrough for VMs or running Nvidia’s proprietary container runtime plugin. From my measurements, there is zero difference in performance between running a model loaded on a GPU on bare metal, in a Docker container with the Nvidia container runtime, or in a Proxmox VM with PCIe passthrough. The model executes on the GPU itself and barely uses any CPU (sampling and LoRAs are usually CPU operations).

vLLM does collect anonymized usage stats. Since it’s open source, you can actually see what’s being sent (spoiler: it’s pretty boring). As far as I know, Ollama has nothing like that. None of the open-source engines I know of send your full prompts or responses anywhere, though. That doesn’t mean they’ll stay that way forever, or that you should be less vigilant, though 👍
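
For the vLLM part, opting out is just a couple of environment variables, and the engine keeps a local copy of what it would report so you can read it yourself. Rough sketch below; the variable names and file location come from vLLM’s usage-stats docs, so double-check them against whatever version you’re running:

```python
import os
from pathlib import Path

# Opt out of vLLM's anonymized usage stats before the engine starts.
# (Names taken from vLLM's usage-stats docs; verify against your version.)
os.environ["VLLM_NO_USAGE_STATS"] = "1"
os.environ["DO_NOT_TRACK"] = "1"

# vLLM also writes the payload locally, so you can see exactly what it
# would have sent (hardware/model metadata, not prompts or responses).
stats_file = Path.home() / ".config" / "vllm" / "usage_stats.json"
if stats_file.exists():
    print(stats_file.read_text())
```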