this post was submitted on 16 Jun 2025
111 points (94.4% liked)

Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


I've tried using them for coding, and every one I've tried fails at anything beyond really, really basic small functions, the kind you write as a newbie. Compare that to, say, 4o mini, which can spit out more sensible stuff that actually works.

I've tried asking for explanations, and they just regurgitate sentences that are irrelevant or wrong, or they get stuck in a loop.

So, what can I actually use a small LLM for? Which ones? I ask because I have an old laptop whose GPU can't really handle anything above 4B in a timely manner. 8B runs at about 1 t/s!

[–] herseycokguzelolacak@lemmy.ml 3 points 1 day ago (1 children)

For coding tasks you need web search and RAG. It's not the size of the model that matters, since even the largest models find their solutions online.

[–] catty@lemmy.world 1 points 1 day ago (1 children)

Any suggestions for solutions?

[–] herseycokguzelolacak@lemmy.ml 1 points 6 hours ago

Not off the top of my head, but there must be something. llama.cpp and vllm have basically solved the inference problem for LLMs. What you need is a RAG solution on top that also combines it with web search.
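
As a rough sketch of the inference half (assuming a llama.cpp `llama-server` instance on localhost:8080 with some GGUF model loaded; the model name and prompt below are placeholders), talking to it from Python through its OpenAI-compatible endpoint looks something like this:

```python
# Minimal sketch: query a local llama.cpp server (llama-server) through its
# OpenAI-compatible API. vLLM and Ollama expose similar endpoints.
# Assumes something like `llama-server -m model.gguf --port 8080` is running.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; the server answers with whatever model it loaded
    messages=[{"role": "user",
               "content": "Write a bash one-liner that lists files larger than 100 MB."}],
)
print(resp.choices[0].message.content)
```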

[–] RickyRigatoni@retrolemmy.com 4 points 1 day ago

I have it roleplay scenarios with me and sometimes I verbally abuse it for fun.

[–] irmadlad@lemmy.world 7 points 1 day ago

As cool and neato as I find AI to be, I haven't really found a good use case for it in the selfhosting/homelabbing arena. Most of my equipment is ancient and lacking the GPU necessary to drive that bus.

[–] surph_ninja@lemmy.world 4 points 1 day ago

Learning/practice, and any use that feeds in sensitive data you want to keep on-prem.

Unless you’re set to retire within the next 5 years, the best reason is to keep your resume up to date with some hands-on experience. With the way they’re trying to shove AI into every possible application, there will be few (if any) industries untouched. If you don’t start now, you’re going to be playing catch up in a few years.

[–] HelloRoot@lemy.lol 39 points 2 days ago* (last edited 2 days ago)

Sorry, I'm just gonna dump some links from my bookmarks that are related and interesting to read, because I'm traveling and have to get up in a minute, but I've been interested in this topic for a while. All of the links discuss at least some use cases. For some reason Microsoft is really into tiny models and has made big breakthroughs there.

https://reddit.com/r/LocalLLaMA/comments/1cdrw7p/what_are_the_potential_uses_of_small_less_than_3b/

https://github.com/microsoft/BitNet

https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/

https://news.microsoft.com/source/features/ai/the-phi-3-small-language-models-with-big-potential/

https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090

[–] some_guy@lemmy.sdf.org 16 points 2 days ago (1 children)

I installed Llama. I've not found any use for it. I mean, I've asked it for a recipe because recipe websites suck, but that's about it.

[–] GreenKnight23@lemmy.world 42 points 2 days ago

you can do a lot with it.

I heated my office with it this past winter.

[–] iii@mander.xyz 23 points 2 days ago (2 children)

Converting free text to standardized formats such as JSON.
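
As a rough sketch of that use case (same kind of local OpenAI-compatible server assumed on localhost:8080; the field names are just an example schema, and a real setup would want a JSON grammar or `response_format` constraint where the server supports one):

```python
# Sketch: extract structured JSON from free text with a small local model.
# Assumes an OpenAI-compatible server (llama.cpp, vLLM, Ollama, ...) on localhost:8080.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

text = "Order #4521 from Jane Doe: 3x USB-C cables, ship to 12 Main St, Springfield."

resp = client.chat.completions.create(
    model="local-model",
    temperature=0,
    messages=[
        {"role": "system",
         "content": "Extract order_id, customer, items and address from the text. "
                    "Reply with a single JSON object and nothing else."},
        {"role": "user", "content": text},
    ],
)

order = json.loads(resp.choices[0].message.content)  # raises if the model strays from JSON
print(order["customer"])
```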

[–] ikidd@lemmy.world 9 points 2 days ago (4 children)

It'll work for quick bash scripts and one-off things like that. But there's not usually enough context window unless you're using a 24 GB GPU or such.

[–] catty@lemmy.world 3 points 1 day ago

Yeah, shell scripts are one of those things where you never remember how to do something and always have to look it up!

[–] MTK@lemmy.world 12 points 2 days ago (1 children)

Have you tried RAG? I believe small models are actually pretty good at searching and compiling content pulled in through RAG.

So in theory you could connect it to all of your local documents and use it for quick questions. Or maybe connect it to your Signal/WhatsApp/SMS chat history to ask questions about past conversations.

[–] catty@lemmy.world 4 points 2 days ago (1 children)

No, what is it? How do I try it?

[–] MTK@lemmy.world 13 points 2 days ago (1 children)

RAG is basically like telling an LLM "look here for more info before you answer" so it can check out local documents to give an answer that is more relevant to you.

Just search for "open web ui rag" and you'll find plenty of explanations and tutorials.
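
If you just want to see the shape of the idea, here's a minimal sketch (hypothetical documents, an illustrative embedding model, and an assumed local OpenAI-compatible server on localhost:8080; real stacks like Open WebUI add chunking, a vector store, and re-ranking on top):

```python
# Minimal RAG sketch: embed local documents, retrieve the closest ones for a
# question, and paste them into the prompt of a local model.
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

docs = [
    "The backup job runs nightly at 02:00 and writes to /mnt/backups.",
    "Nextcloud is reverse-proxied through Caddy on port 443.",
    "The Jellyfin box is the old ThinkCentre under the stairs.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedding model
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def ask(question: str, k: int = 2) -> str:
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(doc_vecs @ q_vec)[::-1][:k]  # cosine similarity via dot product
    context = "\n".join(docs[i] for i in top)
    resp = client.chat.completions.create(
        model="local-model",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("When do backups run?"))
```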

[–] iii@mander.xyz 3 points 2 days ago* (last edited 2 days ago) (1 children)

I think RAG will be surpassed by LLMs in a loop with tool calling (aka agents), with search being one of the tools.
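
A toy sketch of that loop, with the search backend stubbed out (the 'SEARCH:' prompt convention is just for illustration; a real setup would wire in SearXNG or a search API, and many servers also support structured tool calling):

```python
# Toy agent loop: the model either answers or asks for a web search,
# and the search result is fed back into the conversation.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def web_search(query: str) -> str:
    return f"(stub results for: {query})"  # replace with a real search backend

def agent(question: str, max_steps: int = 3) -> str:
    messages = [
        {"role": "system",
         "content": "If you need up-to-date information, reply with exactly "
                    "'SEARCH: <query>'. Otherwise answer the question directly."},
        {"role": "user", "content": question},
    ]
    reply = ""
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="local-model", messages=messages
        ).choices[0].message.content
        if not reply.startswith("SEARCH:"):
            return reply
        results = web_search(reply.removeprefix("SEARCH:").strip())
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Search results:\n{results}"})
    return reply

print(agent("What's the newest stable Debian release?"))
```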

[–] interdimensionalmeme@lemmy.ml 4 points 2 days ago

LLMs that train LoRAs on the fly, then query themselves with the LoRA applied.

[–] swelter_spark@reddthat.com 6 points 2 days ago

7B is the smallest I've found useful. I'd try a smaller quant before going lower, if I had super small VRAM.

[–] entwine413@lemm.ee 14 points 2 days ago* (last edited 2 days ago) (6 children)

I've integrated mine into Home Assistant, which makes it easier to use its voice commands.

I haven't done a ton with it yet besides set it up, though, since I'm still getting proxmox configured on my gaming rig.

[–] Mordikan@kbin.earth 9 points 2 days ago (1 children)

I've used smollm2:135m for projects in DBeaver building larger queries. The box it runs on is Intel HD 530 graphics with an old i5-6500T processor. Doesn't seem to really stress the CPU.

UPDATE: I apologize to the downvoter for not masochistically wanting to build a 1000 line bulk insert statement by hand.

[–] HiTekRedNek@lemmy.world 2 points 1 day ago (1 children)

How, exactly, do you have Intel HD graphics, found on Intel APUs, on a Ryzen AMD system?

[–] Mordikan@kbin.earth 1 points 1 day ago

Sorry, I was trying to find parts for my daughter's machine while doing this (cheap Minecraft build). I corrected my comment.

[–] ragingHungryPanda@lemmy.zip 4 points 2 days ago (2 children)

I've run a few models that fit on my GPU. I don't think the smaller models are really good enough. They can do stuff, sure, but to get anything worthwhile out of them, I think you need the larger models.

They can be used for basic things, though. There are coder-specific models you can look at. DeepSeek and Qwen Coder are some popular ones.

[–] catty@lemmy.world 1 points 1 day ago

I haven't actually found the coder-specific ones to be much (if at all) better than the generic ones. I wish they were. Hopefully LLMs can become more efficient in the very near future.
