this post was submitted on 02 Jul 2025
271 points (94.4% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] Blaster_M@lemmy.world 4 points 2 weeks ago (1 children)

Local LLMs run pretty well on a 12GB RTX 2060. Those cards are pretty cheap, if a bit rare now.
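
For context, this is roughly what running one looks like with llama-cpp-python; the model file, quant, and settings here are placeholder assumptions, not a specific recommendation:

```python
# Rough sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path and settings are placeholders -- pick any GGUF quant
# that fits in ~12GB of VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # ~4-5GB quantized weights (assumed)
    n_gpu_layers=-1,   # -1 = offload every layer to the GPU; fits easily in 12GB
    n_ctx=4096,        # context window; more context means more VRAM for the KV cache
)

out = llm("Q: Why do people run models locally?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```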

[–] Valmond@lemmy.world -1 points 2 weeks ago (1 children)

So 12GB is what you need?

Asking because my 4GB card clearly doesn't cut it 🙍🏼‍♀️

[–] Blaster_M@lemmy.world 0 points 2 weeks ago (1 children)

A 4GB card can run smol models. Bigger ones need an nvidia card plus lots of system RAM to spill over into, and performance gets proportionally worse the more of the model ends up in system RAM instead of VRAM.
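
To put rough numbers on that VRAM/RAM balance, here's a back-of-the-envelope sketch (the sizes are assumptions, not measurements):

```python
# Back-of-the-envelope split for a quantized model on a 4GB card.
# Numbers are rough assumptions (~Q4 is about 0.55 bytes/param with overhead).
params_billions = 7          # e.g. a 7B model
bytes_per_param = 0.55       # ~Q4_K_M quantization, incl. overhead (assumed)
model_gb = params_billions * bytes_per_param       # ~3.9 GB of weights
vram_gb = 4.0
reserve_gb = 1.0             # KV cache + runtime overhead (assumed)

fits_in_vram = max(vram_gb - reserve_gb, 0)        # ~3 GB of layers on the GPU
spills_to_ram = max(model_gb - fits_in_vram, 0)    # ~0.9 GB served from system RAM
print(f"{fits_in_vram:.1f} GB on GPU, {spills_to_ram:.1f} GB offloaded to system RAM")
```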

[–] theunknownmuncher@lemmy.world 3 points 2 weeks ago

> require an nvidia

Big models work great on MacBooks, AMD GPUs, or AMD APUs with unified memory
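
A rough sketch of why unified memory changes the math (sizes are ballpark assumptions, not benchmarks):

```python
# A 70B model at ~Q4 is far bigger than any consumer card's VRAM, but on a
# unified-memory machine (Apple Silicon, AMD APU) the GPU shares the whole
# memory pool, so it can fit entirely. Rough numbers, not measurements.
params_billions = 70
bytes_per_param = 0.55                         # ~Q4 quantization incl. overhead (assumed)
model_gb = params_billions * bytes_per_param   # ~38.5 GB of weights
unified_memory_gb = 64                         # e.g. a 64GB MacBook (assumed)

print(f"Model ~{model_gb:.1f} GB; fits entirely in {unified_memory_gb} GB unified memory: "
      f"{model_gb < unified_memory_gb}")
```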