this post was submitted on 05 Nov 2025
571 points (99.0% liked)


"I've been saving for months to get the Corsair Dominator 64GB CL30 kit," one beleagured PC builder wrote on Reddit. "It was about $280 when I looked," said u/RaidriarT, "Fast forward today on PCPartPicker, they want $547 for the same kit? A nearly 100% increase in a couple months?"

[–] brucethemoose@lemmy.world 26 points 3 days ago* (last edited 3 days ago) (4 children)

I just got a 2x64GB 6000 kit before its price skyrocketed by like $130. I saw other kits going up, but had no clue I timed it so well.

...Also, why does "AI" need so much CPU RAM?

In actual server deployments, pretty much all inference work is done in VRAM (read: HBM/GDDR); they could get by with almost no system RAM. And honestly most businesses are too dumb to train anything that extensively. ASICs that would use, say, LPDDR are super rare, and stuff like Hybrid/IGP inference is the realm of a few random folks with homelabs... Like me.

I think 'AI' might be an overly broad term for general server buildout.

[–] tty5@lemmy.world 20 points 3 days ago* (last edited 3 days ago) (1 children)

The same memory production capacity can be allocated to DDR5 or to HBM, and OpenAI signed contracts with SK Hynix and Samsung, the two largest RAM manufacturers in the world, buying up a significant percentage of next year's production.

DDR5 prices started spiking as that deal's impact propagated through the supply chain. I bought a 2x32GB 6800 CL30 kit for 195 euro 12 days ago. It was 330 euro 4 days later.

[–] brucethemoose@lemmy.world 4 points 3 days ago* (last edited 3 days ago) (1 children)

...Is it that interchangeable?

TBH I know little of memory fabs and HBM ICs, but I know (say) TSMC can't just switch from a power-optimized process to a high frequency one at the drop of a hat.

[–] tty5@lemmy.world 8 points 3 days ago (1 children)

Slightly different part, same process. The bigger bottleneck is packaging - HBM is 3d stacked.

[–] brucethemoose@lemmy.world 3 points 3 days ago (1 children)

Ah. Yeah. And it's on the fab to do that.

I always thought it'd be cool for CPUs to switch to packaged RAM, too. Samsung apparently tried to do it with Wide I/O for mobile ARM stuff, but it never caught on.

[–] frezik@lemmy.blahaj.zone 1 points 3 days ago (1 children)

If I'm following what you mean by packaged RAM, Apple does that. It's fast, but you can't upgrade it.

[–] brucethemoose@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

That's (as I understand it) a misconception.

Apple attaches their laptop RAM the same way all smartphones do. It's a wide bus with LPDDR, which makes it an unusual configuration amongst laptops, but it's technically conventional. And relatively cheap.

AMD's Strix Halo chips are the same. Apple could use LPCAMM to make the memory upgradable if they wanted, they just... don't.

When we talk 'packaging', we're talking about putting chips on advanced substrates with denser wiring than you could possibly get on a motherboard (or a 'mini' motherboard, which is kinda what Apple/smartphone RAM is mounted on), the kind of thing silicon fabs have to do themselves:

https://www.tsmc.com/english/dedicatedFoundry/services/advanced-packaging

And HBM falls into this bucket. The way it's hooked up to the processor is physically different from PC RAM sticks, or Apple's RAM. This is mostly not done on consumer stuff because it's very expensive, and most of TSMC's advanced packaging production capacity is reserved for server stuff.

[–] Kissaki@feddit.org 5 points 3 days ago* (last edited 3 days ago) (1 children)

I suspect RAM may become increasingly useful with the shift from pure chat LLMs to connected agents, MCP, and caching results and data for scaling things like public Internet search and services.

When I think of database server software, a lot of the performance gains come from keeping frequently used data in RAM. With the expansion of LLM systems and their concerns, backing data, connectedness, and need for optimisation, a shift to caching and keeping data in RAM seems to suggest itself. These systems are already wasteful/big and operate on a lot of data, so it seems plausible that it would not be a small cache.
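
A minimal sketch of the kind of in-RAM result cache I mean, in Python; `call_llm` is just a stand-in for whatever inference backend or API sits behind it:

```python
from functools import lru_cache

# Stand-in for the real model call (local inference, remote API, agent step, ...).
def call_llm(prompt: str) -> str:
    return f"response to: {prompt}"

# Keep recent results resident in RAM. Repeated prompts (e.g. common search or
# agent queries) are answered from memory instead of re-running inference.
@lru_cache(maxsize=100_000)
def cached_answer(prompt: str) -> str:
    return call_llm(prompt)
```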

[–] brucethemoose@lemmy.world 1 points 3 days ago

Yeah, exactly... In other words, 'general server buildout.'

[–] just_an_average_joe@lemmy.dbzer0.com 8 points 3 days ago (1 children)

There was a recent-ish model, Qwen Next, that was advertised as something that can be run entirely in RAM.

[–] brucethemoose@lemmy.world 16 points 3 days ago* (last edited 3 days ago) (1 children)

They can ALL be run in RAM, theoretically. I bought 128GB so I can run GLM 4.5 with the experts offloaded to the CPU, with a custom trellis/K-quant mix; but this is a 'personal use' tinkerer setup that basically no one but hobbyists will touch.
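
To make "experts offloaded to CPU" concrete, here's a rough sketch of that kind of hybrid launch via llama.cpp's llama-server, driven from Python. The model filename is a placeholder, and the --override-tensor pattern is just the common "pin MoE experts to system RAM" trick, not my exact quant mix:

```python
import subprocess

# Rough sketch of a hybrid CPU/GPU launch: dense/attention weights go to VRAM,
# the MoE expert tensors stay in system RAM. Paths and patterns are illustrative.
subprocess.run([
    "llama-server",
    "-m", "GLM-4.5-Q4_K_M.gguf",                  # placeholder quantized model file
    "--n-gpu-layers", "999",                      # offload everything the GPU can hold...
    "--override-tensor", r"\.ffn_.*_exps\.=CPU",  # ...but keep MoE experts in CPU RAM
    "--ctx-size", "32768",
    "--port", "8080",
], check=True)
```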

Qwen Next is good at that because it has a very low active parameter count.

...But they aren't actually deployed that way. They're basically always deployed on cloud GPU boxes that serve dozens/hundreds of people at once, in parallel.

AFAIK the only major model actually developed for CPU inference is one of the esoteric Gemma releases, aimed at mobile. And the bitnet experiments, which aren't very big so far.

(In case it's not obvious, this is my special interest, and I'm happy to ramble on about how to set up 'niche gaming rig hybrid models' for anyone interested).

[–] Passerby6497@lemmy.world 4 points 3 days ago (3 children)

I for one would enjoy triggering your unskippable cutscenes about setting up local CPU-based AI, if it can work on Linux with an older AMD card.

Don't have funds for anything fancy, but I would be interested in playing around with it. Been wanting to get something like that set up for Home Assistant.

[–] brucethemoose@lemmy.world 4 points 3 days ago* (last edited 3 days ago) (2 children)

Plenty of folks do AMD. A popular homelab setup is 32GB AMD MI50 GPUs, which are quite cheap on eBay. Even Intel is fine these days!

But what's your setup, precisely? CPU, RAM, and GPU.

[–] Passerby6497@lemmy.world 1 points 2 days ago (1 children)

Looks like I'm running an AMD Ryzen 5 2600 CPU, AMD Radeon RX 570 GPU, and 32GB RAM

[–] brucethemoose@lemmy.world 1 points 2 days ago* (last edited 2 days ago) (1 children)

4GB VRAM

Mmmmm... I would wait a few days and try a GGUF quantization of Kimi Linear once it's better supported: https://huggingface.co/moonshotai/Kimi-Linear-48B-A3B-Instruct

Otherwise you can mess with Qwen 3 VL now, in the native llama.cpp UI. But be aware that Qwen is pretty sycophantic like ChatGPT: https://huggingface.co/unsloth/Qwen3-VL-30B-A3B-Instruct-GGUF/blob/main/Qwen3-VL-30B-A3B-Instruct-UD-Q4_K_XL.gguf

If you're interested, I can work out an optimal launch command. But to be blunt, with that setup, you're kinda better off using free LLM APIs with a local chat UI.
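
For reference, the "local chat UI + API" route and a local llama-server both speak the same OpenAI-style HTTP interface, so the client side looks roughly like this; base URL, API key, and model name are placeholders to swap for a hosted provider's values or your local server's:

```python
from openai import OpenAI

# Works against a local llama.cpp server (llama-server --port 8080) or any
# OpenAI-compatible hosted API; base_url, api_key, and model are placeholders.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")

reply = client.chat.completions.create(
    model="qwen3-vl-30b-a3b-instruct",  # local servers often ignore/alias this name
    messages=[{"role": "user", "content": "Summarize why DDR5 prices are spiking."}],
)
print(reply.choices[0].message.content)
```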

[–] Passerby6497@lemmy.world 1 points 1 day ago

Thanks for the info. I would like to run locally if possible, but I'm not opposed to using API and just limiting what I surface.

[–] afk_strats@lemmy.world 2 points 3 days ago

I have an MI50/7900 XTX gaming/AI setup at home which I use for learning and to test out different models. Happy to answer questions.

[–] ag10n@lemmy.world 3 points 3 days ago (2 children)
[–] Passerby6497@lemmy.world 1 points 1 day ago

Only got 4GB VRAM, unfortunately

[–] brucethemoose@lemmy.world 3 points 3 days ago* (last edited 3 days ago)

The key is which model, and how.

For the really sparse MoEs, you might be better off trying ik_llama.cpp, especially if you are targeting a 'small' quant. But the dense Gemma models (as good as they are) are probably not the best choice for 8G RAM these days.

[–] SabinStargem@lemmy.today 1 points 3 days ago

If you just want an easy way to set up AI on Windows or Linux, KoboldCPP is my recommendation for your backend. It supports the GGUF format, which allows you to use both RAM and VRAM simultaneously. It won't be the fastest thing, but it is easy enough to set up, with a bundled GUI for prep and actual usage. Through the IP address it gives, you can hook the backend into a frontend of your choice.

KoboldCPP
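
As a sketch of that last step (assuming KoboldCPP's default port of 5001 and its standard generate endpoint; verify against your version's docs), hooking a script or frontend into the backend looks something like:

```python
import requests

# Minimal client for a running KoboldCPP backend (default: http://localhost:5001).
# Endpoint and fields follow the Kobold generate API as I understand it.
resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={"prompt": "Explain GGUF in one sentence.", "max_length": 120},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```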

[–] humanspiral@lemmy.ca 1 points 2 days ago

why does “AI” need so much CPU RAM

It doesn't, really, though CPU inference is possible (if slow) at 256+GB. The problem is that they are making HBM ("AI" RAM) instead of DDR4/5.