Thanks! I'll go check it out.
TheMightyCat
My target model is Qwen/Qwen3-235B-A22B-FP8, ideally at its maximum context length of 131K, but I'm willing to compromise. I find it hard to give a concrete t/s answer; let's put it around 50. At max load there would probably be around 8 concurrent users, but those situations will be rare enough that optimizing for a single user is probably more worthwhile.
My current setup is already: Xeon w7-3465X, 128 GB DDR5, 2x 4090.
It gets nice enough performance loading 32B models completely in VRAM, but I am skeptical that a similar system can run a 671B at anything faster than a snail's pace. I currently run vLLM because it has higher performance with tensor parallelism than llama.cpp, but I shall check out ik_llama.cpp.
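For reference, this is roughly how I run the 32B models across both 4090s today; a minimal sketch using vLLM's offline LLM API, where the exact model name, context length, and memory fraction are examples rather than a tuned config:

```python
from vllm import LLM, SamplingParams

# Minimal sketch of my 2x 4090 setup: shard the model across both cards
# with tensor parallelism. Values below are illustrative, not tuned.
llm = LLM(
    model="Qwen/Qwen3-32B-FP8",     # a 32B FP8 checkpoint that fits across 2x 24 GB
    tensor_parallel_size=2,          # split the weights across both 4090s
    gpu_memory_utilization=0.90,     # leave some VRAM headroom for the KV cache
    max_model_len=32768,             # context I actually use, not the 131K maximum
)

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Summarize tensor parallelism in one paragraph."], params)
print(outputs[0].outputs[0].text)
```

The same tensor_parallel_size knob is what I'd be scaling up on a bigger box, which is exactly where my scaling question below comes from.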
While I would still say it's excessive to respond with "😑", I was too quick in waving these issues away.
Another commenter explained that residential power physically can't supply enough to match high-end GPUs, which is why they could be worth it even for self-hosters.
Thanks. While I would still like to know the performance scaling of a cheap cluster, this does answer the question: pay way more for high-end cards like the H200 for greater efficiency, or pay less and have to deal with these issues.
- I know the more bandwidth the better, but I wonder how it scales. I can only test my own setup, which is less than optimal for this purpose with PCIe 4.0 x16 and no P2P, but it goes as follows: a single 4090 gets 40.9 t/s while 2 get 58.5 t/s using tensor parallelism, tested on Qwen/Qwen3-8B-FP8 with vLLM. I am really curious how this scales beyond 2 PCIe 5.0 cards with P2P, which all cards listed here except the 5090 support.
- The theory goes that while the H200 has a very impressive bandwidth of 4.89 TB/s, for the same price you can get 37 TB/s spread across 58 RX 9070s; whether this actually works in practice I don't know (rough numbers in the sketch after this list).
- I don't need to build a datacenter; I'm fine with building a rack myself in my garage. And I don't think that requires higher volumes than just purchasing at different retailers.
- I intend to run at FP8, so I wanted to show that instead of FP16, but it's surprisingly difficult to find the numbers for it. Only the H200 datasheet clearly lists "FP8 Tensor Core"; the RTX Pro 6000 datasheet keeps it vague, only mentioning "AI TOPS", which they define as "Effective FP4 TOPS with sparsity"; and they didn't even bother writing a datasheet for the 5090, only saying "3352 AI TOPS", which I suppose is FP4 then. The AMD datasheets only list FP16 and INT8 matrix, and whether INT8 matrix is equivalent to FP8 I don't know. So FP16 was the common denominator for all the cards I could find without comparing apples to oranges.
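To make the first two bullets concrete, here is the back-of-the-envelope arithmetic I'm working from. The measured numbers are from my own 2x 4090 test; the per-card bandwidth figures and the "58 cards per H200" ratio are assumptions pulled from public spec sheets and rough prices, not measurements:

```python
# Measured tensor-parallel scaling on my box (Qwen3-8B-FP8, vLLM, PCIe 4.0 x16, no P2P)
single_gpu_tps = 40.9
dual_gpu_tps = 58.5
scaling_efficiency = dual_gpu_tps / (2 * single_gpu_tps)
print(f"2-way TP scaling efficiency: {scaling_efficiency:.0%}")  # ~72% of perfect linear scaling

# Paper aggregate bandwidth for roughly the same money (assumed figures)
h200_bandwidth_tbs = 4.89        # one H200
rx9070_bandwidth_tbs = 0.64      # one RX 9070
rx9070_count = 58                # about one H200's price worth of RX 9070s
aggregate_tbs = rx9070_count * rx9070_bandwidth_tbs
print(f"Aggregate RX 9070 bandwidth: {aggregate_tbs:.1f} TB/s vs {h200_bandwidth_tbs} TB/s for one H200")

# The open question is how much of that aggregate survives once tensor parallelism
# has to cross PCIe instead of NVLink; the 1 -> 2 card jump above already loses ~28%.
```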
Well, a scam for self-hosters; for datacenters it's different, of course.
I'm looking to upgrade to my first dedicated server build, coming from only SBCs, so I'm not sure how much of a concern heat will be, but space and power shouldn't be an issue (within reason, of course).
Necessity is the mother of innovation; that is why the Chinese do have domestic manufacturing of processors and the EU doesn't.
What it will take, in my opinion, is American processors becoming unviably expensive (tariffs) or unavailable altogether (export controls) for the will/market for EU domestic processors to arise.
As long as Russia is fighting, China gets cheap oil.
Why do core counts and memory type matter when the table includes memory bandwidth and tflop16?
The H200 has HBM and a lot of tensor cores, which is reflected in its high stats in the table, and the AMD GPUs don't have CUDA cores.
I know a major deterioration is to be expected, but how major? Even in an extreme case where only 10% of the aggregate is usable, it's still competitive against the H200, since you get so much more raw hardware for the price, even if you can only use 10% of it.
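Rough numbers for what "only 10%" would actually mean, reusing the assumed figures from my earlier comment (aggregate memory bandwidth as a crude proxy for decode throughput, same-price premise of 58 RX 9070s per H200):

```python
# Crude sanity check: at what efficiency does the cheap cluster stop beating one H200?
h200_bandwidth_tbs = 4.89
aggregate_rx9070_tbs = 58 * 0.64          # ~37 TB/s on paper, same assumed budget

for efficiency in (1.00, 0.50, 0.25, 0.10):
    effective = aggregate_rx9070_tbs * efficiency
    print(f"{efficiency:>4.0%} usable -> {effective:5.1f} TB/s "
          f"({effective / h200_bandwidth_tbs:.1f}x an H200 for the same money)")

break_even = h200_bandwidth_tbs / aggregate_rx9070_tbs
print(f"Break-even efficiency vs one H200: ~{break_even:.0%}")   # around 13%
```

By these assumptions 10% lands just under parity rather than clearly ahead, which is exactly why I'd like real numbers on how the scaling curve behaves past 2 cards.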