nagaram

joined 2 years ago
[–] nagaram@startrek.website 4 points 1 week ago (1 children)

I think I'm going to have a harder time fitting a threadripper in my 10 inch rack than I am getting any GPU in there.

[–] nagaram@startrek.website 2 points 1 week ago

I do already have a NAS. It's in another box in my office.

I was considering replacing the Pis with a JBOD enclosure and passing that through to one of my boxes via USB and virtualizing something. I compromised by putting 2TB SATA SSDs in each box to use for database stuff and then backing that up to the spinning rust in the other room.

How do I do that? Good question. I take suggestions.
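If I were scripting that backup myself, the simplest thing I can think of is a nightly rsync push from each box's SSD to the NAS. A minimal sketch (the hostname, user, and paths are all made up, and it assumes rsync plus SSH keys are already set up on both ends):

```python
#!/usr/bin/env python3
"""Mirror the local SSD database dir to the NAS (hypothetical names throughout)."""
import subprocess

SRC = "/mnt/ssd/databases/"                      # 2TB SATA SSD in this box
DEST = "backup@nas.lan:/tank/backups/minirack/"  # spinning rust in the other room

# -a preserves perms/times, -z compresses over the wire,
# --delete mirrors removals so the NAS copy matches the SSD
subprocess.run(["rsync", "-az", "--delete", SRC, DEST], check=True)
```

Drop that in a cron job or systemd timer on each box and you at least have an off-box copy; snapshots or restic on top would get you actual versioning.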

[–] nagaram@startrek.website 5 points 1 week ago (2 children)

With an RTX 3060 12GB, I have been perfectly happy with the quality and speed of the responses. It's much slower than my 5060 Ti, which I think is the sweet spot for text-based LLM tasks. A larger context window provided by more VRAM or a web-based AI is cool and useful, but I haven't found the need for that yet in my use case.

As you may have guessed, I can't fit a 3060 in this rack. That's in a different server that houses my NAS. I have done AI on my 2018 Epyc server CPU and it's just not usable. Even with 109GB of RAM, not usable. Even clustered, I wouldn't try running anything on these machines. They are for Docker containers and Minecraft servers. Jeff Geerling probably has a video on trying to run an AI on a bunch of Raspberry Pis. I just saw his video using Ryzen AI Strix boards, and that was ass compared to my 3060.

But for my use case, I am just asking AI to generate simple scripts based on manuals I feed it, or some sort of writing task. I either get it to take my notes on a topic and make an outline that makes sense and then I fill it in, or I feed it finished writing and ask for grammar or tone fixes. That's fucking it, and it boggles my mind that anyone is doing anything more intensive than that. I am not training anything, and 12GB of VRAM is plenty if I wanna feed it like 10-100 pages of context. Would it be better with a 4090? Probably, but for my uses I haven't noticed a difference in quality between my local LLM and the web-based stuff.
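For the curious, the "generate a script from a manual" loop is nothing fancy. A minimal sketch of the kind of call I mean, assuming an Ollama server running on the 3060 box (the host, model name, and manual file are placeholders):

```python
import json
import urllib.request

# Placeholders: adjust the host, model, and manual path to taste.
manual = open("switch_manual.txt").read()
payload = json.dumps({
    "model": "llama3.1:8b",
    "prompt": f"Using this manual:\n{manual}\n\nWrite a bash script that backs up the device config.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://nas-box.lan:11434/api/generate",  # Ollama's generate endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

That's the whole workflow: paste the manual in, get a script back.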

[–] nagaram@startrek.website 10 points 1 week ago (1 children)

That's fair and justified. I have the label maker right now in my hands. I can fix this at any moment and yet I choose not to.

I'm the man feeding orphans to the orphan-crushing machine. I can stop this at any moment.

[–] nagaram@startrek.website 7 points 1 week ago

Oh, and my home office setup uses Tiny-in-One monitors, so I configured these by plugging them into my monitor, which was sick.

I'm a huge fan of this all-in-one idea that's still upgradable.

[–] nagaram@startrek.website 12 points 1 week ago

These are M715q ThinkCentres with a Ryzen 5 Pro 2400GE.

[–] nagaram@startrek.website 5 points 1 week ago

Honestly, not a lot of thought went into the rack choice. I wanted something smaller and more powerful than the several OptiPlexes I had.

I also decided I didn't want storage to happen here anymore, because I am stupid and only knew how to pass through disks for TrueNAS. So I had four TrueNAS servers on my network, and I hated it.

This was just what I wanted at a price I was good with, like $120. There's a 3D-printable version, but I wasn't interested in that. I do want to 3D print racks, though, and I want to make my own custom ones for the Pis to save space.

But this setup is way cheaper if you have a printer and some patience.

[–] nagaram@startrek.website 11 points 1 week ago (2 children)

Not much. As much as I like LLMs, I don't trust them for more than rubber duck duty.

Eventually I want to have a Copilot-at-Home setup where I can feed it a notes database and whatever manuals and books I've read, so it can draw from that when I ask it questions.

The problem is my best GPU is my gaming GPU, a 5060 Ti, and it's in a Bazzite gaming PC, so it's hard to get the AI out of it because of Bazzite's "No, I won't let you break your computer" philosophy, which is why I chose it in the first place. And my second-best GPU is a 3060 12GB, which is really good, but if I made a dedicated AI server, I'd want it to be better than my current one.
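When I do build the Copilot-at-Home box, the plan is basically retrieval-augmented generation. A toy sketch of the idea, with dumb keyword-overlap retrieval standing in for real embeddings (the notes directory, model, and host are all assumptions):

```python
import json
import urllib.request
from pathlib import Path

def top_chunks(question: str, notes_dir: str, k: int = 3) -> list[str]:
    """Score note paragraphs by word overlap with the question; real RAG would embed them."""
    q_words = set(question.lower().split())
    scored = []
    for f in Path(notes_dir).expanduser().glob("**/*.md"):
        for para in f.read_text(errors="ignore").split("\n\n"):
            scored.append((len(q_words & set(para.lower().split())), para))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [para for _, para in scored[:k]]

question = "What BIOS settings did I change on the M715q?"  # example query
prompt = ("Answer using only these notes:\n"
          + "\n---\n".join(top_chunks(question, "~/notes"))
          + f"\n\nQuestion: {question}")

payload = json.dumps({"model": "llama3.1:8b", "prompt": prompt, "stream": False}).encode()
req = urllib.request.Request("http://localhost:11434/api/generate", data=payload,
                             headers={"Content-Type": "application/json"})
print(json.loads(urllib.request.urlopen(req).read())["response"])
```

Swap the keyword scoring for an embedding model and a vector store and that's most of what the commercial "chat with your docs" products are doing.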

[–] nagaram@startrek.website 1 points 1 week ago

Rough. It was a great handle. Fortunately, he still pops up when you search "sexuallobster".

Gone but not forgotten!

[–] nagaram@startrek.website 1 points 1 week ago

I just built a mini rack with 3 ThinkCentre Tiny PCs I bought for $175 (USD) on eBay. All work great.

[–] nagaram@startrek.website 6 points 1 week ago (3 children)

SINCE WHEN DID SEXUAL LOBSTER CHANGE HIS NAME TO GREASY TALES

[–] nagaram@startrek.website 13 points 1 week ago

Alternative:

*looks inside* The absolute least educated read you've ever seen
