this post was submitted on 01 Feb 2024
28 points (91.2% liked)
Futurology
1812 readers
225 users here now
founded 1 year ago
How would someone go about running these things locally?
LM Studio seems like the easiest option at this point.
llama.cpp, depending on your hardware.
Download a model from Hugging Face and run a command.
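As a minimal sketch of that workflow (the model name and file are examples, and this assumes you've built llama.cpp from its repo and installed the `huggingface-cli` tool):

```shell
# Grab a quantized GGUF model from Hugging Face
# (repo and filename here are illustrative -- pick any GGUF model)
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-GGUF \
  mistral-7b-instruct-v0.2.Q4_K_M.gguf --local-dir models

# Run it with llama.cpp's CLI binary
./llama-cli -m models/mistral-7b-instruct-v0.2.Q4_K_M.gguf \
  -p "Explain quantization in one sentence." -n 128
```

Smaller quantizations (e.g. Q4_K_M) trade some quality for fitting in less RAM/VRAM, which is usually the deciding factor on consumer hardware.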