Running AI is so expensive that Amazon will probably charge you to use Alexa in future, says outgoing exec
(www.businessinsider.com)
Oh wait, I think I misunderstood. I thought you had local language models running on your computer. I've seen that discussed before, with varying results.
The last time I tried running my own model was in the early days of the Llama release, on an RTX 3060. Responses came back much slower than from OpenAI's API, and the output was way off.
It doesn't have to be perfect, but I'd rather have my remote devices phone home and make API calls against my own server instead of OpenAI's. Using my own documents as a reference would be a plus too, just to keep my info private while still keeping it accessible to the LLM.
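For reference, something like this is roughly all the "phoning home" would take. A rough sketch, assuming you run llama.cpp's server or Ollama at home, since both expose an OpenAI-compatible chat completions endpoint; the hostname, port, and model name below are placeholders, not a real setup:

```python
import requests

# Placeholder address: point this at whatever box runs your model at home.
HOME_SERVER = "http://my-home-box.example:8080/v1/chat/completions"

def ask(prompt: str) -> str:
    # POST in the standard OpenAI chat-completions shape; llama.cpp's
    # server and Ollama both accept this request schema.
    resp = requests.post(
        HOME_SERVER,
        json={
            "model": "llama-3",  # placeholder: whatever model the server has loaded
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Summarize my meeting notes."))
```

Since the request shape matches OpenAI's, most client libraries can be pointed at it just by swapping the base URL, so the remote device doesn't need to know it isn't talking to OpenAI.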
Didn't know about Elevenlabs. Checking them out soon.
Edit because writing is hard.
That could be fun! I’ve built and trained my own models before, but I find that getting the right amount of data (in terms of both size and diversity, so the features are orthogonal out of the gate) can be pretty tough.
If you don’t get that balance of size and diversity right, the ceiling on efficacy is going to be way lower than you’d like. But you might have some good datasets lying around, I got no clue ^_^
Lemmy know how it goes!