ssebastianoo

joined 1 year ago
[–] ssebastianoo@programming.dev 1 points 4 months ago

wtf does that mean

[–] ssebastianoo@programming.dev 1 points 4 months ago* (last edited 4 months ago)

llama3:8b. I know it's "far from ideal", but only really specific use cases require more advanced models to run locally; for software development, graphic design, or video editing, 8GB is enough.

edit: just tried it again after some time and it works better than I remembered: showcase
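
Not part of the thread, just a minimal sketch of what "running ollama locally" looks like in practice: a single Python call against Ollama's default HTTP endpoint (localhost:11434) using the llama3:8b model mentioned above. The prompt string is purely illustrative.

```python
import json
import urllib.request

# Ask the locally running Ollama server (default port 11434) to generate a reply
# with the llama3:8b model; "stream": False returns a single JSON object.
payload = json.dumps({
    "model": "llama3:8b",
    "prompt": "Explain what a memory-mapped file is in one paragraph.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    # The non-streaming response carries the generated text in the "response" field.
    print(json.loads(resp.read())["response"])
```

This assumes Ollama is already serving and the model has been pulled (e.g. with `ollama pull llama3:8b`); the 8B model's quantized weights fit within an 8GB machine's unified memory, which is the point being made above.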

[–] ssebastianoo@programming.dev 1 points 4 months ago (2 children)

here you are

VS Code + Photoshop + Illustrator + Discord + Arc + Chrome + screen recording, and still no lag

[–] ssebastianoo@programming.dev -2 points 5 months ago (6 children)

I have a MacBook Air M2 with 8GB of RAM and I can even run Ollama; I've never had RAM problems. I don't get all the hate.

[–] ssebastianoo@programming.dev 5 points 1 year ago* (last edited 1 year ago)

factfulness