this post was submitted on 01 Feb 2024

Futurology

[–] A_A@lemmy.world 5 points 9 months ago* (last edited 9 months ago)

The source article has this title now:

Mistral CEO confirms ‘leak’ of new open source AI model nearing GPT-4 performance


Two excerpts:

Mistral co-founder and CEO Arthur Mensch took to X to clarify: “An over-enthusiastic employee of one of our early access customers leaked a quantised (and watermarked) version of an old model we trained and distributed quite openly…
To quickly start working with a few selected customers, we retrained this model from Llama 2 the minute we got access to our entire cluster — the pretraining finished on the day of Mistral 7B release. We’ve made good progress since — stay tuned!”

Quantization in ML (machine learning) refers to a technique that makes it possible to run certain AI models on less powerful computers and chips by replacing a model’s high-precision weights (e.g., 16-bit floating-point numbers) with lower-precision ones (e.g., 8-bit integers), shrinking its memory and compute requirements.
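As a rough illustration of the idea (a minimal sketch, not Mistral's actual quantization scheme), symmetric 8-bit quantization maps each floating-point weight to a small integer via a per-tensor scale, then recovers an approximation on the way back:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

# Toy "weights": 4 bytes each as float32, 1 byte each once quantized.
w = np.array([0.31, -1.20, 0.05, 0.77], dtype=np.float32)
q, s = quantize_int8(w)
w_approx = dequantize(q, s)
# Reconstruction error is bounded by about half the scale step.
print(np.max(np.abs(w - w_approx)))
```

Real schemes (e.g., the 4-bit GGUF formats used by llama.cpp) are more elaborate, quantizing in small blocks with per-block scales, but the storage-for-precision trade is the same.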

[–] LesserAbe@lemmy.world 3 points 9 months ago (2 children)

How would someone go about running these things locally?

[–] paddirn@lemmy.world 4 points 9 months ago

LM Studio seems like the easiest option at this point.

[–] GBU_28@lemm.ee 2 points 9 months ago

llama.cpp, built for your hardware.

Download a model from Hugging Face and run a command.
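For example (a hedged sketch: the repo and file names below are illustrative, and llama.cpp's binary names and flags change between releases):

```shell
# Download a quantized GGUF model from Hugging Face (example repo/file)
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-GGUF \
    mistral-7b-instruct-v0.2.Q4_K_M.gguf --local-dir models

# Run it with llama.cpp's CLI (built beforehand from source)
./llama-cli -m models/mistral-7b-instruct-v0.2.Q4_K_M.gguf \
    -p "Hello, how are you?" -n 128
```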