This post was submitted on 31 Oct 2024

r/Apple: Unofficial Apple Community

 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/apple by /u/kloolegend on 2024-10-30 22:26:58+00:00.


The announcement says it can run models of up to 200B parameters. Meanwhile, my M1 Max with 64GB (maxed-out chip) struggles with 70B Llama models. How did Apple benchmark this? Are they lowering the quantization to run larger models?
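
For rough context on the quantization question (these numbers are a back-of-the-envelope estimate, not from the announcement): the memory needed for the weights alone is roughly parameter count times bits per weight. A minimal Python sketch, assuming only the weights are counted and ignoring KV cache and runtime overhead:

```python
# Rough weight-memory math for running an LLM locally.
# Assumption: only model weights are counted; KV cache, activations,
# and runtime overhead (often several extra GB) are ignored.

def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory for the weights alone, in decimal GB."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

for params in (70, 200):
    for bits in (16, 8, 4, 3):
        print(f"{params}B @ {bits:>2}-bit: ~{weight_memory_gb(params, bits):.0f} GB")
```

By this math a 70B model needs about 140 GB at 16-bit but only about 35 GB at 4-bit, which is why a 64GB M1 Max can run it only at aggressive quantization levels; a 200B model would still need on the order of 100 GB of weights even at 4-bit, so any claim of running that size on-device implies very heavy quantization or substantially more unified memory.
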

Good source to check out:

no comments (yet)