this post was submitted on 07 Oct 2024
7 points (100.0% liked)

Futurology

1 comment
[–] Lugh 4 points 1 month ago* (last edited 1 month ago)

The model family is "a new suite of state-of-the-art multimodal models trained solely with next-token prediction," BAAI writes. "By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences."
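For anyone wondering what "a single transformer trained on tokenized multimodal sequences" means in practice, here's a rough toy sketch of the idea, not BAAI's actual code. The tokenizers, vocabulary size, and model dimensions below are all made up for illustration; a real system would use a text tokenizer and a learned discrete visual tokenizer (e.g. a VQ-style model) sharing one vocabulary.

```python
# Toy sketch: map every modality to discrete token IDs in one shared vocabulary,
# then train a single decoder-only transformer with plain next-token prediction.
import torch
import torch.nn as nn

VOCAB_SIZE = 1024   # shared discrete vocabulary for all modalities (illustrative)
SEQ_LEN = 64

class TinyDecoder(nn.Module):
    """A toy decoder-only transformer over the shared token space."""
    def __init__(self, vocab_size, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(SEQ_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=256, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        b, t = tokens.shape
        x = self.embed(tokens) + self.pos(torch.arange(t, device=tokens.device))
        # causal mask: each position only attends to earlier tokens
        mask = torch.triu(torch.full((t, t), float("-inf"),
                                     device=tokens.device), 1)
        x = self.blocks(x, mask=mask)
        return self.head(x)

# Stand-in "tokenizers": placeholders for a real BPE text tokenizer and a
# learned discrete visual tokenizer emitting IDs from the same vocabulary.
def fake_text_tokens(n):  return torch.randint(0, VOCAB_SIZE // 2, (n,))
def fake_image_tokens(n): return torch.randint(VOCAB_SIZE // 2, VOCAB_SIZE, (n,))

# Build one mixed multimodal sequence and train with next-token prediction.
seq = torch.cat([fake_text_tokens(32), fake_image_tokens(32)]).unsqueeze(0)
model = TinyDecoder(VOCAB_SIZE)
logits = model(seq[:, :-1])                 # predict token t+1 from tokens <= t
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB_SIZE),
                                    seq[:, 1:].reshape(-1))
loss.backward()
print("next-token loss:", loss.item())
```

The point of the design is that one loss and one architecture cover text, images and video, so generation in any modality is just sampling the next discrete token.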

Every time it looks like closed Big Tech AI systems might steal a lead, open source turns out to be close behind, snapping at their heels. It seems the same story is now playing out with multimodal AI.