this post was submitted on 27 Jan 2025
135 points (93.0% liked)
you are viewing a single comment's thread
So I'm still on the fence about the AI arms race in general, but reading up on DeepSeek, it feels like they built a model specifically to score well on the benchmarks.
I say this because it uses a Mixture of Experts (MoE) architecture, so only a subset of the model is active at any given point. The potential drawback is weaker generalization.
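To make the "only parts of the model are used" point concrete, here's a minimal toy sketch of MoE-style routing (not DeepSeek's actual implementation — the expert functions, gating scores, and `top_k` value are all made-up illustration values): a gate scores each expert for a given input, and only the top-k experts run, with their outputs blended by softmax weight.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy "experts": each is just a linear function of the input.
# The weights are arbitrary illustration values, not real parameters.
experts = [lambda x, w=w: w * x for w in (0.5, 1.5, -1.0, 2.0)]

# Toy gating network: one score per expert, scaled by the input.
gate_weights = [0.1, 0.9, -0.3, 0.4]

def moe_forward(x, top_k=2):
    """Route input x to only the top_k highest-scoring experts."""
    scores = [g * x for g in gate_weights]
    # Pick the top_k experts; the others stay inactive (sparse activation).
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:top_k]
    weights = softmax([scores[i] for i in top])
    # Blend only the selected experts' outputs.
    return sum(w * experts[i](x) for w, i in zip(weights, top))
```

With `top_k=2` here, only 2 of the 4 experts ever run per input, which is the efficiency win — and also why critics worry about generalization: no single expert sees every kind of input.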
Additionally, it isn't a multimodal model, and the only place I've seen real opportunity for workflow automation is with multimodal models. I guess you could chain a combination of models together, but that's definitely a step back from the grand promise of these foundation models.
Overall, I'm just not sure if this is lay people getting caught up in hype or actually a significant change in the landscape.
To be fair, I'm pretty sure that's what everyone is doing. If you're not measuring against something, there's no way to tell if you're doing anything at all.
My point was that a Mixture of Experts model could suffer on generalization. Although, reading more, I'm not sure whether it's the newer R1 model that has the MoE element.