this post was submitted on 24 Jul 2024
Futurology
It seems finding more data to scale up LLMs is a bottleneck too.
Yeah, that's part of why I think that. There's also the alignment issue, which no amount of training will fix. At the end of the day, an LLM is a very smart internet simulator: treat it as something else at your peril, and training it to be something else is very much an open problem.