this post was submitted on 14 Apr 2024
273 points (91.7% liked)

Futurology

[–] FaceDeer@kbin.social 14 points 8 months ago (1 children)

Yeah, so many people are confidently stating "LLMs can't think like humans do!" when we're actually still pretty unclear on how humans think.

Sure, an LLM on its own may not be an AGI. But they're remarkably closer than we would have predicted they could get just a few years ago, and it may well be that we just need to add a bit more "special sauce" (memory, prompting strategies, perhaps a couple of parallel LLMs that specialize in different types of reasoning) to get them over the hump. At this point a lot of the research isn't going into simply "make it bigger!"; it's going into "use LLMs smarter."
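To make the "special sauce" idea concrete, here's a toy sketch of the pattern being described: a router dispatching to specialist models, with a crude memory feeding prior turns back in as context. Everything here is hypothetical illustration; the specialist functions are stand-ins for real LLM calls, not any actual API.

```python
# Toy sketch of "router + specialist models + memory" (all names hypothetical).
# Real systems would replace these stub functions with LLM API calls.

def math_specialist(prompt: str) -> str:
    # Stand-in for an LLM prompted/tuned for step-by-step math reasoning.
    return f"[math] worked answer for: {prompt}"

def writing_specialist(prompt: str) -> str:
    # Stand-in for an LLM prompted/tuned for open-ended prose.
    return f"[writing] draft for: {prompt}"

memory: list[str] = []  # crude long-term memory: prior turns replayed as context

def answer(prompt: str) -> str:
    # Route on the raw prompt; in practice this routing step could itself
    # be an LLM call rather than a digit heuristic.
    specialist = (math_specialist if any(ch.isdigit() for ch in prompt)
                  else writing_specialist)
    context = " | ".join(memory[-3:])  # keep only the last few turns
    full_prompt = f"{context} :: {prompt}" if context else prompt
    reply = specialist(full_prompt)
    memory.append(f"Q: {prompt} A: {reply}")
    return reply
```

The point isn't the routing heuristic (which is deliberately dumb here), but that none of these additions change the underlying model; they're orchestration layered on top, which is roughly where a lot of current effort is going.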