submitted 1 week ago by Lugh to c/futurology
[-] Lugh 2 points 1 week ago* (last edited 1 week ago)

No one seems much nearer to fixing LLMs' problems with hallucinations and errors. A recent DeepMind attempt to tackle the problem, called SAFE, merely gets the AI to check facts more carefully against external sources. No one seems to have any solution to the deeper problem of giving AI genuine logic and reasoning abilities. Even if Microsoft builds its $100 billion Stargate LLM-AI, will it be of much use without them?
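For context, SAFE roughly works by splitting a long response into atomic claims and rating each one against search results. A minimal sketch of that pipeline, with all helper names hypothetical and the LLM/search steps stubbed out with toy logic:

```python
# Sketch of a SAFE-style factuality check: split a response into atomic
# claims, then rate each claim against an external source. All names are
# illustrative; the real pipeline uses an LLM to extract/revise claims
# and Google Search as the external source.

def split_into_claims(response: str) -> list[str]:
    # Stand-in for the LLM step that extracts self-contained atomic facts.
    return [s.strip() for s in response.split(".") if s.strip()]

def supported_by_source(claim: str, knowledge: set[str]) -> bool:
    # Stand-in for issuing search queries and judging the results.
    return claim in knowledge

def rate_response(response: str, knowledge: set[str]) -> dict[str, bool]:
    # Map each extracted claim to a supported/unsupported verdict.
    return {c: supported_by_source(c, knowledge)
            for c in split_into_claims(response)}

# Toy run against a fixed "external source":
facts = {"Paris is the capital of France"}
ratings = rate_response(
    "Paris is the capital of France. The Moon is made of cheese", facts)
print(ratings)
```

The point is that this whole loop only *checks* claims after the fact; nothing in it makes the model reason better in the first place.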

The likelihood is AGI will come via a different route.

So many people are building robots that the idea these researchers talk about - embodied cognition - will be widely tested. But it may be just as likely that the path to AGI is something else, as yet undiscovered.

this post was submitted on 01 Apr 2024
24 points (96.2% liked)

