Although I'm using AI more and more for writing-related tasks, I still find it constantly making rudimentary errors of logic. If it is advancing as this research paper claims, why are we still seeing so many of these hallucination errors?
I mean, the research could be true while the AI is merely reaching the reasoning level of a houseplant or a bug, and streaming randomized garbage the rest of the time.
That would still be a promising sign of progress, even if it's of no current practical use.
(Edit: which I guess is pretty often what scientific progress looks like.)