[–] ClamDrinker@lemmy.world 35 points 6 months ago* (last edited 6 months ago) (4 children)

It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn't exist in the physical world. Humans hallucinate too - all the time. It's just that our approximations are usually correct, so we don't call them hallucinations anymore. For example, the signals coming from our feet take longer to process than those from our eyes, so our brain has to predict information to create a coherent experience. It's also why we don't notice our blinks, or why we don't see the blind spot each of our eyes has.

AI, representing a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

Hallucinations shouldn't be treated like a bug. They are a feature - just not one the big tech companies wanted.

When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.

[–] GoodEye8@lemm.ee 12 points 6 months ago (1 children)

I think you're giving a glorified encyclopedia too much credit. The difference between us and "AI" is that we can approach knowledge from a problem-solving position. We do approximate the laws of physics, but we don't blindly take our beliefs and run with them. We come up with a theory that then gets rigorously criticized, come up with ways to test that theory, stay critical of the test results, and eventually reach a consensus that, based on our understanding, the thing is true. We've built entire frameworks to reduce our "hallucinations". The reason we even know we have blind spots is because we're so critical of our own "hallucinations" that we end up deliberately looking for them.

But the "AI" doesn't do that. It can't do that. The "AI" can't solve problems, it can't be critical of itself or what information its giving out. All our current "AI" can do is word vomit itself into a reasonable answer. Sometimes the word vomit is factually correct, sometimes it's just nonsense.

You are right that theoretically hallucinations cannot be solved, but in practice we ourselves have come up with ways to minimize them. We could probably do something similar with "AI", but not when the AI is just an LLM that fumbles its way into sentences.

[–] ClamDrinker@lemmy.world 4 points 6 months ago* (last edited 6 months ago) (2 children)

I'm not sure where you think I'm giving it too much credit, because as far as I can tell we already totally agree lol. You're right, methods exist to diminish the effect of hallucinations - that's what the scientific method is. Current AI has no physical body and can't run experiments to verify objective reality. It can't fact-check itself other than being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them with something probable - which is likely going to be bullshit.

All my point was is that truly fixing it would mean basically creating an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.

[–] Eranziel@lemmy.world 4 points 6 months ago* (last edited 6 months ago)

The fundamental difference is that the AI doesn't know anything. It isn't capable of understanding, and it doesn't learn in the same sense that humans learn. An LLM is a (complex!) digital machine that guesses the next most likely word based on essentially statistics, nothing more, nothing less.

It doesn't know what it's saying, nor does it understand the subject matter, or what a human is, or what a hallucination is or why it has them. They are fundamentally incapable of even perceiving the problem, because they do not perceive anything aside from text in and text out.
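
To make the "guesses the next most likely word" point concrete, here is a minimal toy sketch of next-token prediction. The vocabulary, probabilities, and function names are invented purely for illustration and have nothing to do with any real model's internals:

```python
import random

# Toy "language model": maps a short context to made-up next-word probabilities.
# Real LLMs learn distributions like this over tens of thousands of tokens;
# these numbers are invented purely to illustrate the mechanism.
NEXT_WORD_PROBS = {
    ("the", "sky", "is"): {"blue": 0.7, "clear": 0.2, "falling": 0.1},
    ("sky", "is", "blue"): {"today": 0.6, ".": 0.4},
}

def next_word(context, greedy=True):
    """Pick the next word for a context - the core step an LLM repeats over and over."""
    probs = NEXT_WORD_PROBS.get(tuple(context[-3:]), {"<unknown>": 1.0})
    if greedy:
        return max(probs, key=probs.get)  # take the single most likely word
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]  # or sample for variety

print(next_word(["the", "sky", "is"]))  # -> "blue", whether or not it's true
```

Nothing in that loop checks whether the output is true; "blue" wins simply because it is the most probable continuation, which is why a confident-sounding wrong answer (a hallucination) is the default failure mode rather than an anomaly.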

[–] GoodEye8@lemm.ee 2 points 6 months ago (1 children)

It doesn't need to verify reality; it needs to be internally consistent, and it's not.

For example, I was setting up a logging pipeline and one of the filters didn't work. There was seemingly nothing wrong with the configuration itself, and after some more tests with dummy data I was able to get it working, but it still didn't work with the actual input data. So I gave the working dummy example and the actual configuration to ChatGPT and asked why the actual configuration doesn't work. After some prompts going over what I had already tried, it ended up giving me the exact same configuration I had presented as the problem. Humans wouldn't (or at least shouldn't) make that error, because it would be internally inconsistent: the problem statement can't be the solution.

But the AI doesn't have internal consistency because it doesn't really think. It's not making sure what it's saying is logical based on the information it knows, it's not trying to make assumptions to solve a problem, and it can't even deduce that something true is actually true. All it can do is predict what we would perceive as the answer.

[–] bastion@feddit.nl 1 points 6 months ago* (last edited 6 months ago)

Indeed. It doesn't even trend towards consistency.

It's much like the pattern-matching layer of human consciousness. Its function isn't to filter for truth; its function is to match knowns and potentials to patterns in its environment.

AI has no notion of critical thinking. It is purely positive "thinking", in a technical sense - it is positing based on what it "knows", but there is no genuine concept of self, no critical thinking, not even a non-conceptual logic or consistency filter.

[–] KillingTimeItself@lemmy.dbzer0.com 6 points 6 months ago (1 children)

ok so to give you an um ackshually here.

Technically, if we were to develop a real artificial general intelligence, it would be limited to the knowledge that it has, but so is any given human. And its advantage would still be scale of operations compared to a human, since it could realistically operate on all known theoretical and practical information, whereas for a human that's simply not possible.

Though presumably it would also be influenced, to some degree, by the AI posting that we already have now; the question is how it responds to that, and how well it can tell the difference between that and real human posting.

The reason why hallucinations are such a big problem currently is simply that today's systems are literally predictive text models; they don't know anything. That simply wouldn't be true for a general artificial intelligence. Not that it couldn't hallucinate, but it wouldn't hallucinate to the same degree, and possibly with greater motives in mind.

A lot of the reason human biology tends to obfuscate certain things is simply the way it evolved, as well as the potential advantages in our lives. The reason we can't see our blind spots is that it would be much more difficult to process things otherwise. It's the same reason our eyesight is flipped, and the same reason pain is interpreted the way that it is.

A big mistake you are making here is stating that it must be fed information that it knows to be true; this is not inherently true. You can train a model on all of the wrong things to do; as long as it has the capability to understand this, it shouldn't be a problem.

For predictive models? This is probably the case, but you can also poison the well, so to speak, even when it comes to those.

[–] ClamDrinker@lemmy.world 1 points 6 months ago* (last edited 6 months ago) (1 children)

Yes, a theoretical future AI that is able to self-correct would eventually become more powerful than humans, especially if you could give it ways to run orders of magnitude more self-correcting mechanisms at the same time. But it would still be making ever-so-small assumptions whenever there is a gap in the information it has.

It could be humble enough to admit it doesn't know, but it can still be mistaken and think it has the right answer when it doesn't. It would feel nigh omniscient, but it would never truly be.

A roundtrip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there's no guarantee it didn't change in the milliseconds it took to become aware of it. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) also propagates no faster than the speed of light.
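
For what it's worth, that latency figure holds up as a rough back-of-the-envelope estimate (assuming light in glass fibre travels at roughly two thirds of c, and ignoring routing and processing overhead):

```python
# Rough sanity check on the fibre round-trip claim (approximate values).
EARTH_CIRCUMFERENCE_KM = 40_075      # equatorial circumference
FIBRE_SPEED_KM_PER_S = 200_000       # ~2/3 of c, typical for glass fibre

one_way_ms = EARTH_CIRCUMFERENCE_KM / FIBRE_SPEED_KM_PER_S * 1000
print(f"one way:    {one_way_ms:.0f} ms")      # ~200 ms
print(f"round trip: {one_way_ms * 2:.0f} ms")  # ~400 ms
```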

A big mistake you are making here is stating that it must be fed information that it knows to be true; this is not inherently true. You can train a model on all of the wrong things to do; as long as it has the capability to understand this, it shouldn't be a problem.

The dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it will never happen. And if it were incomplete, it would have to make assumptions at some point based on the incomplete data it has, which would open it up to being wrong, which we would call a hallucination.

[–] KillingTimeItself@lemmy.dbzer0.com 3 points 6 months ago (1 children)

It could be humble enough to admit it doesn't know, but it can still be mistaken and think it has the right answer when it doesn't. It would feel nigh omniscient, but it would never truly be.

Yeah, and so are humans, so I mean, shit happens. Even then it'd likely be more accurate than a human, just based on the very fact that it knows more subjects than any given human - more than all humans alive, even, because its knowledge is theoretically based on the written works of the entirety of humanity.

A roundtrip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there's no guarantee it didn't change in the milliseconds it took to become aware of it. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) also propagates no faster than the speed of light.

Well yeah, if we're defining the ultimate truth as something that propagates through the universe at the highest speed possible, that would be how that works. Since it would likely be a device acting of its own accord, and/or responsive to humans, it likely wouldn't matter, as it could just wait a few seconds anyway.

The dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it will never happen. And if it were incomplete, it would have to make assumptions at some point based on the incomplete data it has, which would open it up to being wrong, which we would call a hallucination.

At that scale, yes, but at this scale, with our current LLM technology, which is what I was talking about specifically, it wouldn't matter. But even at that scale I don't think it would classify as a hallucination, because a hallucination is a very specific type of being wrong: it's literally pulling something out of thin air. A theoretical general intelligence wouldn't be pulling shit out of thin air; at best it would elaborate on what it knows already, which might be everything or nothing, depending on the topic. But it shouldn't just make something up out of thin air. It could very well be wrong about something, but that's not likely to be a hallucination.

[–] ClamDrinker@lemmy.world 1 points 6 months ago* (last edited 6 months ago)

Yes, it would be much better at mitigating it and would beat all humans at truth accuracy in general. And truths which can be easily and individually proven, and/or remain unchanged forever, can basically be right 100% of the time. But not all truths are that straightforward.

What I mentioned can't really be unlinked from the issue, if you want to solve it completely. Have you ever found out later on that something you told someone else as fact turned out not to be so? Essentially, you 'hallucinated' a truth that never existed, but you were just that confident it was correct to share and spread it. It's how we get myths, popular belief, and folklore.

For those other truths, we simply take the truth to be that which has reached a likelihood high enough that we consider it certain. But the ideas and concepts in our minds constantly float around on that scale. And since we cannot really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth that which is not. To avoid that would mean having to be pretty much everywhere at once to personally interpret the information straight from the source. But then things like how fast it can process those things come into play. Without making guesses about what's going to happen, you basically can't function in reality.

[–] HawlSera@lemm.ee 3 points 6 months ago* (last edited 6 months ago)

You assume the physical world is all there is, or that the AI has any real intelligence at all. It's a damn Chinese room.

[–] KeenFlame@feddit.nu -4 points 6 months ago (1 children)

Very long layman take. Why are there always so many of these on every AI post? What do you get from guesstimating how the technology works?

[–] ClamDrinker@lemmy.world 9 points 6 months ago (1 children)

I'm not an expert in AI, I will admit. But I'm not a layman either. We're all anonymous on here anyways. Why not leave a comment explaining what you disagree with?

[–] KeenFlame@feddit.nu -5 points 6 months ago (3 children)

I just want to understand why people get so passionate about explaining how things work, especially in this field where even the experts themselves don't fully understand how it works. It's just an interesting phenomenon to me.

[–] Fungah@lemmy.world 5 points 6 months ago (1 children)

The not-understanding-how-it-works thing isn't universal in AI, from my understanding. And people understand how a lot of it works even then. There may be a few mysteries, but it's not sacrificing chickens to Jupiter either.

[–] KeenFlame@feddit.nu -1 points 6 months ago

Nope, it's actually not understood. Sorry to hear you don't understand that

[–] mriormro@lemmy.world 4 points 6 months ago (1 children)

What exactly are your bona fides that you get to play the part of the exasperated "expert" here? And, more importantly, why should I give a fuck?

I constantly hear this shit from other self-appointed experts in this field as if no one is allowed to discuss, criticize, or form opinions on the implications of this technology besides those few who 'truly understand'.

[–] KeenFlame@feddit.nu -1 points 6 months ago

Did you misread something? Nothing of what you said is relevant

[–] ClamDrinker@lemmy.world 1 points 6 months ago* (last edited 6 months ago) (1 children)

Hallucinations in AI are fairly well understood as far as I'm aware, and explained at a high level on the Wikipedia page for them. And I'm honestly not making any objective assessment of the technology itself; I'm making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it's given, but that's something even a layman might know.)

How to mitigate hallucinations is definitely something the experts are actively discussing, with limited success so far (and I certainly don't have an answer there either), but a true fix should be impossible.

I can't exactly say why I'm passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decision about the place AI takes in our society. But I'm also passionate about human psychology and creativity, and what we can learn about ourselves from the quirks we see in these technologies.

[–] KeenFlame@feddit.nu 0 points 6 months ago

Not really, no, because these aren't biological, and the scientists that work with them are more interested in understanding why they work at all.

It is very interesting how the brain works, and our sensory processing is predictive in nature, but no, it's not relevant to machine learning, which works completely differently.