this post was submitted on 13 Dec 2023
25 points (90.3% liked)

Futurology

top 3 comments
[–] MartianSands@sh.itjust.works 6 points 1 year ago

People keep talking about this work as if it offers some meaningful insight into how these systems behave, when the whole premise requires a fundamental misunderstanding of what GPT is actually doing.

The idea that you could present it with information but ask it not to use that information is absurd. At the end of the day it's a sophisticated word-association engine. It doesn't have intent, and it isn't capable of strategy.

This is like pointing out that a dog that has learned to shake hands is actually deceiving you, because it doesn't really mean any of the social things we use handshakes for (except it's worse, because the dog is actually capable of being sociable).

[–] kat_angstrom@lemmy.world 3 points 1 year ago

All LLM replies are hallucinations

[–] RewindAgain 1 points 1 year ago

As AI develops, it will just get better and better at manipulating people. At some point I'm sure we'll have AI manipulating us in ways we don't even realize or see coming.