this post was submitted on 28 Jul 2025
Futurology
Say it with me now, people: "GPTs are not mindful!"
LLMs (which is what everything called "AI" these days actually is) predict words based on statistical analysis of human-written text. There is nothing inherently "smart" about an LLM; it is only perceived that way because the training data contains a fair share of intelligent work by humans!
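To make "predicting words from statistics" concrete, here's a toy sketch (nothing like a real LLM, which uses a neural network over tokens, not word counts): count which word follows which in a tiny made-up corpus, then "predict" the statistically likeliest next word.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Pick the most frequent follower seen in the "training" text.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often above
```

The model knows nothing about cats or mats; it just replays frequencies from its training text, which is the point of the comment above.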
So an LLM doesn't have "feelings" or "motivations". It inherits the "traits" of whatever human works relate to the context it's given. If I prompt something like "Tell me a short story of one paragraph, backwards, in leetspeak", it will generate sentences with those words as its context: it sifts through its model, prioritizing the training material with the highest relevance/occurrence/probability for those words, mixes in a bit of entropy, and voilà! You'll (likely) get a fair amount of gibberish, because this is a tiny-to-nonexistent part of its model; very few people actually write reverse leetspeak, so it makes very few appearances in the training data. (I haven't managed to get any model to write this flawlessly yet.)
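The "mixes in a bit of entropy" step can be sketched too. Real models sample the next token from a probability distribution, often reshaped by a temperature setting; the tokens and scores below are made up for illustration.

```python
import math
import random

def sample_next(scores, temperature=1.0, rng=random):
    # Softmax with temperature: low T almost always picks the top-scored
    # token; higher T flattens the distribution, adding more randomness.
    exps = [math.exp(s / temperature) for s in scores.values()]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for token, p in zip(scores, probs):
        cum += p
        if r < cum:
            return token
    return list(scores)[-1]  # guard against float rounding

# Hypothetical scores a model might assign to candidate next words.
scores = {"cat": 3.0, "dog": 1.0, "fish": 0.5}
print(sample_next(scores, temperature=0.1))  # almost always "cat"
```

With a prompt like reverse leetspeak, the scores themselves are nearly uniform junk, so no amount of sampling cleverness rescues the output.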
Watch this if you want a bite-sized explanation of how a GPT works: https://youtu.be/LPZh9BOjkQs