[–] jj4211@lemmy.world 9 points 1 week ago (1 children)

LLMs don't just regurgitate training data; their output is a blend of the material they were trained on. So even if you somehow ensured that every bit of content fed in was, in and of itself, completely objective, true, and factual, an LLM would still blend it together in ways that are no longer true and factual.

So either it's nothing but a parrot/search engine that only regurgitates input data, or it's an LLM that can fully recombine the content it represents, in which case it can produce incorrect responses from purely factual and truthful training fodder.

Of course we have "real" LLMs; an LLM is by definition a real LLM. I actually had no problem with terms like LLM or GPT, since they were technical concepts with specific meanings that didn't imply anything more. But then came the swell of marketing meant to emphasize the vaguer 'AI', or 'AGI' (AI, but you know, we mean it this time), and 'reasoning' and 'chain of thought'. Whether we have real AGI or real reasoning is something that can be debated with uncertainty, but LLMs are real, whatever they are.

[–] survirtual@lemmy.world 3 points 1 week ago* (last edited 1 week ago) (1 children)

By real, I mean an LLM anchored in objective consensus reality. It should be able to interpolate between truths. Right now it interpolates between significant falsehoods with truths sprinkled in.

It won't be perfect, but it can be a lot better than it is now, and right now it's starting to border on useless for any kind of serious engineering or science.

[–] jeeva@lemmy.world 0 points 1 week ago (1 children)

That's just... Not how they work.

Equally, regarding your other comment about a parameter for truthiness: you just can't tokenise that in a language model. One word can drastically change the meaning of a sentence.

LLMs are very good at one thing: making probable strings of tokens (where tokens are, roughly, words).

[–] survirtual@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

Yeah, you can. The current architecture doesn't do this exactly, but what I am saying is that a new method that includes truthiness is needed. The fact that LLMs predict probable tokens means they already include a concept of this, because probabilities themselves are a measure of "truthiness."

Also, I am speaking in the abstract. I don't care what they can and can't do today. They need to have a concept of truthiness. Use your imagination and fill in the gaps as to what that means.