this post was submitted on 08 Aug 2025
184 points (100.0% liked)

chapotraphouse

[–] FortifiedAttack@hexbear.net 63 points 1 week ago* (last edited 1 week ago) (1 children)

This is a perfect demonstration of how LLMs work and why they do not think.

The base question here, that the model is most strongly statistically geared towards, is "How many Rs are in strawberry". You can see how the response in the screenshot works as the template for the correct answer to this question.

All it did was retrieve the most likely response for the strawberry question (which is the closest, most confident structural match to the blueberry question), and then substitute specific tokens. This is essentially what it does with every response to any question: it uses the closest match from the data it was trained on, then substitutes individual terms so the result looks appropriate to the question.

Ultimately every answer will only ever be an approximation, and there will never be any certainty about its correctness.

[–] LeeeroooyJeeenkiiins@hexbear.net 9 points 1 week ago (1 children)

tbh that kinda sounds like it's "thinking" though, just that it's not very good at it at all

[–] Edie@hexbear.net 40 points 1 week ago (2 children)

That's the easiest way to describe it to people, but it isn't. It's just math doing this.

[–] FunkyStuff@hexbear.net 42 points 1 week ago (1 children)

The undefeated argument for explaining it to laypeople is to show just how "linear" the process for an LLM is compared to human thought. When you prompt the LLM, all it ever does is take your input, turn it into a sequence of mathematical objects, and push them through a really long chain of matrix multiplications that lands on an output, which gets converted back into language. At no point does it have branches where it takes some time to introspect, consider, recall, or reflect on anything the way a human does when we receive a question. It's not thinking.
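For anyone who wants to see the shape of that pipeline, here's a deliberately tiny sketch of a "forward pass". Everything in it (the vocabulary, the weights, the sizes) is made up purely for illustration and bears no relation to any real model's architecture, but it shows the point: one fixed chain of matrix operations from prompt to next token, with no branching or reflection anywhere.

```python
# Toy sketch of the straight-line math an LLM forward pass amounts to.
# All values are random/made up; real models are vastly larger but have the
# same prompt-in -> matrix multiplications -> token-out shape.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["how", "many", "b", "s", "in", "blueberry", "2", "3"]
d = 16  # embedding size (arbitrary)

embed = rng.normal(size=(len(vocab), d))    # token -> vector
layer1 = rng.normal(size=(d, d))            # stand-in for "transformer layers"
layer2 = rng.normal(size=(d, d))
unembed = rng.normal(size=(d, len(vocab)))  # vector -> scores over the vocab

prompt = ["how", "many", "b", "s", "in", "blueberry"]
x = embed[[vocab.index(t) for t in prompt]].mean(axis=0)  # crude "context" vector
x = np.tanh(x @ layer1)   # fixed chain of matrix ops,
x = np.tanh(x @ layer2)   # no introspection, no branching
logits = x @ unembed
print("next token:", vocab[int(np.argmax(logits))])  # whichever token scores highest
```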

[–] volcel_olive_oil@hexbear.net 17 points 1 week ago (1 children)

I've taken to calling them "synths" because what is it doing that's fundamentally different from a 1980's CASIO? A simple input is returning a complex output? waow

[–] tripartitegraph@hexbear.net 11 points 1 week ago (1 children)

Honestly I think if the term “cybernetics” had won over “artificial intelligence” there’d be less of this obfuscation. But “AI” is more marketable, and of course that’s all that matters.

[–] Fossifoo@hexbear.net 3 points 1 week ago

Gippity, the technical term is gippity.

[–] LeeeroooyJeeenkiiins@hexbear.net 12 points 1 week ago* (last edited 1 week ago) (2 children)

i don't want to argue w/ people all day but it was a joke

Ultimately every answer will only ever be an approximation, and there will never be any certainty about its correctness.

sounds like pretty much any and all thinking to me, people don't "know" things, they think they know things. usually they're right, but memory is weird shit and doesn't always work properly and there are ten billion and one factors that can influence a person's recollection of some bit of information. i was like "woah the magic conch is just like me fr fr"

p.s. I do wanna argue though that while i don't think chatgpt thinks, I do think that consciousness is an emergent property and with enough things like chatgpt all jumbled together you might see something resembling consciousness or thought, at least in a way that if you really interrogate it closely enough you might not be able to meaningfully differentiate it from biological consciousness or thought (which if you really wanna argue could also be reduced to "it's just math" as well, just math that is way beyond the ability of people to determine. I mean if you had magical deterministic information of the position and interaction of every neuron and neurochemical and every related cellular process etc and could map out and understand it you could look at it and shrug and go "it's just math" too, j/s doggggggggg)

this is where I'd press a disable inbox reply button IF I HAD IT grill-broke

[–] Philosoraptor@hexbear.net 5 points 1 week ago* (last edited 1 week ago)

you really interrogate it closely enough you might not be able to meaningfully differentiate it from biological consciousness or thought (which if you really wanna argue could also be reduced to "it's just math" as well, just math that is way beyond the ability of people to determine

Here's one easy way to differentiate it: my brain is wet and runs on electrochemical processes powered by food. Is that a "significant" difference? That depends on what you think is worth tracking! Defining what counts as "functionally identical" requires you to decide which features of a system are "functional" and which are "mere" cosmetic differences. That differentiation isn't given to us by nature, though, and already reflects a hefty series of evaluative judgements. By carefully defining our functions, we can call any two things "functionally identical." There's no right answer, which is both a strength and a limitation of this kind of functionalist framework. Both the AI boosters and the AI "impossibilists" miss this point: functional identity is perspectival, and encodes a bunch of evaluative assumptions about which differences do and don't matter. That's ok--all model building does that--but it's important not to confuse the map and the territory, or to think we're identifying some kind of value-independent feature of the world when we attribute functional identity.

[–] plinky@hexbear.net 3 points 1 week ago (2 children)

you don't think in language and words tho

[–] Euergetes@hexbear.net 5 points 1 week ago (1 children)

you don't think in language and words tho

am i missing this being sarcastic in a way because people do think in language and words

[–] plinky@hexbear.net 5 points 1 week ago (1 children)

you have a running inner monologue, sure, but when you solve something like "how many b's in blackberry", do you honestly say you're thinking in words about the problem?

you have concepts/ideas/pictures/words/signs/symbols wheezing by that are not embodied in words until you want them to be. And until you engage in rechecking/reflecting, i don't think it's very likely this thinking is in language; it's more like you can interpret flashes of thought into words if you decide to dwell on them, but you're not required to, and i don't think ordinary engagement with imagination requires language. (could have sworn i linked some article related to math/language/fMRI which showed that thinking about ideas (math in that case) is not exactly located in the language areas of the brain)

[–] Euergetes@hexbear.net 7 points 1 week ago (2 children)

look i'm not a linguist so i'm not going to make the proper argument here, but the defining features of our type of human are the specific adaptations for language; how people behave is culturally defined, and culture is understood and communicated through language.

frankly likening the experience of sensations to knowledge of them without language sounds very silly to me.

[–] plinky@hexbear.net 5 points 1 week ago (1 children)

reducing ideas to sensations is some sensualist reductivism (sensations are what we get from the outside world through our sensory organs; thoughts are your brain stuff doing something). i can do math or imagine things without an inner voice vocalizing it, unless the language comprehension area of the brain is lowkey involved in this. Of course with higher order thinking, reflections/comparisons start to slow things down and you can start to employ language internally to hold an idea for longer. (i am a language-of-thought simp i guess)

Language is a medium for the transmission of ideas (to another, implied person), not a medium of ideas itself. you can have an idea without language; you cannot have language without ideas, as it would be just a bunch of non-sense (as in - not carrying any sense). (as an aside, social conformity can be transmitted by body language perfectly well)

[–] Euergetes@hexbear.net 2 points 1 week ago (3 children)

i'm not trying to do reductivism -this is admittedly outside my expertise- i simply don't understand how you square this concept of ideas existing outside language when that's inexpressible without language.

as an aside, social conformity can be transmitted by body language perfectly well

i hope i didn't make it sound like verbal speech was the key here; the muscle and bone adaptations that make complex speech possible were accompanied by brain stuff. people with disabilities that make some forms of language inaccessible still use language!

[–] ProfessorOwl_PhD@hexbear.net 4 points 1 week ago

i simply don't understand how you square this concept of ideas existing outside language when that's inexpressible without language.

I think the easiest way to conceptualise it is when you're trying to explain something but struggling to find the right words - the idea is there fully formed in your mind, but you still have to search for the language to express it to someone else.

[–] plinky@hexbear.net 4 points 1 week ago* (last edited 1 week ago) (5 children)

i mean that you have a language of the brain (thoughts/shapes/associations/memories) floating inside the brain in some patterns, like waves on a pond. You can decide to express them, to explain/translate them to another person, or not. I'm saying what i believe; opinions of course differ, and some people think language guides ideas.

(as an exercise, what is a decision in language form, explicitly? while i can buy that thoughts and object-symbols might be closely related, verbs and actions are very far removed, feely-wise, to me)

you might find this article at least interesting (it's not strictly language of thought supportive) https://neuroanthropology.net/2010/07/21/life-without-language/

as an aside, if something is not expressible in language, that doesn't mean it's not real (nor is it real just because i like to think it exists tbh), not something as private (and, as of yet, due to MRI restrictions, immeasurable) as thoughts. (as an absurd example, is an electromagnetic wave real to an ancient egyptian? i don't think a single word will match, or that the concept of light fits with ancient egyptian ideas)

[–] purpleworm@hexbear.net 2 points 1 week ago (12 children)

i simply don't understand how you square this concept of ideas existing outside language when that's inexpressible without language.

This is just question-begging. "Everything exists within language because you can't express anything without language." No, it's only because you're putting something into the format of linguistic expression that language becomes a necessary element. That doesn't mean the actual internal experience depends on language, because it clearly does not, and I'm struggling to figure out how to explain this to you because you've just talked yourself into it with poorly-constructed syllogisms.

Language is helpful for encoding ideas into long-term memory, but that doesn't mean all ideas are fundamentally mediated by language, and even the communication of ideas is not fundamentally mediated by language (though most of it is for humans).

as an aside, social conformity can be transmitted by body language perfectly well

i hope i didn't make it sound like verbal speech was the key here; the muscle and bone adaptations that make complex speech possible were accompanied by brain stuff. people with disabilities that make some forms of language inaccessible still use language!

Body language is not literally language, and would be more accurately described, if we're talking about conscious communication, as "body gestures". Gestures are not the same thing as signs, as in sign languages used by people with certain disabilities. Body language furthermore is often unconscious or fully involuntary, which I think we can agree makes it not even a gesture.

You have argued yourself into a position where you are asserting that 12-month-olds do not have an idea of what their primary caretakers look like in the absence of something to gesture to, but earthworms do have ideas because they exhibit body "language". Please just read even an introductory article on this topic before going around making assertions about it, because it's silly to just go off of vibes, or if you can't be bothered to, just have some epistemic humility.

[–] Abracadaniel@hexbear.net 2 points 1 week ago

I think Helen Keller has some writings on her experience learning language as an adult(?) that strengthen your point.

[–] LeeeroooyJeeenkiiins@hexbear.net 5 points 1 week ago (2 children)

neither does the computer!!!

I think chatgpt is basically like a computer equivalent of figuring out language processing to an alright degree which is like p. cool and I guess enough to trick people into thinking the mechanical turk has an agenda but yeah still not thinking

[–] plinky@hexbear.net 5 points 1 week ago (1 children)

i guess my issue is that neural networks as they exist now can't have emergent properties; they are fitting to data to predict the next word in the best way possible, or the most probable one in an unseen sentence. It's not how anybody learns, not mice, not humans.
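(For what it's worth, here's a toy illustration of the objective being described, i.e. "fitting to data to predict the next word". The numbers and vocabulary below are made up; real training just repeats this step over enormous amounts of text.)

```python
# Toy sketch of next-token training: given a context, increase the model's
# probability for the token that actually followed in the training data.
import numpy as np

vocab = ["the", "cat", "sat", "mat"]
observed_next = "sat"                               # the continuation seen in the data

logits = np.array([0.1, 0.2, 0.5, 0.05])            # model's current scores
probs = np.exp(logits) / np.exp(logits).sum()       # softmax
loss = -np.log(probs[vocab.index(observed_next)])   # cross-entropy for this step
print(f"loss = {loss:.3f}")                         # gradient descent pushes this down
```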

Something akin to the experiments with free-floating robot arms with bolted-on computer vision seems like a much more viable approach, but there the problem is they don't have the right architecture to feed it into, at least i don't think they do, and even then it will probably stall out for a while at animal level.

[–] LeeeroooyJeeenkiiins@hexbear.net 7 points 1 week ago* (last edited 1 week ago) (1 children)

my problem is at some point they're gonna smoosh chatgpt and that sort of stuff and other shit together and it might be approximating consciousness but nerds will be like "it's just math! soypoint-2 " and it'll make commander Data sad disgost n' they won't even care

[–] plinky@hexbear.net 3 points 1 week ago

well of course they could, flawless imitation of consciousness, after all, is the same as consciousness (aside from morality, which will be unknowable), just not here at the moment

[–] purpleworm@hexbear.net 4 points 1 week ago

A mechanical turk is a fake AI with a human behind it

[–] poster596@hexbear.net 56 points 1 week ago (2 children)

The entire economy is getting refocused onto building a robot that lies to you. yea cool-zone

[–] PorkrollPosadist@hexbear.net 32 points 1 week ago

Damn. Not even cable news anchors are safe from automation.


For reference, the reason why this happens is because LLMs aren't "next word predictors", but rather "next token predictors". Each word is broken into tokens, probably 'blue' and 'berry' in this case. The LLM doesn't have any access to information below the token level, which means it can't count letters directly; it has to rely on the "proximity" of tokens in its training data. Because there's a lot on the Internet about letters and strawberries, it counts the r instead of the b in 'berry'. Chain of Thought (CoT) models like Deepseek-reasoner or ChatGPT-o3 feed their output back into themselves and are more likely to output the text 'b l u e b e r r y', which is the trick to doing this. The lack of sub-token information isn't a critical flaw and doesn't come up often in real-world use cases, so there isn't much energy dedicated to fixing it.
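To make the token-versus-letter distinction concrete, here's a small sketch. The 'blue' + 'berry' split and the token IDs are hypothetical stand-ins (real BPE tokenizers learn their own vocabulary), but the point holds: the model receives opaque token IDs with no characters inside them, whereas a spelled-out 'b l u e b e r r y' puts each letter on its own token.

```python
# Why letter-counting is awkward at the token level (illustrative only).
word = "blueberry"
print(word.count("b"))              # 2 -- trivial when you can see the characters

# What the model actually receives: opaque token IDs, no characters inside.
# This split and these IDs are hypothetical examples, not any real tokenizer's.
fake_vocab = {"blue": 3817, "berry": 15717}
tokens = [fake_vocab["blue"], fake_vocab["berry"]]
print(tokens)                       # [3817, 15717] -- nothing here "contains" a 'b'

# Spelling the word out gives one token per letter, which is why a chain-of-thought
# model that writes out 'b l u e b e r r y' can recover the count.
spelled = " ".join(word)            # 'b l u e b e r r y'
print(spelled.split().count("b"))   # 2
```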

[–] LangleyDominos@hexbear.net 42 points 1 week ago (1 children)

OpenAI got that sweet DoD gig and now they're just slapping a UI wrapper on GPT 3.5 and calling it GPT 5.

china's going to have an actual AI running in some nuclear fusion powered bunker solving climate change and destroying america while america burns up its rivers to power 27000 data centers, 40% of which are dedicated to grok's boobs

[–] decaptcha@hexbear.net 32 points 1 week ago

porky-happy

Well yes it's terrible and hallucinates, it's a real piece of shit actually, but you see of course this is precisely why we need to commit all of humanity's resources. To improve it! To allow it to spell a word!

[–] Dort_Owl@hexbear.net 24 points 1 week ago (2 children)

I wonder if I could convince dumb rich investors to buy bags of my poop as the next big innovation

[–] LaGG_3@hexbear.net 19 points 1 week ago (1 children)

THIS IS INVESTMENT ADVICE: go all in on owl pellet futures

[–] plinky@hexbear.net 14 points 1 week ago

Considering the expected cultural impact of the harry potter show for the general idea of owls as pets, again yea

[–] TheModerateTankie@hexbear.net 3 points 1 week ago

cap-think Can your poop replace my workers?

[–] Salem@hexbear.net 23 points 1 week ago (3 children)

AI has its utilities, but the genie capitalists are searching new frontiers for - one that can solve climate change, poverty, wealth inequality, and really all of humanity's problems, directly and indirectly caused by capitalism - is not going to happen.

[–] Self_Sealing_Stem_Bolt@hexbear.net 23 points 1 week ago (1 children)

It's not AI. "AI" is a marketing term to get investors to throw money at them. There's nothing intelligent happening here.

[–] Salem@hexbear.net 6 points 1 week ago (1 children)

I suppose using it as shorthand is misleading since it lends credibility to the misnomer; do we just stick to calling them LLMs then?

[–] leftAF@hexbear.net 7 points 1 week ago

I just call them what they do. Text generator. Image denoiser. Having used every pre-LLM version of accelerated statistical analysis out there (anything meant to find patterns in data), it's always been machine learning outputs. AI was only ever a term I heard in video gaming, which still seems more appropriate.

[–] Sasuke@hexbear.net 8 points 1 week ago

capitalists are not trying to solve any of those problems, they're just looking for a magic machine that can replace workers

[–] Rom@hexbear.net 6 points 1 week ago

tbh I don't think capitalists give a shit about those problems

[–] Horse@lemmygrad.ml 14 points 1 week ago

every time i see the failures of the fancy predictive text machine i find myself asking "what exactly was wrong with expert systems?"
like, they actually work for what people need them for?

[–] Vampire@hexbear.net 12 points 1 week ago (1 children)

When is the screenshot from and which model?

[–] Philosoraptor@hexbear.net 28 points 1 week ago (1 children)

GPT-5, which was just released yesterday and is "clearly generally intelligent" according to Altman.

[–] LangleyDominos@hexbear.net 25 points 1 week ago

Just for more context, Altman was posting images of the Death Star and acting like he's the AI Oppenheimer right before release.

[–] D61@hexbear.net 9 points 1 week ago

AI general intelligence achieved: I've probably answered this same question with that answer at some point in my life, and I have some level of intelligence.

[–] AssortedBiscuits@hexbear.net 9 points 1 week ago* (last edited 1 week ago) (2 children)

I wanna see the results if you ask ChatGPT the same question a million times. What percentage of responses would actually get the correct number?

[–] purpleworm@hexbear.net 9 points 1 week ago (1 children)

I think that heavily depends on whether it gets the initial answer right, since it will use that as context

[–] Cysioland@lemmygrad.ml 4 points 1 week ago (1 children)

When you're calling it through an API, you can simply choose not to pass it any context
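As a rough sketch of what that experiment could look like (this assumes the OpenAI Python client in its v1 style with an API key in the environment; the model name is a placeholder, and N is kept small here for cost reasons):

```python
# Ask the same question N times, each call a fresh conversation with no shared
# context, and count how often the answer starts with the correct number.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
N = 100
correct = 0
for _ in range(N):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": "How many b's are in blueberry? Answer with just a number."}],
        temperature=1.0,      # leave sampling on so repeated runs actually vary
    )
    answer = resp.choices[0].message.content.strip()
    correct += answer.startswith("2")
print(f"{correct}/{N} correct")
```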

[–] infuziSporg@hexbear.net 6 points 1 week ago

3 Rs in strawberry

Strawberry and blueberry are both in the "berry" category and are more closely associated with each other than with any other fruit

B is to Blueberry the way R is to Strawberry

Therefore, blueberry has 3 Bs
