this post was submitted on 12 Jun 2025

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] WanderingThoughts@europe.pub 7 points 2 days ago* (last edited 2 days ago) (1 children)

They got very good results just by making the model bigger and training it on more data. It started doing things that were never explicitly programmed in, like writing songs and holding conversations, the sort of thing nobody expected an autocomplete to do. The reasoning was that if they kept making it bigger and feeding it even more data, the line would keep going up. The fanboys believed it, investors believed it, and many business leaders believed it. Until they ran out of data and datacenters.
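
The "line keeps going up" bet is basically an empirical power law: scaling-law fits show test loss falling smoothly as parameters and training tokens grow. A toy sketch of that shape in Python (the constants are only roughly in the ballpark of published fits; treat the numbers as illustrative, not as anyone's actual curve):

```python
# Toy illustration of the scaling-law bet: loss falls as a power law in
# model size and training data, down to an irreducible floor. Constants
# are illustrative, not an exact published fit.
def predicted_loss(params: float, tokens: float,
                   a: float = 406.0, alpha: float = 0.34,
                   b: float = 411.0, beta: float = 0.28,
                   floor: float = 1.69) -> float:
    """Lower is better; more parameters and more tokens both push loss down."""
    return a / params ** alpha + b / tokens ** beta + floor

for n in (1e9, 1e10, 1e11, 1e12):
    # assume roughly 20 training tokens per parameter, a common rule of thumb
    print(f"{n:.0e} params -> loss ≈ {predicted_loss(n, 20 * n):.2f}")
```

The catch the comment points at: the curve never promises new abilities, only a lower loss, and it quietly assumes you can keep finding 20x more tokens every time you grow the model.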

[–] lime@feddit.nu 3 points 1 day ago (2 children)

it's such a weird stretch, honestly. songs and conversations are no different from predictive text, it's just more of it. expecting it to do logic after ingesting more text is like expecting a chicken to lay Kinder eggs just because you feed it more.
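
the whole trick is visible in miniature in a word-level bigram model: the only question it ever answers is "given the last word, what tends to come next?" an LLM does the same thing with a huge context and a huge network, which is why scale buys you songs and chat rather than new reasoning. rough sketch, nothing more:

```python
import random
from collections import defaultdict

# A bigram "language model": record which word tends to follow which,
# then babble by repeatedly predicting the next word. Same game an LLM
# plays, just with a vastly bigger context and model.
def train_bigrams(text: str) -> dict[str, list[str]]:
    words = text.split()
    nxt = defaultdict(list)
    for a, b in zip(words, words[1:]):
        nxt[a].append(b)
    return nxt

def babble(nxt: dict[str, list[str]], start: str, n: int = 10) -> str:
    out = [start]
    for _ in range(n):
        options = nxt.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "we did it patrick we made a technological breakthrough we did it again"
print(babble(train_bigrams(corpus), "we"))
```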

[–] WanderingThoughts@europe.pub 2 points 1 day ago

It helped that this advanced autocorrect could score highly on many university-level exams. That might also mean the exams don't test logic and reasoning as well as the teachers think they do.

[–] kogasa@programming.dev 3 points 1 day ago* (last edited 1 day ago)

Not necessarily do logic, but mimic it, the way it can mimic coherent writing and basic conversation despite only being a statistical token muncher. The hope is that there's sufficient information in the syntax to model the semantics, in which case a sufficiently complex and well-trained model of the syntax is also an effective model of the semantics. This apparently holds up well for general language tasks, meaning "what we mean" is well-modeled by "how we say it." It's plausible, at face value, that rigorous argumentation is also a good candidate, which would give language models some way of mimicking logic by talking through a problem. It's just not very good in practice right now. Maybe a better language model could do better, maybe not at a reasonable cost.
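
A concrete version of "mimicking logic by talking through a problem" is chain-of-thought prompting: you make the model predict argument-shaped text before the answer instead of the answer alone. Minimal sketch below; `generate` is a stand-in for whatever completion call you actually have, not a real library API.

```python
# Sketch of "mimic logic by talking it through" (chain-of-thought prompting).
# `generate` is a placeholder for a model's completion call, not a real
# library function.
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in an actual model call here")

question = ("A bat and a ball cost $1.10 in total. "
            "The bat costs $1.00 more than the ball. "
            "How much does the ball cost?")

# Direct prompt: the model often pattern-matches straight to "$0.10".
direct_prompt = f"{question}\nAnswer:"

# Chain-of-thought prompt: more argument-shaped tokens to predict, which
# sometimes lands on the right answer ($0.05) and sometimes produces an
# equally confident wrong one.
cot_prompt = f"{question}\nLet's think step by step."
```

Whether that extra text is a model of the reasoning or just a model of what reasoning looks like is exactly the open question in the comment above.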