this post was submitted on 24 Aug 2025
127 points (99.2% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


I really can't understand this LLM hype. (Note: I think models used for finding cures for diseases and other scientific work are a good thing. I'm referring to the LLM hype among the general populace.)

It's not interesting. To me, computers were so cool and interesting because of what you can do yourself, with just the hardware and learning to code. It's awesome. What I don't find interesting in any way is typing a prompt. "But bro, prompt engineer!" That is about the stupidest fucking thing I've ever heard.

How anyone thinks it's anything beyond a parlor trick baffles me. Plus, you're literally just playing with a toy made by billionaires to fuck over the planet and the rest of us even more.

And yes, to a point I realize "coding" is similar to "prompting" the computer's hardware...if that were even an argument someone would try to make. I think we can agree it's nowhere near the same thing.

I would like to see if there is a correlation between TikTok addicts and LLM believers. I'd bet it's very high.

[–] Rhaedas@fedia.io 3 points 4 days ago (1 children)

I think the math and science behind the inner workings are interesting: the fact that you can feed in stuff and get things that make sense (not meaning they're accurate, just that they're usually grammatically good). If you don't find that sort of thing interesting, then sure, the rest looks absolutely crazy, but it's not really unexpected for humans, who anthropomorphize everything.
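To make the "grammatical but not accurate" part concrete, here's a toy sketch of the core trick: sample the next token from a learned distribution, over and over. The table of "probabilities" here is entirely made up for illustration; a real LLM learns billions of weights from text, but the fluent-without-being-true behavior is the same idea.

```python
import random

# Toy next-token model. These bigram "probabilities" are invented for
# illustration only; a real LLM learns its weights from huge text corpora.
bigrams = {
    "<start>": [("the", 0.6), ("a", 0.4)],
    "the":     [("cat", 0.5), ("moon", 0.5)],
    "a":       [("cat", 0.7), ("dog", 0.3)],
    "cat":     [("sat", 0.6), ("is", 0.4)],
    "dog":     [("sat", 1.0)],
    "moon":    [("is", 1.0)],
    "sat":     [("<end>", 1.0)],
    "is":      [("made", 0.5), ("here", 0.5)],
    "made":    [("of", 1.0)],
    "of":      [("cheese", 1.0)],
    "cheese":  [("<end>", 1.0)],
    "here":    [("<end>", 1.0)],
}

def generate() -> str:
    """Sample one sentence token by token, weighted by 'probability'."""
    token, out = "<start>", []
    while token != "<end>":
        choices, weights = zip(*bigrams[token])
        token = random.choices(choices, weights=weights)[0]
        if token != "<end>":
            out.append(token)
    return " ".join(out)

print(generate())  # e.g. "the moon is made of cheese": fluent, not true
```

Nothing in there knows or checks facts; it only knows which word tends to follow which, which is why the output reads fine and can still be wrong.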

[–] ZDL@lazysoci.al 2 points 3 days ago (1 children)

Internal consistency is also usually considered a good thing. Any individual sentence an LLMbecile generates is usually grammatically correct and internally consistent (though here and there I've caught sentences whose endings contradict their beginnings), but as soon as you reach a second sentence, the odds of finding a direct contradiction mount.

LLMbeciles are just not very good for anything.

[–] Rhaedas@fedia.io 2 points 3 days ago (1 children)

Some models are better than others at holding context, but they all wander at some point if you push them. Ironically, the newer versions with a "thinking mode" are worse about this: the reasoning tokens stretch the context out, and they start second-guessing even correct answers.
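Rough sketch of why long chats wander: the model only ever sees a fixed-size window, so something has to get dropped. Word counts stand in for real token counts here, and the numbers are made up for illustration.

```python
# Rough sketch of a fixed context window. Word count stands in for real
# token count here, purely for illustration.

CONTEXT_BUDGET = 50  # a real model's window is thousands of tokens

def fit_to_window(messages: list[str], budget: int = CONTEXT_BUDGET) -> list[str]:
    """Keep the most recent messages that fit; older ones are silently dropped."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk newest-first
        cost = len(msg.split())
        if used + cost > budget:
            break                    # everything older is forgotten
        kept.append(msg)
        used += cost
    return list(reversed(kept))

chat = [f"message {i}: " + "blah " * 8 for i in range(20)]
window = fit_to_window(chat)
print(f"model sees {len(window)} of {len(chat)} messages")
```

Reasoning traces from "thinking mode" count against that same budget, which is one reason they can push earlier turns out of view faster.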

[–] ZDL@lazysoci.al 2 points 3 days ago

Indeed. The reasoning models can be incredibly funny to watch. I had one (DeepSeek) spinning around for over 850 seconds, only for it to come up with the wrong answer to a simple maths question.