this post was submitted on 05 Sep 2025
184 points (100.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

top 18 comments
[–] Treczoks@lemmy.world 4 points 1 day ago

Some people don't trust those buggers even without actively using them...

[–] Voroxpete@sh.itjust.works 34 points 2 days ago (1 children)

Fun fact, this is also true of crypto and NFTs. The more people know about them, the more skeptical they become.

Almost like they're all scams.

[–] Tartas1995@discuss.tchncs.de 15 points 2 days ago (1 children)

This is shockingly true.

I work in IT, and a coworker who is a Windows server admin (please take a moment to pray for his soul) is, according to him, making money with crypto. I was fairly excited about crypto before I looked into it, but the reality of it left me pretty disappointed, so now I reject it.

So he's a bit of a fan and I'm the opposite, and I decided to talk with him about his perspective on my issues with crypto. Before we even got there, I had to realise that he's unfamiliar with the basics of crypto.

[–] Luffy879@lemmy.ml 2 points 2 days ago

I wonder if this has anything to do with him being a Windows sysadmin.

[–] Strawberry@lemmy.blahaj.zone 6 points 2 days ago* (last edited 2 days ago) (1 children)

One guard blocks your path. He sometimes lies and sometimes tells the truth. Do you hand him the keys to your billion-dollar enterprise?

[–] msage@programming.dev 7 points 2 days ago

He always lies. He has no concept of truth or fact. He is a parrot who has been listening to humans for millennia. But he doesn't understand anything; he has just heard every word in many spoken sentences.

But he isn't anything more.

[–] halcyoncmdr@lemmy.world 41 points 2 days ago* (last edited 2 days ago) (2 children)

As soon as hallucinations were a possibility, it was proven that it could never be trusted as anything other than a toy. That conclusion takes literally no knowledge beyond the fact that it can tell you lies. Anyone who thinks otherwise is clearly an idiot.

[–] chaogomu@lemmy.world 9 points 2 days ago (1 children)

Generative AI actually does have a few real uses. Most notable is in the generation of new protein sequences.

Not long ago, you had PhDs whose entire careers were spent understanding a single protein sequence. Now we can generate thousands of properly folded proteins. Millions. Stuff nature never thought of.

Due to patent law, the biotech revolution is still a few years out, but it's coming.

You want an enzyme that breaks apart plastic? We can design one now and have yeast producing it within a day or two.

And there are millions more that we can now play with.
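
If you want a concrete taste of that, here's a minimal sketch using ProtGPT2, a publicly available protein language model on Hugging Face (the sampling parameters follow its model card; treat the output as raw candidate sequences, not finished designs):

```python
# Sketch: sampling novel protein sequences with ProtGPT2,
# a GPT-2-style model trained on protein sequences.
from transformers import pipeline

protgpt2 = pipeline("text-generation", model="nferruz/ProtGPT2")

# Draw a few candidate sequences. These still need structure
# prediction and lab validation before anyone calls them
# "properly folded".
sequences = protgpt2(
    "<|endoftext|>",        # start-of-sequence token
    max_length=100,
    do_sample=True,
    top_k=950,
    repetition_penalty=1.2,
    num_return_sequences=5,
    eos_token_id=0,
)
for s in sequences:
    print(s["generated_text"])
```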

Anyway, there are a few more niche uses for generative AI. But then idiot CEOs decided to shove that shit into everything. With decidedly mixed success.

[–] Luffy879@lemmy.ml 2 points 2 days ago

What you're actually talking about is machine learning/genAI more broadly.

You have to make that clear, since most people only know ChatGPT as "AI" and will thus think that people are using ChatGPT for such things.

[–] shalafi@lemmy.world 2 points 2 days ago

LLMs are generally OK if you can craft an unbiased question that demands facts; I've never seen ChatGPT get one of those wrong. But it's stunning how easily you can manipulate them just a couple of prompts deep.

Thing is, most people, in America anyway, didn't get the science training I got in '70s elementary school; even though I'm barely above average IQ, I was a star science student. Imagine people who don't understand empiricism using LLMs. The mind boggles.

[–] Thorry@feddit.org 31 points 2 days ago* (last edited 2 days ago) (2 children)

I recently read a cool book and wanted to know what other people thought about it. I had no idea how to find out; probably obscure forums or something. But with search engines being shit these days, I could only find one-line reviews. I was looking for something a little more in-depth.

So I thought, hey, let's try some kind of LLM-based solution; this is something it should be able to do, right? So I told ChatGPT: hey, I read this book and I liked it, what are some common praises and criticisms of that book? And the "AI" faithfully did as told. A pretty good summary of pros and cons, with everything explained properly without becoming too verbose. Some of the points I agreed with, others less so. Wow, that's pretty neat.

But then alarm bells started ringing in my head. Time for a sanity check. So in a new chat I posed the exact same question, word for word, except I replaced the name of the book and the name of the author with something completely made up. Real-sounding in context, not obviously fake, but weird enough that a human would give pause. And of course not similar to anything that actually exists. The damn thing proceeded to give a very similar result to before: different points, but the same format and gist. In-depth points about the pacing and predictability of a book I made the fuck up just seconds earlier.

I almost fell into the trap of thinking LLMs could be useful in some cases. But in fact they are bullshit generators that just happen to be right some of the time.
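
If you want to reproduce the sanity check as a script instead of in the chat window, it's roughly this (a sketch using the OpenAI Python SDK; the model name and the invented title and author are just placeholders, pick your own):

```python
# Sanity check: ask the same review question about a real book
# and about a book invented seconds ago, then compare the answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("I read the book '{title}' by {author} and I liked it. "
          "What are some common praises and criticisms of that book?")

def ask(title: str, author: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model will do
        messages=[{"role": "user",
                   "content": PROMPT.format(title=title, author=author)}],
    )
    return resp.choices[0].message.content

print(ask("The Left Hand of Darkness", "Ursula K. Le Guin"))  # real book
print(ask("The Salt Meridian", "Harwell Grange"))  # made up on the spot
```

If the second answer confidently discusses pacing and predictability, you've reproduced the result.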

[–] Hazzard@lemmy.zip 5 points 2 days ago* (last edited 2 days ago)

The way I imagine it in my head is like a text autocomplete trying to carry on a story about a person talking to a brilliant AI.

If something is real, of course the hypothetical author would try to get those details correct, so as not to break the illusion for educated readers. But if something is fake (or the LLM just doesn't know about it), well, of course the all-knowing fictional AI it's emulating would know about it. This is a fictional story; whatever your character is asking about is probably just part of the setting. It wouldn't make sense for the all-knowing AI in this story to just not know.

Obviously, OpenAI or whoever will try to prompt their LLMs into believing they're not in a fictional setting, but LLMs are trained on as much fiction as non-fiction, and fiction doesn't usually break the fourth wall to tell you it's fiction; it often does the opposite. And even in non-fiction there aren't many examples of people saying they don't know things. I wouldn't write a book review just to say I haven't heard of the book. Not to mention the non-fiction examples of people confidently being wrong or flat-out lying.

Simply based on the nature of human writing, I frankly wouldn't ever expect LLMs to be immune to writing fiction. I expect that it's fundamental to the technology, and "hallucinations" (a metaphor that gives far too much credit, IMO) and jailbreaks won't ever be fully stamped out.

[–] leftytighty@slrpnk.net 3 points 2 days ago

The only time they're useful is when they're assisted by an algorithmic search that provides good contextual information for them to summarize and, more importantly, link to for verification.

If you're struggling to find good results online, it will absolutely not help. If you're struggling to read the results, it might help you home in on an area and save you time.

However, chances are you'll continue to get worse at independent information gathering.
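
For the curious, the pattern looks roughly like this (a sketch; `web_search` is a stand-in for whatever search backend you actually have, and the model name is a placeholder):

```python
# Sketch of search-assisted summarization: the LLM only summarizes
# snippets the search already found, and every claim keeps its
# source URL so you can verify it yourself.
from openai import OpenAI

client = OpenAI()

def web_search(query: str) -> list[dict]:
    """Stand-in for a real search backend. Should return
    [{'url': ..., 'snippet': ...}, ...]."""
    raise NotImplementedError

def summarize_with_sources(query: str) -> str:
    results = web_search(query)
    context = "\n\n".join(
        f"[{i + 1}] {r['url']}\n{r['snippet']}"
        for i, r in enumerate(results)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": (
                "Summarize ONLY the sources below, citing them as [n]. "
                "If the sources don't answer the question, say so.\n\n"
                f"Question: {query}\n\nSources:\n{context}"
            ),
        }],
    )
    return resp.choices[0].message.content
```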

[–] Kyrgizion@lemmy.world 14 points 2 days ago

The more you know about it and how it really works, the more you understand it's bullshit.

[–] Vanilla_PuddinFudge@infosec.pub 5 points 2 days ago* (last edited 2 days ago)

Yeah, you just figure out what it's doing and eye-roll a bit.

I had that moment.

Oh, this is in no way sentience. AI is just a Google search with extra steps... Google would say less, but, I mean, it really depends.

[–] cm0002@piefed.world 2 points 3 days ago (2 children)

Whaaaat? You mean to tell me that as a person learns a new tool they become more and more aware of its downsides‽

Crazy man lmao

[–] Tartas1995@discuss.tchncs.de 7 points 2 days ago

It's less the downsides, more that the upsides don't really exist.

[–] shalafi@lemmy.world 2 points 2 days ago

Kind of a non-story, isn't it?