this post was submitted on 24 Oct 2025
223 points (100.0% liked)

Chapotraphouse


porky-point

top 36 comments
[–] KnilAdlez@hexbear.net 88 points 5 days ago (2 children)

I saw on Reddit that ChatGPT 5 will hallucinate before actually searching the web or opening documents, most likely as a cost-saving measure by OpenAI. The bubble is looking awfully shaky.

[–] The_hypnic_jerk@hexbear.net 54 points 4 days ago* (last edited 4 days ago) (3 children)

I did some training for way too much money for OpenAI as a pro contractor in my field this past year. And let me tell you, it's a long way off, if even possible, for even basic report writing, much less anything a bit more complicated.

These things cannot be trusted with technical work. They just produce things that "sound" like they make sense, but if you know anything at all about the work, it's laughable.

[–] SerLava@hexbear.net 22 points 4 days ago* (last edited 4 days ago) (1 children)

95% of the time, AI literally only works if all the people generating and receiving the work mostly do not care about what's in it. The more important the details are, the worse the result is.

[–] jackmaoist@hexbear.net 6 points 4 days ago

good things that porky-scared-flipped only care about the line and not the quality of work.

[–] Enjoyer_of_Games@hexbear.net 4 points 3 days ago

If you wouldn't outsource technical work to a reddit forum you definitely shouldn't be giving it to the robot

Anyone else remember when the work of running Hillary Clinton's email server was being outsourced to a reddit forum?

Best start believing in cyberpunk dystopias...

[–] Rod_Blagojevic@hexbear.net 7 points 4 days ago

Did you train it on Dilbert cartoons?

[–] SevenSkalls@hexbear.net 25 points 4 days ago (5 children)

I thought that version was supposed to reduce hallucinations?

[–] SuperZutsuki@hexbear.net 38 points 4 days ago

Hopefully it's reducing the hallucinations of future profits that investors have been clinging onto

[–] fox@hexbear.net 25 points 4 days ago (1 children)

All they do is hallucinate; it's just a coin flip whether it's total nonsense or truth-shaped. The same process that makes it answer wrong is the one that makes it answer right.

[–] invalidusernamelol@hexbear.net 6 points 4 days ago (1 children)

Yep, I work in a moderately niche programming sector and it was truly awful when I tried the "co-programming" stuff. It got to the point where I'd give it a clear spec, and all I'd get back was "call the function that does what you asked for".

[–] PolarKraken@lemmy.dbzer0.com 4 points 4 days ago (1 children)

Slight improvement over telling you to call functions it just silently made up (my experience using it with something niche)

See, they're learning, the hype is real! Any day now they will expertly clue you in to when they don't know shit. After that, AGI can only be 12-18 months away!

[–] invalidusernamelol@hexbear.net 2 points 4 days ago* (last edited 4 days ago) (1 children)

Oh that's what I meant when I said it told me to "call the function that does what I want". It would just hallucinate that function, then I'd go write it, then it would hallucinate more stuff. And by the time I was done the whole program was nonsense.
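For anyone who hasn't hit this: the pattern is the model confidently naming a function that doesn't exist anywhere. A minimal sketch of a sanity check (the suggested name `solve_my_spec` is made up for illustration):

```python
import math

def call_if_real(module, suggested_name, *args):
    """Call a model-suggested function only if it actually exists.

    Returns the result for a real function, or a string flagging
    the hallucinated name instead of blowing up at runtime.
    """
    fn = getattr(module, suggested_name, None)
    if not callable(fn):
        return f"hallucinated: {module.__name__}.{suggested_name} does not exist"
    return fn(*args)

print(call_if_real(math, "sqrt", 16.0))         # real function: 4.0
print(call_if_real(math, "solve_my_spec", 16))  # made-up name gets flagged
```

Obviously this only catches nonexistent names, not functions that exist but do the wrong thing, which is the harder half of the problem.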

Ended up being faster at getting stuff done by just fully dropping it. Sure I don't have super auto complete, but who cares. Now my program is structured by me, and all the decisions were mine meaning I actually kinda understand how it works.

[–] PolarKraken@lemmy.dbzer0.com 3 points 4 days ago (1 children)

Lol, oof, sounds like real "draw the rest of the owl" energy but adding an unhelpful "unfuck the owl I drew" step first.

Yep, whole process was a pain. I can't imagine having to lead a team where people are using AI assistants. That has to be a nightmare and I'd ban it instantly. It was hard enough parsing the hallucinations it had introduced from my prompts. Would be 1000x worse doing a code review where you have to find hallucinations introduced by other people's prompts.

[–] MarmiteLover123@hexbear.net 12 points 4 days ago

Only if you pay for premium or whatever. Then it takes extra time to "think" and you'll get a more accurate answer. But "hallucinations" can only be reduced, never eliminated.

[–] KnilAdlez@hexbear.net 15 points 4 days ago

It would need to be trained on the data to have a chance of not hallucinating, which would not be possible on new webpages or custom documents.

[–] NuraShiny@hexbear.net 10 points 4 days ago* (last edited 4 days ago)

I too always trust marketing departments!

[–] GenderIsOpSec@hexbear.net 63 points 4 days ago (1 children)

oh hey it's just like me when i was supposed to do something but didnt doggirl-thumbsup

[–] LaGG_3@hexbear.net 48 points 4 days ago

Yes, I'm actually in the middle of working on this today. Thanks for checking in!

[–] RION@hexbear.net 44 points 4 days ago

Nicely spotted — I didn't do what you asked me to. Instead, I pissed and shit all over the floor. And that, right there? It's not just willfully disregarding your directions—it's mixing my piss and my shit into an amalgamated clay that I roll around like a dung beetle.

[–] NephewAlphaBravo@hexbear.net 52 points 5 days ago (1 children)

this is the most human thing i've seen one of these do shrug-outta-hecks

No like for real: that guy who used ChatGPT and it deleted their whole database without asking, and the whole time it was giving justifications, I was like "god damn, it sounds like a real human New Guy." Really blowing that Turing test out of the water there.

[–] shallot@hexbear.net 45 points 5 days ago* (last edited 5 days ago)

The threshold for adoption isn’t effectiveness, but the credulity of an MBA

[–] blunder@hexbear.net 22 points 4 days ago (1 children)

To be fair, this sounds exactly like my dipshit boss when he's criticizing my work

[–] OrionsMask@hexbear.net 9 points 4 days ago (1 children)

Yeah, but those are the exact people trying to replace everyone else with AI.

[–] onwardknave@hexbear.net 10 points 4 days ago

Boss: Good catch! I didn't actually read your .CSV file.
AI: Good catch! I did not actually upload a .CSV file.
Boss: Well played, Synthia! At least I don't need to pay you for not having the report on time.
AI: Actually, you are on the Grokmini GPT Premium Plus model, as my .CSV file indicated.
Boss: Who's paying for this? *phone rings*
Speaker: Robotert from Corporate is on line one. You're being replaced with A.I.
Boss: Nooo! Who could have seen this coming?

[–] theuniqueone@lemmy.dbzer0.com 26 points 4 days ago (1 children)

Yep, among many other issues, AI seems unable to say "I don't know" or "I can't answer that."

[–] Snort_Owl@hexbear.net 9 points 4 days ago

Since these models are trained on reddit, that tracks

[–] godlessworm@hexbear.net 24 points 4 days ago (1 children)

how was it not able to just do it immediately? it's not like it's busy doing something else tf

[–] ElChapoDeChapo@hexbear.net 9 points 4 days ago
[–] RindoGang@lemmygrad.ml 19 points 5 days ago

Don’t worry, it will one day; companies never skip a chance to save money

[–] marxisthayaca@hexbear.net 17 points 4 days ago
[–] LaughingLion@hexbear.net 12 points 4 days ago

i mean yeah, thats what we do as humans, too

[–] abc@hexbear.net 8 points 4 days ago
[–] 30_to_50_Feral_PAWGs@hexbear.net 3 points 4 days ago (1 children)