this post was submitted on 14 Feb 2024
1288 points (98.3% liked)

Programmer Humor

[–] doctorcrimson@lemmy.world 40 points 9 months ago* (last edited 9 months ago) (2 children)

I've seen like 5 posts about "AI BF/GF" today and it never ceases to surprise me how fucking easy it is to dupe people with these products, like holy shit humanity is fucked.

I'm always waiting for another ethical disaster trend to end but everybody is always in line for Mr Bonez Wild Ride.

[–] FractalsInfinite@sh.itjust.works 14 points 9 months ago (2 children)

If all you need is a one-sided conversation designed to make you feel better, LLMs are great at concocting such "pep talks". For some, that just might be enough to make it believable. The Turing test was cracked years ago; only now do we have access to things that can do that for free*.

[–] butterflyattack@lemmy.world 2 points 9 months ago

A pretty early chatbot called Eliza simulated a non-directive psychotherapist. It kind of feels like they've improved hugely but not really changed much.

[–] doctorcrimson@lemmy.world -4 points 9 months ago (1 children)

Nah, bullshit, so far these LLMs are as likely to insult or radicalize you as to comfort you. That won't ever be solved until AGI becomes commonplace, which won't be for a long ass time. These products are failures at launch.

[–] FractalsInfinite@sh.itjust.works 4 points 9 months ago* (last edited 9 months ago) (1 children)

... Have you tried any of the recent ones? As it stands, ChatGPT and Gemini are both built with guardrails strong enough to require custom inputs to jailbreak, with techniques such as Reinforcement Learning from Human Feedback (RLHF) used to lobotomize misconduct out of the AIs.

[–] doctorcrimson@lemmy.world -2 points 9 months ago (2 children)

HaVE yOu trIED iT bEfOrE? fOr SIMplE tAskS it SaVEs Me A lOt oF timE AT wOrK

JFC, a skipping record plays right on cue whenever somebody speaks ill of the GPTs and LLMs.

[–] Krauerking@lemy.lol 2 points 9 months ago (1 children)

It's wild that people brag that it's able to do essentially the same thing as copying and pasting someone else's basic code, just with a few extra imagined errors sprinkled in for fun. But hey, that just makes it more useful for pretending you aren't, again, literally copying someone else's stuff.

It's a search engine that makes up 1/8 of all it says. But sure it's super useful.

[–] doctorcrimson@lemmy.world 4 points 9 months ago

Sometimes it generates its own fake documentation when called out on a lie.

[–] FractalsInfinite@sh.itjust.works 2 points 9 months ago (1 children)

... Don't pull a strawman. All I said is that AIs designed to approximate human-written text do a good job at approximating human text.

This means you can use them to simulate a reddit thread, make a fake Wikipedia page, or construct a set of responses for someone who wants comfort.

Next time, read what someone actually says, and respond to that.

[–] doctorcrimson@lemmy.world -1 points 9 months ago* (last edited 9 months ago) (1 children)

Oh thanks, I really wanted to read another defence of an unethical product by some fanboy with no life. I'm so glad you managed to pick up on that based on my previous comments. I love it. You chose a great conversation to start here.

[–] FractalsInfinite@sh.itjust.works 0 points 9 months ago* (last edited 9 months ago) (1 children)

The tech is great at pretending to be human. It is simply a next-"word" (or phrase) predictor. It is not good at answering obscure questions, writing code, or making a logical argument. It is good at simulating someone.

It is my experience that it approximates a human well, but it doesn't get the details right (like truthfulness, or reflecting objective reality), making it useless for essay writing but great for stuff like character AI and other human simulations.

If you are right, give an actual logical response only capable by a human, as opposed to a generic ad hominem. I repeat my question: have you actually used any of the GPT-3 era models?

[–] doctorcrimson@lemmy.world 0 points 9 months ago (1 children)

They forgot to put in the quit when they built this one. You should be in the porn industry.

[–] FractalsInfinite@sh.itjust.works 1 points 9 months ago

Indeed, I don't think I can convince you at this point, so enjoy the touch of grass

[–] Gabu@lemmy.world 4 points 9 months ago (2 children)

Is it really duping, though? The way I see it, most of these people are perfectly aware of what they're doing.

[–] doctorcrimson@lemmy.world 1 points 9 months ago (1 children)

It's a very nuanced situation, but the people being sold these products and buying them are expecting a sentient robot lover. What they're getting is another shitty chatbot that inevitably fails to meet bare-minimum companionship standards, such as not berating you.

There currently exists no ethical use of LLM AI. Your comment can be construed as defence of malicious people and actions.

[–] Cethin@lemmy.zip 2 points 9 months ago (1 children)

I've never met anyone who uses them, but I'm also not sure people actually think it's sentient. I'm sure some do, but I'd assume the vast majority are just looking to have a conversation, and they don't care if it's with a person or a (pretty good) chat bot.

Also, there is a way to use it ethically. As the post mentions, run it locally and know what you're doing with it. I don't see any issues if you're aware of what it is, just as I don't see any issue using auto-correct or any other technology. We don't need to go full Butlerian (yet).

[–] Krauerking@lemy.lol 2 points 9 months ago

Really? You don't think anyone that uses these thinks they're sentient?

Sure, it's not like the people designing these are prone to make-believing the AI is sentient too, right?

You are coming at this from your perspective, which knows them to not be real. That's not gonna be how the average moron thinks, and there are more of them than you think. And they absolutely believe there is a tiny sentient brain somewhere in there that is alive. I'm all for people doing what makes them happy, but this is also a loneliness-confirming hole to get trapped in, and it absolutely opens doors to influencing people through their imaginary friends that they think they can trust.