this post was submitted on 07 Feb 2025
26 points (100.0% liked)

Technology

[–] SorteKanin@feddit.dk 9 points 2 weeks ago (2 children)

Am I the only one who feels it's a bit strange to have such safeguards in an AI model? I know most models are only available online, but some can be downloaded and run locally, right? So what prevents me from just doing that if I want to get around the safeguards? I guess maybe they're just doing it so that they can't be held legally responsible for anything the AI model might say?
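(For context on why a local download doesn't fully escape the safeguards: there are roughly two layers. Safety fine-tuning is baked into the weights themselves and travels with any download, while provider-side filters are just code wrapped around the hosted model and do not. A toy sketch of that second layer, with an entirely made-up blocklist and a fake stand-in for a model:)

```python
# Toy illustration only: a provider-side guardrail is code wrapped around the
# model, so it does not travel with downloaded weights. The blocklist and
# function names here are hypothetical, not any real provider's implementation.
BLOCKLIST = {"slur1", "slur2"}  # stand-in for a real safety classifier

def hosted_generate(model, prompt):
    """What a hosted API can do: generate, then filter before returning."""
    text = model(prompt)
    if any(term in text.lower() for term in BLOCKLIST):
        return "[refused by safety filter]"
    return text

def local_generate(model, prompt):
    """The same weights run locally: no wrapper, nothing filters the output."""
    return model(prompt)

# A fake "model" standing in for downloaded weights.
fake_model = lambda prompt: f"response containing slur1 to: {prompt}"

print(hosted_generate(fake_model, "hi"))  # filtered
print(local_generate(fake_model, "hi"))   # unfiltered
```

So downloading the weights removes the wrapper layer but not the fine-tuning layer, which is part of why both exist.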

[–] theneverfox@pawb.social 11 points 2 weeks ago

The idea is that they're marketable worker replacements.

If you have a call center you want to switch to AI, it's easy enough to make the models pull up relevant info. It's harder to stop them from being misused.

If your call center gets slammed for using racial slurs, that's a problem.

Remember, they're trying to sell AI as a drop-in worker replacement.

[–] dipshit@lemm.ee 2 points 1 week ago* (last edited 1 week ago)

I think a big part of it is just that they want control; they want to limit what we're capable of doing. They especially don't want us doing things that go against their interests as companies, which is why they try to block the things they dislike so much, like generating porn or discussing violent content.

I noticed that certain prompts people used for AI poisoning are now flagged as against ChatGPT's terms of service, so the whole "control" thing doesn't seem so crazy.