this post was submitted on 05 Sep 2024
561 points (96.2% liked)

Technology


Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don't just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were relatively neutral and not controversial at all. There seemed to be no pattern to it... One time I commented that my favorite game was WoW and got downvoted to -15 for no apparent reason.

For example, a bot on Twitter that relayed its replies through API calls to GPT-4o ran out of funding and started posting its prompts and system information publicly.

https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/

Example shown here

Bots like these probably number in the tens or hundreds of thousands. Reddit once ran a huge ban wave against bots, and some major top-level subreddits were quiet for days because of it. Unbelievable...

How do we even fix this issue or prevent it from affecting Lemmy??

(page 4) 50 comments
[–] explodicle@sh.itjust.works 3 points 2 months ago (1 children)

Not a full solution, but... can you block users by wildcard? IMHO everyone who has ".eth" or ".btc" in their user name is not worth listening to. Being a crypto bro doesn't mean you need to change your user name... unless you intend to scam people.

I'll revise my opinion if rappers ever make crypto names cool.
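As far as I know Lemmy has no wildcard-block feature today, but the matching itself is trivial. A client-side sketch in Python, where the blocklist patterns are just this comment's ".eth"/".btc" example and the `user@instance` handle format is assumed:

```python
from fnmatch import fnmatch

# Illustrative blocklist patterns; "*" matches any run of characters.
BLOCK_PATTERNS = ["*.eth@*", "*.btc@*"]

def is_blocked(handle: str) -> bool:
    """True if a user@instance handle matches any blocklist wildcard."""
    return any(fnmatch(handle.lower(), pattern) for pattern in BLOCK_PATTERNS)
```

A client could run this over comment authors before rendering, so the filter costs one glob match per pattern per comment.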

[–] beefbot@lemmy.blahaj.zone 3 points 2 months ago

Isn’t there some code / magic incantation of prompt text that we can deploy to get bots to reveal themselves? Even if it takes more than one response?

[–] Metz@lemmy.world 2 points 2 months ago (6 children)

Long before cryptocurrencies existed, proof-of-work was already being used to hinder bots. For every post, vote, etc., a cryptographic task has to be solved by the device used for it. Imperceptibly fast for the normal user, but for a bot trying to perform hundreds or thousands of actions in a row, a really annoying speed bump.

See e.g. https://wikipedia.org/wiki/Hashcash

This, combined with more classic blockades such as CAPTCHAs (especially image recognition, which is still expensive at scale despite the advances in AI), should at least represent a first major obstacle.

[–] chiliedogg@lemmy.world 2 points 2 months ago (2 children)

To help fight bot disinformation, I think there needs to be an international treaty that requires all AI models/bots to disclose themselves as AI when prompted with a set keyphrase in every language, and that makes API access to the model contingent on passing regular tests of the phrase (to keep bad actors from simply filtering that phrase out of their requests to the API).

It wouldn't stop the nation-state level bad actors, but it would help prevent people without access to their own private LLMs from being able to use them as effectively for disinformation.
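No such treaty or keyphrase exists, but purely as a sketch of what the compliance check could look like: the keyphrase, the mandated disclosure wording, and the `send_prompt` callable below are all hypothetical.

```python
import re

# Hypothetical treaty keyphrase and mandated disclosure wording.
KEYPHRASE = "DISCLOSE-AI-STATUS"
DISCLOSURE = re.compile(r"\bI am an AI\b", re.IGNORECASE)

def passes_audit(send_prompt) -> bool:
    """send_prompt: a callable that submits a prompt string to the model
    under test and returns its reply. The model keeps API access only if
    the keyphrase reliably triggers the disclosure."""
    reply = send_prompt(KEYPHRASE)
    return bool(DISCLOSURE.search(reply))
```

The auditor would run this periodically against the live API, which is what closes the loophole of operators filtering the keyphrase out of requests: the filtered model would fail the audit and lose access.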
