this post was submitted on 14 Jan 2025
613 points (99.4% liked)

People Twitter

5521 readers

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
top 50 comments
[–] vga@sopuli.xyz 32 points 6 days ago (3 children)

It's an tragic fact that correcting a lie is hugely more costly than making and spreading the lie.

[–] AI_toothbrush@lemmy.zip 5 points 6 days ago

That basically sums up all of the far right's talking points

[–] vga@sopuli.xyz 3 points 6 days ago

But not quite as tragic as me using the wrong article in a sentence this profound.

[–] taladar@sh.itjust.works 2 points 6 days ago

Doesn't help that a significant share of modern media and platforms are optimized for content lengths that allow one but not the other (headlines, sound bites, micro-blogging, short-form videos).

[–] ICastFist@programming.dev 15 points 6 days ago

You know how cheaters ruin online games? How their fun relies on fucking everyone over, abusing the system, and winning without respecting the rules? The same applies to people like her, trump, musk, etc. They're "communication cheaters": they don't respect the rules, they abuse the system, yet platforms want us to believe that they deserve to be treated like ordinary users.

"Waah waah mah 🧊 🍑 waah waah censorship" - Fuck you. If free speech was a game, these people would be banned.

Why would known diarrhea fetishist Mila Joy tell blatant lies on the internet?

[–] Suavevillain@lemmy.world 6 points 6 days ago* (last edited 6 days ago)

AI is going to make misinformation 100 times worse. There really need to be laws governing bot usage on social media, along with data privacy laws.

[–] lepinkainen@lemmy.world 8 points 6 days ago (3 children)

This is something AI would be good for

Have it search for specific misinformation and reply from an official account with a canned response that cites sources.
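
A back-of-the-napkin Python sketch of what that bot could look like. `fetch_recent_posts` and `post_reply` are hypothetical stand-ins for whatever platform API the official account would actually use, and the sample claim and source URL are placeholders:

```python
# Sketch only: scan recent posts for known false claims and reply with a
# canned, sourced correction from an official account.

# claim fragment -> canned correction (source link is a placeholder)
CANNED_RESPONSES = {
    "oregon refused to send fire trucks": (
        "This is false. Oregon did send engines and crews under the "
        "mutual-aid system; see the records: https://example.org/mutual-aid"
    ),
}

def fetch_recent_posts():
    """Hypothetical platform call: returns an iterable of (post_id, text) pairs."""
    return []

def post_reply(post_id, text):
    """Hypothetical platform call: reply to `post_id` from the official account."""
    print(f"replying to {post_id}: {text}")

def run_once():
    for post_id, text in fetch_recent_posts():
        lowered = text.lower()
        for claim, correction in CANNED_RESPONSES.items():
            if claim in lowered:
                post_reply(post_id, correction)
                break

if __name__ == "__main__":
    run_once()
```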

[–] frog_brawler@lemmy.world 15 points 6 days ago (4 children)

Or someone can just train an AI to spit out more misinformation.

[–] Honytawk@lemmy.zip 5 points 6 days ago (1 children)

And so, the bot wars started

We're years in at this point.

Dead Internet Theory and so on.

[–] frayedpickles@lemmy.cafe 3 points 6 days ago

Or someone could train a bot to exaggerate misinformation from known liars

Community note: In fact, Oregon sent over 420 fire trucks but due to the rampage of the woke chupacabra and the cannibals of the northern California mountains, only 69 made it as far south as Sacramento. At that point California officials realized that Oregon actually sent minivans full of weed and spent the weekend hotboxing, using the Jewish space laser that started the fires on the "defrost" setting to heat up the marijuana.

[–] lepinkainen@lemmy.world 2 points 6 days ago

Then it’s going to be AI vs AI

In this case the misinformation AI will be run by Musk’s sugar daddy Putin so it won’t get banned

[–] jatone@lemmy.dbzer0.com 2 points 6 days ago (1 children)

don't need AI for this. it's why we have systems like webs of trust and peer verification. standard crypto + some social communication solves a lot of these problems.

[–] lepinkainen@lemmy.world 1 points 6 days ago (1 children)

How can a web of trust prevent Joe-bob from claiming bullshit like in the OP?

[–] jatone@lemmy.dbzer0.com 1 points 5 days ago* (last edited 5 days ago)

first question you'd need to answer is how joe-bob got into the web of trust in the first place. in order for them to abuse their position they needed to be added to your web.

literally what bluesky is doing with their filters.

if you want to automate it... run statistical analysis on each user for how many people have banned them. you eventually find the people who are heavy but accurate with the ban button and can use their bans as inputs, and as early gauges for new accounts. combine that with account age and you'll eventually get a fairly robust, automated platform for banning misinformation accounts.
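
A rough Python sketch of that scoring idea; the `BanRecord` shape, the accuracy weighting, and the age discount are invented here for illustration, not any real platform's API:

```python
# Weight each ban by how accurate the banner has historically been, then
# discount by account age so newer accounts get flagged faster.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class BanRecord:
    banner: str       # who issued the ban
    target: str       # who got banned
    confirmed: bool   # was the ban later upheld (e.g. account removed site-wide)?

def banner_accuracy(records):
    """Fraction of each banner's bans that were later confirmed."""
    totals, hits = {}, {}
    for r in records:
        totals[r.banner] = totals.get(r.banner, 0) + 1
        if r.confirmed:
            hits[r.banner] = hits.get(r.banner, 0) + 1
    return {b: hits.get(b, 0) / n for b, n in totals.items()}

def suspicion_score(target, created_at, records):
    """Sum the accuracy of everyone who banned `target`, discounted by account age."""
    acc = banner_accuracy(records)
    weighted = sum(acc.get(r.banner, 0.0) for r in records if r.target == target)
    age_days = (datetime.now(timezone.utc) - created_at).days
    return weighted / (1.0 + age_days / 365)  # older accounts weigh less
```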

[–] ubergeek@lemmy.today 1 points 6 days ago (1 children)

Have you seen some of the "factual" information AI states definitively?

[–] lepinkainen@lemmy.world 1 points 6 days ago

Generic LLMs are shit, they try to do everything

[–] SkunkWorkz@lemmy.world 4 points 6 days ago

Anyone who believes it should heed her advice and get out.
