this post was submitted on 10 May 2025
63 points (81.2% liked)

Memes


Post memes here.

A meme is an idea, behavior, or style that spreads by means of imitation from person to person within a culture and often carries symbolic meaning representing a particular phenomenon or theme.

An Internet meme, or simply a meme, is a cultural item that is spread via the Internet, often through social media platforms. The name comes from the concept of memes proposed by Richard Dawkins in 1976. Internet memes can take various forms, such as images, videos, GIFs, and other viral sensations.



founded 2 years ago
top 22 comments
[–] umbrella@lemmy.ml 1 points 7 minutes ago

was the ai trained on reddit commenters? just asking.

[–] OpenStars@piefed.social 2 points 31 minutes ago

That's just what they want us to think! /s 😜

Wait a minute... oh no no no no no no, that is what they want ~~to sell us~~ us to think! (as they game the system and control the AI, no /s no cap!)


[–] amzd@lemmy.world 1 points 35 minutes ago

It’s weird to hold the belief that AI won’t oppress us while showing it that it’s fine to oppress animals as long as you’re smarter

[–] ArchmageAzor@lemmy.world 7 points 2 hours ago

I do think that the best government would be one run by AI.

I do not think the AIs we currently have could run a government, though.

[–] hanke@feddit.nu 31 points 4 hours ago (2 children)
  1. You can't have unbiased AI without unbiased training data.
  2. You can't have unbiased training data without unbiased humans.
  3. Unbiased humans don't exist.
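
A minimal sketch of how points 1 and 2 play out in practice, assuming scikit-learn and entirely made-up data: a classifier trained on labels that encode a human bias simply reproduces that bias, even when the underlying "skill" is identical across groups.

```python
# Hypothetical toy example: the bias lives in the labels, not the skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 1000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
score = rng.normal(0, 1, n)            # identical "skill" distribution for both
approve_rate = np.where(group == 0, 0.8, 0.2)   # humans approved A far more often
label = rng.random(n) < approve_rate   # biased historical decisions

X = np.column_stack([group, score])
model = LogisticRegression().fit(X, label)

# Same score, different group -> the trained model reproduces the human bias.
same_score = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(same_score)[:, 1])  # roughly [0.8, 0.2]
```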
[–] jerkface@lemmy.ca 4 points 3 hours ago* (last edited 3 hours ago) (1 children)

Show your work. Point 1 especially seems suspect, since many AIs are not trained on content like you're imagining, but rather train themselves through experimentation and adversarial networks.

[–] RememberTheApollo_@lemmy.world 4 points 2 hours ago (1 children)

Even how it trains itself can be biased based on what its instructions are.

[–] jerkface@lemmy.ca 3 points 2 hours ago* (last edited 2 hours ago)

Yes, and? If you write a bad fitness function, you get an AI that doesn't do what you want. You're just saying that human-written software can have bugs.
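
To make the "bad fitness function" point concrete, here is a toy sketch (hypothetical target and fitness functions, nothing from a real system): a simple hill climber optimizes exactly the number it is given, so a slightly mis-specified fitness function yields an AI that "works" but does the wrong thing.

```python
# Toy hill climber: it pursues whatever the fitness function measures.
import random

random.seed(1)
TARGET = "hello"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def intended_fitness(s: str) -> int:
    # What we meant: reward matching characters, penalise extra length.
    return sum(a == b for a, b in zip(s, TARGET)) - abs(len(s) - len(TARGET))

def buggy_fitness(s: str) -> int:
    # What we wrote: forgot the length penalty, so padding counts as "progress".
    return sum(a == b for a, b in zip(s, TARGET)) + len(s)

def mutate(s: str) -> str:
    choice = random.random()
    if choice < 0.4 or not s:                      # append a random character
        return s + random.choice(ALPHABET)
    if choice < 0.6:                               # drop the last character
        return s[:-1]
    i = random.randrange(len(s))                   # replace one character
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def hill_climb(fitness, steps: int = 5000) -> str:
    best = ""
    for _ in range(steps):
        candidate = mutate(best)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

print(hill_climb(intended_fitness))  # settles on (or very near) "hello"
print(hill_climb(buggy_fitness))     # an ever-growing junk string scores "better"
```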

[–] Zacryon@feddit.org 2 points 2 hours ago

If you want AI agents that benefit humanity, you need biased training data and/or a bias-inducing training process, e.g. an objective like "Improve humanity in an ethical manner" (don't pin me down on that, it's just a simple example).

For example, even choosing a real environment over a tailored simulated one is already a bias in the training data, but if you want to deploy the AI agent in a real setting, that's exactly the bias you want. Bias can be beneficial. The same goes for ethical reasoning: an AI agent won't know what ethics are, or which ones are commonly preferred, if you don't introduce such a bias.
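
A minimal sketch of such a deliberately bias-inducing objective, with hypothetical action names and scores: the bare task reward prefers a harmful shortcut, while adding an ethics penalty (the bias we intentionally introduce) changes which action wins.

```python
# Hypothetical actions scored by (task_reward, harm); all numbers made up.
ACTIONS = {
    "exploit_shortcut": (10.0, 8.0),
    "cooperate":        (7.0, 0.5),
    "do_nothing":       (0.0, 0.0),
}

def objective(action: str, ethics_weight: float) -> float:
    task_reward, harm = ACTIONS[action]
    return task_reward - ethics_weight * harm

def best_action(ethics_weight: float) -> str:
    return max(ACTIONS, key=lambda a: objective(a, ethics_weight))

print(best_action(ethics_weight=0.0))  # 'exploit_shortcut' -- the "unbiased" objective
print(best_action(ethics_weight=1.0))  # 'cooperate' -- the bias we actually want
```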

[–] Skullgrid@lemmy.world 11 points 3 hours ago

Every sci fi work : oh no, the technology is bad

Reality : the assholes using the tool are making it do bad things

[–] gibmiser@lemmy.world 1 points 2 hours ago

https://en.m.wikipedia.org/wiki/The_Evitable_Conflict

What if nearly perfect computers controlled government?

[–] ThePyroPython@lemmy.world 7 points 4 hours ago

Literally the plot of every sci-fi show with an "overseer".

[–] MelastSB@sh.itjust.works 6 points 4 hours ago (1 children)

Do you have time to talk about our Lord and Saviour Samaritan?

[–] hydrashok@sh.itjust.works 1 points 2 hours ago

Loved that show.

[–] Fuzzy_Dunlop@lemm.ee 2 points 4 hours ago (2 children)

Absolutely.

Every time I hear someone question the safety of self-driving cars, I know they've never been to Philadelphia or NJ.

[–] 9point6@lemmy.world 4 points 3 hours ago

I mean TBF, they don't trust the average person in New Jersey to handle a petrol pump—so much so that it's legally prohibited.

I'm not at all surprised that they shouldn't be trusted with the vehicle itself, given that.

[–] Natanox@discuss.tchncs.de 10 points 4 hours ago (2 children)

I mean, the US really isn't a good example for road safety. Even Germany has better drivers, and we like to drive 140–200 km/h. It's a matter of good education, standards and regulations (as always).

In the end, self-driving public transport is the way the future of mobility should primarily go, imho. Self-driving cars… as long as there is always a steering wheel in case of unexpected circumstances, or to move around backyards and stuff, it'll probably be fine. Just don't throw technical solutions at cultural problems and expect them to be fixed.

[–] Zacryon@feddit.org 3 points 2 hours ago

> I mean, the US really isn't a good example for road safety. Even Germany has better drivers, and we like to drive 140–200 km/h. It's a matter of good education, standards and regulations (as always).

I didn't want to believe it either, but it seems to be factually correct, as per this wonderful Wikipedia list.

[–] BuboScandiacus@mander.xyz 4 points 3 hours ago (1 children)

Ah yes, the good regulations on German public roads.

[–] boonhet@lemm.ee 4 points 2 hours ago

They're so well regulated that they can safely drive on roads with no speed limit, whereas the US, for example, has pretty low limits and multiple times the fatal crashes (proportional to population).
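
A rough back-of-the-envelope check of that claim, using approximate recent road-fatality rates of the kind listed on the Wikipedia page cited above (ballpark figures only, not exact statistics):

```python
# Approximate road deaths per 100,000 inhabitants per year (ballpark values).
fatality_rate_per_100k = {
    "United States": 12.9,
    "Germany": 3.1,
}

ratio = fatality_rate_per_100k["United States"] / fatality_rate_per_100k["Germany"]
print(f"US rate is roughly {ratio:.1f}x Germany's, relative to population")
# -> roughly 4x, consistent with "multiple times the fatal crashes"
```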

[–] DavidGarcia@feddit.nl -2 points 4 hours ago (1 children)

AI judges make a lot of sense: that way everyone is treated equally, because every judge thinks literally the same way. No corrupt judges, no change in political bias between judges, no more lenient or strict judges that arbitrarily decide your fate. How you decide which AI model is your judge is a whole new can of worms, but it definitely has lots of upsides.

[–] qaz@lemmy.world 4 points 3 hours ago

Perhaps when we have real AGI, but I wouldn't want an LLM to decide someone's fate.