this post was submitted on 15 Mar 2025
592 points (97.9% liked)

Technology

[–] Feathercrown@lemmy.world 6 points 51 minutes ago

Sorry, but the AI is just as "biased" as its training data is. You cannot build something with a consistent representation of reality that they would consider unbiased.
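
The point about a model mirroring its training data can be made concrete with a toy sketch (my own illustration, not from the thread): the simplest possible "model" of a text corpus is its empirical word distribution, and a skewed corpus yields a skewed model, with no neutral version hiding underneath.

```python
from collections import Counter

def train_unigram(corpus):
    """A toy 'model': the empirical frequency of each word in the training data."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# Skewed data in, skewed model out: the distribution *is* the model.
model = train_unigram(["cat"] * 3 + ["dog"])
print(model)  # {'cat': 0.75, 'dog': 0.25}
```

Real models are vastly more sophisticated, but the same principle applies: they estimate the distribution of their training data, so "debiasing" means choosing a different target distribution, not recovering a neutral one.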

[–] phoenixz@lemmy.ca 5 points 1 hour ago

That's what they've been trying to do, just not in the way you want it

[–] NotLemming@lemm.ee 91 points 1 day ago (1 children)

He means they must insert ideological bias on his behalf.

[–] rottingleaf@lemmy.world -3 points 2 hours ago (1 children)

Not necessarily. They train models on real-world data, often on what people believe to be true rather than what works, and those models are not yet able to perform experiments, record the results, and learn from them (which even a child does, even a dumb one). And the real world is cruel; bigotry is not even the worst part of it, and neither are anti-scientific beliefs. But unlike these models, the real world has more entropy.

If you've seen Babylon 5, the philosophical difference between the Vorlons and the Shadows was somewhere near this.

One can say that, philosophically, blockchain is a Vorlon technology and LLMs are a Shadow technology (which is funny, because technically it would be the other way around: one is kinda grassroots and the other is done by a few groups with humongous amounts of data and computing resources), but ultimately both are attempts to compensate for what their creators see as wrong in the real world, introducing new wrongs in their blind spots.

(In some sense the reversal of alignment between Vorlons and Shadows, between philosophy and implementation, is right: you hide in the technical traits of your tooling that which you can't keep in your philosophy. So "you'll think what we tell you to think" works for the Vorlons (or Democrats), while Republicans have to hide it inside the tooling and mechanisms they prefer; and "power makes power" is something Democrats can't just say, but can hide inside the tooling they prefer, or at least don't fight too hard. That's why cryptocurrencies' popularity came during one side's period of ideological dominance, and "AIs" during the other's. Maybe this is a word salad.)

So, what I meant: the degeneracy of such tools is itself the bias in his favor; there's no need for anything else.

[–] RememberTheApollo_@lemmy.world 1 points 2 hours ago

I can’t believe you worked a B5 ref into a discussion, much less operational differences between Vorlon and Shadow.

The major difference, even in the analogy, is that the Shadows actively and destructively sought control and withheld info, whereas the Vorlons manipulated by parceling out cryptic messages.

Anyway, yeah… the internet is completely fucked up and full of stupidity, malice, and casual cruelty. Many of us filter it by simply avoiding it by chance (it’s not what we look for) or actively filter it (blocking communities, sites, media, etc.), so we don’t see the shitholes of the internet and the hordes of trolls and wingnuts that are the denizens of these spaces.

Removing filters from LLMs and training them on shitholes will have the expected result.

[–] SabinStargem@lemmings.world 43 points 1 day ago* (last edited 1 day ago) (3 children)

While I do prefer absolute free speech for individuals, I have no illusions about what Trump is saying behind closed doors: "Make it like me, and everything that I do." I don't want a government to decide for me and others what is right.

Also, science, at least the peer-reviewed stuff, should be considered free of bias. Real-world mechanics, be it physics or biology, can't be considered biased. We need science because it makes life better. A false science, such as phrenology or RFK's la-la-land ravings, needs to be discarded because it doesn't help anyone. Not even the believers.

[–] BlackSheep@lemmy.ca 2 points 7 hours ago* (last edited 7 hours ago) (2 children)

Science is not about government, or right and left, or free speech. It’s just science. It’s about individuals spending their lives studying a specific subject. Politicians who know nothing about those subjects should have no say. I shudder to think what might have happened during the polio outbreak under today's U.S. politicians.

Edit: In support of your comment.

[–] NotLemming@lemm.ee 1 points 23 minutes ago

I'd say science is about finding truth by rejecting untruths.

A fundamental question is whether there is such a thing as objective truth. I'd argue yes. MAGAs would probably say no (at least I know one who gave that answer). To them there's only your version of reality vs. theirs.

That's why they invent and choose to believe untruths, because they believe they can invent truth rather than find it.

[–] Zacryon@feddit.org 2 points 4 hours ago

Politicians who know nothing about those subjects should have no say.

Some ethical guidelines are very important though. We usually don't want to conduct potentially deadly experiments on humans for example.

[–] splinter@lemm.ee 12 points 17 hours ago (1 children)

Reality has a liberal bias.

[–] AnonomousWolf@lemm.ee 7 points 8 hours ago

Historically liberals have always been right and eventually won.

Got rid of slavery. Got women's rights. Got gay rights. Etc.

[–] gabbath@lemmy.world 25 points 1 day ago (3 children)

Le Chat by Mistral is a France-based (and EU abiding) alternative to ChatGPT. Works fine for me so far.

[–] uuldika@lemmy.ml 3 points 7 hours ago (1 children)

I'm switching to DeepSeek-R1, personally. Locally hosted, so I won't be affected when the US bans it. Plus, I can remove the CCP's political sensitivity filters.

It feels weird for me to be rooting for the PRC to pull ahead of the US on AI, but the idea of Trump and Musk getting their hands on a potential superintelligence down the line is terrifying.

[–] gabbath@lemmy.world 3 points 7 hours ago (1 children)

I get where you're coming from. I'm no fan of China and they're definitely fascist in my book, but if I had to choose between China and this America, then definitely China. The reason being that a successful fascist America would add even more suffering to the world than there already is. Still, I would prefer that an option from a democratic country succeed — although if we're talking strictly local use of Chinese (or even US) tech, I don't really see how that helps the country itself. To the high seas, as they say.

[–] rottingleaf@lemmy.world 1 points 2 hours ago (1 children)

but if I had to choose between China and this America, then definitely China.

Suppose they are equally powerful, which one would you choose then?

[–] gabbath@lemmy.world 1 points 1 hour ago

I suppose it wouldn't matter at that point? I'm not sure what you mean exactly. There's a lot of instability in America right now as it tries to become fully fascist, and I think the world (to any Americans reading this — this includes you too!) has to decide whether they're fine with it or not, which will in turn affect its success in becoming fully fascist. Anything done to make it harder for the transformation to complete could turn the tide, since they're more vulnerable while things are in motion. Once it's done and that becomes the norm, it's going to become much more difficult.

[–] Darkmoon_UK@lemm.ee 2 points 8 hours ago* (last edited 1 hour ago) (1 children)

I've been an enthusiastic adopter of generative AI in my coding work, and I know that Claude 3.7 is the greatest coding model out there right now (at least for my niche).

That said, at some point you have to choose principles over convenience, so I've cancelled all my US tech service accounts and am now exclusively using 'Le Chat Pro' (plus sometimes local LLMs).

Honestly, it's not quite as good, but it's not half bad either, and it is very very fast thanks to some nifty hardware acceleration that the others lack.

I still get my work done, and sleep better at night.

The more subscriptions Mistral get, the more they're able to compete with the US offerings.

Anyone can do this.

[–] gabbath@lemmy.world 1 points 7 hours ago (1 children)

The more subscriptions Mistral get, the more they're able to compete with the US offerings.

That's true. I'm still on free. How much for the Pro?

[–] Darkmoon_UK@lemm.ee 2 points 7 hours ago* (last edited 7 hours ago)

$14 USD/mo... Ironically

[–] SabinStargem@lemmings.world 6 points 1 day ago* (last edited 1 day ago) (1 children)

Personally, I find that for local AI, the recently released 111B Command A is pretty good. It actually grasps the concepts of the dice odds that I set up for a D&D-esque, JRPG-style game. Still too slow on mere gamer hardware (128 GB DDR4 + an RTX 4090) to be practical, but still an impressive improvement.
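
As a point of reference, the dice odds being tested there are easy to compute exactly; a minimal sketch (a d20-style roll-over check, with numbers I picked rather than the commenter's actual setup):

```python
def success_chance(sides, modifier, dc):
    """Probability that 1dN + modifier meets or beats a difficulty class."""
    wins = sum(1 for roll in range(1, sides + 1) if roll + modifier >= dc)
    return wins / sides

# e.g. d20 + 5 vs DC 15: rolls of 10..20 succeed, i.e. 11 of 20 faces
print(success_chance(20, 5, 15))  # 0.55
```

Closed-form answers like this make dice questions a nice probe of whether a model is actually reasoning about odds or just pattern-matching.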

Sadly, Cohere is located in the US. On the other paw, from my brief check they operate out of California and New York. This is good: it makes it less likely that they'll obey Trump's stupidity.

[–] nargis@lemmy.dbzer0.com 46 points 1 day ago* (last edited 1 day ago) (4 children)

eliminates mention of “AI safety”

AI datasets tend to have a white bias. White people are over-represented in photographs, for instance. If one trains AI with such datasets for something like facial recognition (with mostly white faces), it will be less likely to identify non-white people as human. Combine this with self-driving cars and you have a recipe for disaster: since the AI is worse at detecting non-white people, it is less likely to avoid hitting them in an accident. This is both stupid and evil. You cannot always account for unconscious bias in datasets.
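
The failure mode described above is exactly what a per-group accuracy breakdown catches; a minimal sketch (synthetic numbers and hypothetical group labels, purely illustrative):

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Per-group accuracy: a basic fairness check for, e.g., a face detector."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += (t == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical detector output: 1 = "face detected", 0 = missed.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 0, 0, 1]                   # two misses
groups = ["A", "A", "A", "A", "A", "B", "B", "B"]   # B under-represented
print(accuracy_by_group(y_true, y_pred, groups))    # A: 5/5, B: 1/3
```

An aggregate accuracy of 6/8 would look fine here; only the per-group split reveals the skew the comment warns about.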

“reducing ideological bias, to enable human flourishing and economic competitiveness.”

They will fill it with capitalist Red Scare propaganda.

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes.

Interesting.

“The AI future is not going to be won by hand-wringing about safety,” Vance told attendees from around the world.

That was done before. A chatbot named Tay was released into the wilds of Twitter in 2016 without much 'hand-wringing about safety'. It turned into a neo-Nazi, which, I suppose, is just what Edolf Musk wants.

The researcher who warned that the change in focus could make AI more unfair and unsafe also alleges that many AI researchers have cozied up to Republicans and their backers in an effort to still have a seat at the table when it comes to discussing AI safety. “I hope they start realizing that these people and their corporate backers are face-eating leopards who only care about power,” the researcher says.

[–] rottingleaf@lemmy.world 3 points 2 hours ago

They will fill it with capitalist Red Scare propaganda.

I feel as if "capitalist" vs "Red" has long stopped being a relevant conflict in the real world.

[–] captainlezbian@lemmy.world 6 points 19 hours ago

Yeah but the current administration wants Tay to be the press secretary

[–] MuskyMelon@lemmy.world 7 points 1 day ago (2 children)

capitalist Red Scare propaganda

I've always found it interesting that the US is preoccupied with fighting communist propaganda but not pro-fascist propaganda.

[–] ivanafterall@lemmy.world 4 points 10 hours ago* (last edited 10 hours ago)

It wasn't always that way.

tl;dr: a 1946 Department of Defense film called "Don't Be a Sucker" that "dramatizes the destructive effects of racial and religious prejudice" and warns pretty blatantly about the dangers of fascism.

[–] kkj@lemmy.dbzer0.com 2 points 13 hours ago (1 children)

Communism threatens capital. Fascism mostly does not.

[–] MuskyMelon@lemmy.world 3 points 12 hours ago

So it's never been about democracy after all.

[–] termaxima@jlai.lu 215 points 1 day ago* (last edited 1 day ago) (4 children)

Literally 1984.

This is a textbook example of newspeak / doublethink, exactly how they use the word “corruption” to mean different things based on who it’s being applied to.

[–] PeteZa@lemm.ee 5 points 1 day ago

AI is not your friend.

[–] HawlSera@lemm.ee 34 points 1 day ago* (last edited 1 day ago) (2 children)

Trump doing this shit reminds me of when the Germans demanded that all research on physics, relativity, and (thankfully) the atomic bomb stop, because it was "Jewish pseudoscience" in Hitler's eyes.

[–] Ledericas@lemm.ee 10 points 1 day ago (1 children)

Trump also complimented the Nazis recently, saying how he wishes he had their "generals".


Well the rest of the world can take the lead in scientific r&d now that the US has not only declared itself failed culturally but politically and are attacking scientific institutions and funding directly (NIH, universities, etc).

[–] floofloof@lemmy.ca 112 points 1 day ago (7 children)

So, models may only be trained on sufficiently bigoted data sets?

[–] BrianTheeBiscuiteer@lemmy.world 78 points 1 day ago (10 children)

This is why Musk wants to buy OpenAI. He wants biased answers, skewed towards capitalism and authoritarianism, presented as being "scientifically unbiased". I had a long convo with ChatGPT about rules to limit CEO pay. If Musk had his way, I'm sure the model would insist, "This is a very atypical and harmful line of thinking. Limiting CEO pay limits their potential and by extension the earnings of the company. No earnings means no employees."

[–] nonentity@sh.itjust.works 24 points 1 day ago (1 children)

Any meaningful suppression or removal of ideological bias is an ideological bias.

I propose a necessary precursor to the development of artificial intelligence is the discovery and identification of a natural instance.

[–] Saint_La_Croix_Crosse@midwest.social 12 points 1 day ago (1 children)
[–] gabbath@lemmy.world 13 points 1 day ago

Yup, and always will be, because the antiwoke worldview is so delusional that it calls empirical reality "woke". Thus, an AI that responds truthfully will always be woke.

[–] flamingsjack@lemmy.ml 44 points 1 day ago (1 children)

I hope this backfires. Research shows there's a white, anti-Black (and white-supremacist) bias in many AI models (see ChatGPT's responses to Israeli vs. Palestinian questions).

An unbiased model would be much more pro-Palestine and pro-BLM.

[–] singletona@lemmy.world 68 points 1 day ago (1 children)

'We don't want bias' is code for 'make it biased in favor of me.'
