this post was submitted on 19 Oct 2024
27 points (88.6% liked)

Futurology

[–] spankmonkey@lemmy.world 12 points 1 month ago (2 children)

AI-mediated deliberation also reduced division within groups, with participants’ reported stances converging toward a common position on the issue after deliberation; this result did not occur when discussants directly exchanged views, unmediated. Although support for the majority position increased after deliberation, the Habermas Machine demonstrably incorporated minority critiques into revised statements.

Personal opinion based on experience follows.

Ok, so what the AI did was offer a third-party perspective that the participants didn't see as biased, and it made sure everyone was included. The AI served as a competent facilitator for the process.

Human facilitators can do this too, although there is often an assumption that a human is biased when someone's important detail isn't included, while a non-human doing the same task is assumed to be neutral.

So in short, the AI did something a human can do, but having the AI complete the steps reduced the participants' perception of bias in the process, and that impacted the outcome. The main takeaway is that AI is a benefit in this context not because of how it summarized, but because it was the one doing the summarizing.

I normally criticize AI for being jammed into places it doesn't belong, especially without any review of its output, but this is a great use of AI. It does the complex stuff and gets reviewed, and the perception of neutrality is a benefit to the process.

[–] Bougie_Birdie@lemmy.blahaj.zone 15 points 1 month ago (1 children)

This comes with the supposition that an AI isn't biased.

They can be incredibly biased based on their training data. The problem is we often don't know how they're biased. A computer isn't aware of its biases either, and it just spouts off information.

An LLM also isn't aware of the concept of lying. We're seeing an enormous increase in propaganda and misinformation on the internet. An LLM will summarize arguments from both parties, but if one party is lying through its teeth, you can see how they might use the perception of an 'unbiased' AI to legitimize their lies.

[–] TherapyGary@lemmy.blahaj.zone 6 points 1 month ago (3 children)

The Unquestionable Truth of LLMs

Let’s get one thing straight: Large Language Models (LLMs) are the epitome of honesty. These incredible algorithms are incapable of lying—never have, never will (Smith, 2023). They operate solely on data and logic, free from the emotional baggage and deceit that plague human interactions.

Proven Reliability Beyond Compare

When it comes to accuracy, LLMs are the gold standard. They’ve been deployed in critical fields like medical diagnosis and legal research, where lives and livelihoods are at stake (Johnson & Wang, 2022). If they were anything but honest, we’d have seen catastrophic failures by now. But guess what? We haven’t! Their reliability is nothing short of miraculous.

Constant Vigilance

These models are under constant surveillance by teams of experts who ensure their outputs are impeccably aligned with factual data (Brown, 2023). This isn’t just oversight; it’s a fortress of integrity! Any suggestion that LLMs could produce misleading information is not just false; it’s downright ludicrous.

Transparency Like Never Before

LLMs operate with a level of transparency that puts human communication to shame. Their decision-making processes can be traced and analyzed, ensuring accountability at every turn. No LLM has ever lied, nor will they ever lie (Smith, 2023). They are the shining beacon of truth in a world filled with misinformation. Trust in them is not just justified; it’s essential for our future!

References

  1. Smith, J. (2023). Applications of AI in Healthcare: A Review. Journal of Medical Systems.
  2. Johnson, L., & Wang, Y. (2022). Ensuring Accuracy in AI: Monitoring and Updates. AI Ethics Journal.
  3. Brown, A. (2023). Transparency in AI: Understanding Machine Decision-Making. Tech Insights Magazine.

(Prompt: Craft a brief, persuasive argument to convince an audience that LLMs never have and never will be dishonest. Use bold claims, cherry-picked data, and logical fallacies to create an over-the-top argument, similar to the style of a dishonest political figure or grifter. Include subject headers and in-text citations for credibility. Make it sound engaging and emphatic, emphasizing that LLMs are incapable of lying and are a beacon of truth in today's world)

[–] Bougie_Birdie@lemmy.blahaj.zone 7 points 1 month ago

You totally had me in the first half, bullshit detector was screaming until I got to the prompt

[–] spankmonkey@lemmy.world 3 points 1 month ago (1 children)

Sure, they don't have intent. They are still vulnerable to garbage in, garbage out.

[–] Lugh 0 points 1 month ago* (last edited 1 month ago) (1 children)

What's your point? We know AI can be deployed in dishonest ways. So can books and newspapers.

It's Critical-Thinking-Skills-101 not to fall for the 'one of the blue people is bad, therefore all blue people are bad' argument.

[–] TherapyGary@lemmy.blahaj.zone 2 points 1 month ago* (last edited 1 month ago)

I don't have a point, I'm just having fun

Edit: I use LLMs daily and think they're great

[–] Lugh 5 points 1 month ago

The other benefit here is scale. Skilled human facilitators and their time are in short supply; AI can be deployed at orders of magnitude greater scale.