this post was submitted on 05 Jun 2025
946 points (98.7% liked)
Not The Onion
LLM chatbots were never designed to give life advice. People have this false perception that these tools are some kind of magical crystal ball that has all the right answers to everything, and they simply don't.
These models cannot think, and they cannot reason. The best they can do is give you their best prediction of what you want, based on the data they've been trained on and the parameters they've been given. You can think of their output as "targeted randomness," which is why their results sound close or convincing but are never quite right.
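To make "targeted randomness" concrete: at each step, an LLM scores every candidate next token and then samples from those scores, so the output is weighted toward likely continuations but never guaranteed correct. Here's a minimal sketch of that sampling step; the scores and tokens are made up for illustration, not taken from any real model.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick the next token by sampling from a score distribution.

    Higher temperature flattens the distribution (more random);
    lower temperature sharpens it (more predictable).
    """
    tokens = list(logits.keys())
    scaled = [logits[t] / temperature for t in tokens]
    # Softmax: convert raw scores into probabilities.
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice: likely tokens win most of the time,
    # but unlikely ones can still be picked.
    return random.choices(tokens, weights=probs, k=1)[0]

# Hypothetical scores for continuations of "The capital of France is".
logits = {"Paris": 9.0, "Lyon": 3.0, "pizza": 0.5}
random.seed(0)
print(sample_next_token(logits, temperature=0.7))
```

The point of the sketch: nothing in this process checks whether the chosen token is true, only whether it is statistically likely given the training data, which is why confident-sounding wrong answers fall out of it naturally.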
That's because these models were never designed to be used like this. They were meant to be a tool to aid creativity. They can help someone brainstorm ideas for projects, pass time as entertainment, explain simple concepts, or analyze basic data, but that's about it. They should never be used for anything serious like medical, legal, or life advice.
This is what I keep trying to tell my brother. He's anti-AI, but to the point where he sees absolutely no value in it at all. Can't really blame him, considering stories like this. But these tools are incredibly useful for brainstorming, and recently I've found ChatGPT to be really good at helping me learn Spanish, because it's conversational. I can have conversations with it in Spanish where I don't feel embarrassed or weird about making mistakes, and it corrects me when I'm wrong. They have uses. Just not the uses people seem to think they have.
AI is the opposite of cryptocurrency. Crypto is a solution looking for a problem, but AI is a solution to a lot of problems. It has relevance because people find it useful; there's demand for it. There's clearly value in these tools when they're used the way they're meant to be used, and they can be quite powerful. It's unfortunate how many people are misinformed about how these LLMs work.
I will admit that, unlike crypto, AI is technically capable of being useful, but its uses are for problems we have created for ourselves.
-- "It can summarize large bodies of text."
What are you reading these large bodies of text for? We can encourage people to just... write less, you know.
-- "It's a brainstorming tool."
There are other brainstorming tools. Creatives have been doing this for decades.
-- "It's good for searching."
Google was good for searching until they sabotaged their own service. In fact, Google was even better for searching before SEO began rotting it from within.
-- "It's a good conversationalist."
It is... not a real person. I unironically cannot think of anything sadder than this sentiment. What happened to our town squares? Why is there nowhere for you to go and hang out with real, flesh and blood people anymore?
-- "Well, it's good for learning languages."
Other people are good for learning languages. And, I'm not gonna lie, if you're too socially anxious to make mistakes in front of your language coach, I... kinda think that's some shit you gotta work out for yourself.
-- "It can do the work of 10 or 20 people, empowering the people who use it."
Well, the solution is in the text. Just have the 10 or 20 people do that work. They would, for now, do a better job anyway.
And, it's not actually true that we will always and forever have meaningful things for our population of 8 billion people to work on. If those 10 or 20 people displaced have nowhere to go, what is the point of displacing them? Is google displacing people so they can live work-free lives, subsisting on their monthly UBI payments? No. Of course they're not.
I'm not arguing that people can't find a use for it; all of the above points are uses for it.
I am arguing that 1) it's kind of redundant, and 2) it isn't worth its shortcomings.
AI is enabling tech companies to build a centralized—I know Lemmy loves that word—monopoly on where people get their information ("speaking of white genocide, did you know that Africa is trying to suppress...").
AI will enable Palantir to combine your government and social media data to measure how likely you are to, say, join a union, and then put that into an employee risk assessment profile that will prevent you from ever getting a job again. Good luck organizing a resistance when the AI agent on your phone is monitoring every word you say, whether your screen is locked or not.
In the same way that fossil fuels have allowed us to build cars and planes and boats that let us travel much farther and faster than we ever could before, but which will also bury an unimaginable number of dead in salt and silt as global temperatures rise: there are costs to this technology.