this post was submitted on 22 Apr 2025
486 points (98.0% liked)

Technology
[–] keegomatic@lemmy.world 22 points 1 day ago* (last edited 1 day ago) (3 children)

This is an increasingly bad take. If you work in an industry where LLMs are becoming very useful, you realize that hallucinations are a minor inconvenience at best for the applications they are well suited to, and that the tools are improving by leaps and bounds, week by week.

edit: Like it or not, it’s true. I use LLMs at work, most of my colleagues do too, and none of us use the output raw. Hallucinations are not an issue when you are actively collaborating with the model rather than using it to either “know things for you” or “do the work for you.” Neither of those is what LLMs are really good at, but that’s what most laypeople use them for, so these criticisms are very obviously short-sighted to those of us who have real-world experience with them in a domain where they work well.

[–] FunnyUsername@lemmy.world 26 points 1 day ago* (last edited 1 day ago) (2 children)

You're getting downvoted because you accurately conceive of and treat LLMs the way they should be treated: as tools. The people downvoting you don't have this perspective, because the only perspective pushed to people outside a technical career or research is "it's artificial intelligence and it will revolutionize society, but lol, it hallucinates if you ask it stuff." This is essentially propaganda, because the real message should be "it's an imperfect tool like all tools, but boy will it make getting certain types of work done way more efficient, so we can redistribute our own efforts to other tasks quicker and take advantage of LLMs' advanced information-processing capabilities."

tl;dr: people disagree about AI/LLMs because one group thinks about them like Dr. Know from the movie A.I. and the other thinks about them like a TI-86 on steroids

[–] KeenFlame@feddit.nu 2 points 18 hours ago

Well, there is also the group that thinks they are "based," "fire," and so on. As always, fanatics ruin everything. They aren't God, nor a plague. Find another interest if this bores you.

[–] keegomatic@lemmy.world 2 points 1 day ago

Yep, you’re exactly right. That’s a great way to express it.

[–] CheeseNoodle@lemmy.world 10 points 1 day ago (1 children)

Oh, we know the edit part. The problem is all the people in power trying to use it to replace jobs wholesale, with no oversight and no understanding that it needs a human to curate the output.

[–] keegomatic@lemmy.world 3 points 1 day ago

That’s not the issue I was replying to at all.

replace jobs wholesale, with no oversight and no understanding that it needs a human to curate the output

Yeah, that sucks, and it’s pretty stupid, too, because LLMs are not good replacements for humans in most respects.

we

Don’t “other” me just because I’m correcting misinformation. I’m not a fan of corporate bullshit either. Misinformation is misinformation, though. If you have a strong opinion about something, then you should know what you’re talking about. LLMs are a nuanced subject, and they are here to stay, for better or worse.