this post was submitted on 11 Sep 2025
189 points (96.1% liked)

[–] GnuLinuxDude@lemmy.ml 10 points 5 days ago (1 children)

Their primary use case in the office, from what I've seen, is asking someone a question and getting back an LLM response where they clearly didn't read what you asked, so the answer doesn't address the original question. It's so cool!

[–] gedaliyah@lemmy.world 7 points 5 days ago (1 children)

That has sorta been my experience so far. LLMs are great at producing output as long as the quality of the output doesn't really matter. Maybe there are a lot more tasks than I realize where this is the case - in my work there are not many.

[–] _g_be@lemmy.world 1 points 4 days ago

This is the entire point of an LLM: it creates something with the right statistical 'shape' of what you ask for, but the content is not guaranteed to be accurate, true, appropriate, or up to date.

So if a random person asks for a legal document and receives something that "looks right," it's very impressive to them, because they can't see the flaws that a professional or expert would. For some applications that's Good Enough, but it's nowhere near PhD-level smart.
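The "right shape, unguaranteed content" point can be illustrated with a deliberately tiny sketch (not how a real LLM works internally, just the same principle at bigram scale): a model that learns which word tends to follow which, then generates text that is statistically plausible but has no notion of truth. The corpus and all names here are made up for illustration.

```python
import random

# Toy bigram model: learn which word tends to follow which in a
# tiny made-up "legal" corpus, then generate text with the right
# statistical shape -- with no concept of whether it is true.
corpus = (
    "the court finds the contract valid . "
    "the court finds the claim invalid . "
    "the contract binds the parties ."
).split()

# Count the observed successors of each word.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)
word, out = "the", ["the"]
for _ in range(8):
    # Sample a statistically likely next word, nothing more.
    word = random.choice(follows[word])
    out.append(word)

# Legal-sounding word salad: plausible shape, unverified content.
print(" ".join(out))
```

Every output looks like a fragment of a ruling because the word statistics are right; whether the "court" actually "finds the claim invalid" is something the model has no way to know. Scaling this idea up to billions of parameters makes the shape far more convincing, not the content more true.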