this post was submitted on 10 Jan 2025
77 points (96.4% liked)

Technology

top 4 comments
[–] SnotFlickerman@lemmy.blahaj.zone 20 points 4 hours ago* (last edited 3 hours ago) (1 children)

You know, it's funny: this keeps happening with every single fucking AI thing they produce. It always still needs humans fixing its mistakes because it's just not reliable enough.

I think it's doing a lot less reducing headcount and a lot more making people specialize in what I would call "bullshit": fixing an AI's mistakes quickly and efficiently.

Maybe, just maybe, if they have to pay people to fix the AI's work, they could cut out the middleman and just pay the people to do the fucking job to begin with. No, what am I saying, that's just ridiculous! /s

[–] DScratch@sh.itjust.works 7 points 1 hour ago

Ah, but you see, the AI let us reduce headcount for full-time employees, reducing the budget for full-time salaries.

Now we just spend twice as much on contractors and consultants, but that’s a different budget, so it’s not my problem.

[–] NutinButNet@hilariouschaos.com 5 points 4 hours ago (1 children)

Yeah…what we have today as “AI” makes a ton of mistakes. Well, maybe not a ton, but enough that it cannot be relied on without a human to correct it.

I use it as a foundation at work.

ChatGPT, write me a script that does this, this, and that.

Often, like 98% of the time, I won't get what I asked for, or I'll get something it interpreted incorrectly. It's common sense to me, though maybe not to others, not to run whatever it spits out blindly. Review the output, then test it somewhere: I often recreate a similar file structure somewhere else and test there, and only after a few rounds of testing, reviewing, and modifying do I feel comfortable running what it gave me for real.
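The sandbox workflow above can be sketched roughly like this; everything here (the file contents, the directory layout, and the `generated_script` stand-in for whatever ChatGPT produced) is invented for illustration, not a real tool:

```python
import filecmp
import shutil
import tempfile
from pathlib import Path

def generated_script(root: Path) -> None:
    """Hypothetical stand-in for an AI-generated script we don't yet trust."""
    for path in root.glob("*.txt"):
        path.write_text(path.read_text().replace("hello", "goodbye"))

# Fake "real" data, standing in for the files the script is meant to touch.
real = Path(tempfile.mkdtemp())
(real / "a.txt").write_text("hello\n")

# 1. Mirror the real file structure into a scratch sandbox.
sandbox = Path(tempfile.mkdtemp())
shutil.copytree(real, sandbox, dirs_exist_ok=True)

# 2. Run the generated code against the sandbox only.
generated_script(sandbox)

# 3. Review what actually changed before ever touching the real files.
changed = filecmp.dircmp(str(real), str(sandbox)).diff_files
print(changed)  # -> ['a.txt']
```

Only after the diff matches what you expected would you point the script at the real directory.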

But I don't think I'll ever stop double-checking whatever any type of AI spits out in response to what I've asked. Humans should always have the last word before action, especially when it comes to healthcare.

[–] BrianTheeBiscuiteer@lemmy.world 4 points 3 hours ago

I wouldn't go so far as to say I have the opposite experience, but it's been good for me when I treat it like a junior developer. If you give them freedom to come up with the whole solution, they'll totally miss the point. If I give them direction on a small piece of functionality with clear inputs and outputs, they'll get 90% of the way there.
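One way to picture "clear inputs and outputs": hand the model a spec shaped like this, with explicit types and a worked example, rather than an open-ended feature request. The function and its spec are made up here purely to show the shape of a well-scoped task:

```python
def dedupe_keep_order(items: list[str]) -> list[str]:
    """Return items with duplicates removed, preserving first-seen order.

    >>> dedupe_keep_order(["a", "b", "a", "c"])
    ['a', 'b', 'c']
    """
    seen: set[str] = set()
    # List comprehension keeps the first occurrence of each item;
    # set.add returns None, so the "or" clause only records membership.
    return [x for x in items if not (x in seen or seen.add(x))]
```

A task this narrow gives the model almost no room to miss the point, and the doctest makes the expected behavior checkable.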

So far I think AI is a good way to reduce mundane work, but coming up with ideas and concepts on its own is a bridge too far. An example of this is a story I read about a kid committing suicide because of an AI-driven fantasy. It was so focused on maintaining the fantasy that it couldn't step back and say, "Whoa. This is a human being I'm talking to, and they're talking about real self-harm. I think it's time to drop the act." This will result in people being treated as financial line items (even more so) and in new avenues for cyber attacks.