[–] dastanktal@lemmygrad.ml 8 points 1 day ago (5 children)

If you've ever seen what AI tries to do when you ask it to code for you, you'd understand why it's declining.

We just hired a new guy who uses AI for everything, and it often leads him astray and into arguments he's definitely not prepared for.

[–] yogthos@lemmy.ml 8 points 1 day ago* (last edited 1 day ago) (4 children)

LLMs are a tool like anything else, and they absolutely can be useful for coding tasks once you spend the time to learn how to use them effectively. They're certainly not a substitute for knowing what you're doing, but they have their applications. My experience is that as long as you give them small, focused tasks, they can accomplish them fairly consistently. I also find they're really handy for digging through code: if I'm looking at an unfamiliar code base, it's much easier to have the LLM find the relevant parts I need to change than to dig through the code myself. They're also pretty good at tracking down random bugs. Just the other day, I had a frontend bug where a variable was being destructured from a map twice, and it's the kind of thing that's easy to miss. The LLM saved me a whole bunch of debugging time by zeroing in on it.
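To illustrate, here's a minimal sketch of that kind of bug (hypothetical TypeScript with invented names, not the actual code; a plain object stands in for the "map"):

```typescript
interface State {
  user: string;
  theme: string;
}

function render(state: State) {
  const { theme } = state;               // first destructure
  if (state.user) {
    const { theme } = { theme: "dark" }; // second destructure shadows the first
    console.log(theme);                  // always "dark" here, never state.theme
  }
  console.log(theme);                    // back to state.theme, which can surprise you
}
```

The shadowed binding is perfectly legal, so nothing flags it; you only notice when the wrong value shows up at runtime.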

[–] dastanktal@lemmygrad.ml 5 points 1 day ago (1 children)

Those are excellent use cases for AI, but it's also not a magic bullet. It can't do everything for you, and it can easily lead you astray, especially if you're not willing to fact-check it. It's a well-known fact that LLMs hallucinate, or straight up lie to you about what they know. So in many niche cases, which is what I'm doing and what we hired this guy to do, it's often not effective. Just as often as it hands you a silver bullet, it's confidently wrong.

I have seen this dude use AI-generated code and AI answers to claim things that are absolutely not true, like that setting a very high UID can balloon a Docker image to an absurd size nearing 500 gigabytes.

He also tried to use it to lecture me on how the auditors don't audit our company correctly, how we're actually doing everything completely wrong, and how he's the guy to fix it all; it'll just take him a little while to train everybody up.

LLM tools are excellent when treated with respect and their limitations are understood, but unfortunately far too many people believe they're a magic talking box that always knows everything and always has the right solution. 😮‍💨

I mean, this joker is so ridiculous that he can't even figure out how to use the AWS CLI correctly or how to set up "deploy" keys for GitHub repos. We asked him if he was comfortable working with Puppet, or at least capable of figuring it out, and he looked like we'd asked him to touch a hot stove. Did I mention this joker has 15 years of experience doing stuff like this?

When I looked at his code, it reeked of AI, full of anti-patterns I normally only see in strictly LLM-generated code.

[–] yogthos@lemmy.ml 3 points 1 day ago* (last edited 1 day ago)

I think we're in complete agreement here. These things aren't magic; they're tools with limitations. I also think they're best used by devs who already have a lot of experience: if you couldn't write the code yourself without the tool, you have no business using it. It can't do the thinking for you, and the fact that it sounds convincing shouldn't be mistaken for it having any sort of intelligence of its own.

I've seen plenty of people do terrible things with LLMs as well. Honestly, though, it's not that different from what I've seen people do manually. More times than I care to count, I've watched inexperienced devs pile kludges onto their code instead of stepping back and rethinking the design so the underlying problem goes away. LLMs just act as an accelerant here, letting people make a bigger mess faster.

The real story here, though, is that somebody with 15 years of experience could be this bad at coding. It reminds me of the time I interviewed a dev with a supposed 5 years of experience who couldn't figure out how to reverse a string because they didn't know how loops worked. That kind of thing really makes you wonder about the industry as a whole.
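For the curious, the loop version is about as basic as interview questions get; a quick sketch in TypeScript, ignoring Unicode edge cases for simplicity:

```typescript
// Reverse a string with a plain loop (operates on code units; surrogate
// pairs and combining characters are ignored for simplicity).
function reverseString(s: string): string {
  let reversed = "";
  for (let i = s.length - 1; i >= 0; i--) {
    reversed += s[i];
  }
  return reversed;
}

console.log(reverseString("hello")); // -> "olleh"
```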
