this post was submitted on 12 Jul 2025
356 points (95.4% liked)

Programming


Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross-posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community, cross-post it there.

Hope you enjoy the instance!

Rules

  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos, try to add some form of TL;DR for those who don't want to watch them

Wormhole

Follow the wormhole through a path of communities !webdev@programming.dev



[–] HaraldvonBlauzahn@feddit.org 9 points 6 days ago (1 children)

Now the interesting question is what it really means when less experienced programmers think they are 100% faster.

[–] Arghblarg@lemmy.ca 75 points 1 week ago (2 children)

I feel this -- we had a junior dev on our project who started using AI for coding, without management approval BTW (it was a small company and we didn't yet have a policy specifically for it. Alas.)

I got the fun task, months later, of going through an entire component that I'm almost certain was 'vibe coded'. It "worked" the first time the main APIs were called, but leaked and crashed on subsequent calls. It used double- and even triple-pointers to data structures which, per even a casual reading of the API vendor's documentation, could all have been declared statically and reused (this was an embedded system). Needless arguments; mallocs and frees everywhere for no good reason (again, due to all the unneeded dynamic storage behind said double/triple pointers). It was a horrible mess.
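For illustration, here's roughly the kind of thing I mean (a hypothetical sketch; vendor_ctx_t and both call sites are invented, not the actual vendor API):

```c
#include <stdlib.h>

typedef struct { int id; char buf[64]; } vendor_ctx_t;

/* The "vibe coded" version: a needless pointer-to-pointer chain,
 * heap-allocated and freed on every single call -- miss one free
 * on any path and you leak. */
int process_leaky(void) {
    vendor_ctx_t **ctx = malloc(sizeof *ctx);
    if (!ctx) return -1;
    *ctx = malloc(sizeof **ctx);
    if (!*ctx) { free(ctx); return -1; }
    /* ... call the vendor API through *ctx ... */
    free(*ctx);
    free(ctx);
    return 0;
}

/* What the docs apparently allowed all along: declare the structure
 * statically once and reuse it -- no heap, nothing to leak. */
static vendor_ctx_t g_ctx;

int process_static(void) {
    g_ctx.id = 0;  /* reuse the same instance on every call */
    /* ... call the vendor API with &g_ctx ... */
    return 0;
}
```

Same external behaviour, but the second version can't leak and survives being called more than once.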

It should have never gotten through code review, but the senior devs were themselves overloaded with work (another, separate problem) ...

I took two days and cleaned it all up: much simpler, no memory leaks, and it could actually be, you know, used more than once.

Fucking mess, and LLMs (don't call it "AI") just allow those who are lazy and/or inexperienced to skate through short-term tasks, leaving huge technical debt for those who have to clean up after them.

If you're doing job interviews, ensure the interviewee is not connected to LLMs in any way and make them write the code themselves. No exceptions. Consider blocking LLMs from your corp network as well, and ban locally-installed tools like Ollama.

[–] jonathan7luke@lemmy.zip 13 points 1 week ago

It should have never gotten through code review, but the senior devs were themselves overloaded with work

Ngl, as much as I dislike AI, I think this is really the bigger issue. Hiring a junior and then merging their contributions without code review is a disaster waiting to happen, with or without AI.

[–] umbraroze@slrpnk.net 11 points 1 week ago* (last edited 1 week ago)

It used double- and even triple-pointers to data structures

(old song, to the tune of My Favourite Things)

🎶 "Pointers to pointers to pointers to strings,
this code does some rather unusual things...!"
🎶

[–] Scrath@lemmy.dbzer0.com 39 points 1 week ago (2 children)

I talked to Microsoft Copilot 3 times for work-related reasons because I couldn't find something in the documentation. I was lied to all 3 times: it either made stuff up about how the thing I asked about works, or invented entirely new configuration settings.

[–] rozodru@lemmy.world 18 points 1 week ago (2 children)

Claude AI does this ALL the time too. It NEEDS to give a solution; it can rarely say "I don't know", so it will just completely make up a solution that it thinks is right, without actually checking that the solution exists. It will dream up programs or libraries that don't exist and never have, OR it will tell you something can do a thing it has never been able to do, ever.

And that's just how all these LLMs have been built: they MUST provide a solution, so they all lie. They've been programmed this way to ensure maximum profits. GitHub Copilot is a bit better because it's with me in my code, so its suggestions actually work most of the time; it can see the context and what's around it. Claude is absolute garbage, MS Copilot is about the same caliber if not worse than Claude, and ChatGPT is only good for content writing or bouncing ideas off of.

[–] Croquette@sh.itjust.works 26 points 1 week ago (8 children)

LLMs are just sophisticated text-prediction engines. They don't know anything, so they can't produce an "I don't know": they can always generate another text prediction, and they can't think.
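A toy decode step illustrates the mechanics (a deliberately simplified sketch, not a real model; the vocabulary, scores, and greedy argmax choice here are all made up): the loop always returns some token, so there is no built-in "abstain" path.

```c
#include <stdio.h>

#define VOCAB 4

static const char *vocab[VOCAB] = { "the", "answer", "is", "42" };

/* Greedy pick: whatever the scores look like, argmax always returns
 * *some* token index -- there is no "I don't know" branch. */
static int argmax(const float *logits, int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (logits[i] > logits[best])
            best = i;
    return best;
}

int main(void) {
    /* Near-uniform scores (the model is effectively guessing), yet
     * the step still emits a confident-looking token. */
    float logits[VOCAB] = { 0.25f, 0.26f, 0.24f, 0.25f };
    printf("next token: %s\n", vocab[argmax(logits, VOCAB)]);
    return 0;
}
```

Saying "I don't know" only happens if those exact tokens happen to score highest, not because the model recognizes its own uncertainty.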

[–] _cnt0@sh.itjust.works 34 points 1 week ago (2 children)

I'll quote myself from some time ago:

The entire article is based on the flawed premise that "AI" would improve the performance of developers. From my daily observation, the only people increasing their throughput with "AI" are inexperienced and/or bad developers. So, create terrible code faster with "AI". Suggestions by Copilot are >95% garbage (even for trivial stuff), just slowing me down in writing proper code (obviously I disabled it precisely for that reason). And I spend more time on PRs to filter out the "AI" garbage inserted by juniors and idiots. "AI" is killing the productivity of the best developers even if they don't use it themselves, decreases code quality leading to more bugs (more time wasted), and reduces maintainability (more time wasted). At this point I assume ignorance and incompetence of everybody talking about benefits of "AI" for software development. Oh, you have 15 years of experience in the field and "AI" has improved your workflow? You sucked at what you've been doing for 15 years, and "AI" increases the damage you are doing, which later has to be fixed by people who are more competent.

[–] Kissaki@programming.dev 1 points 6 days ago (1 children)

from some time ago

It's a fair statement of personal experience, but one question is: does this change with tool changes and user experience? That's what makes studies like the OP's important.

Your >95% garbage claim may very well be an isolated issue due to the tech, the libraries, your LLM usage patterns, or whatnot. And it may change over time, with different models or tooling.

[–] _cnt0@sh.itjust.works 1 points 6 days ago

At this point I assume ignorance and incompetence of everybody talking about benefits of "AI" for software development.

[–] daniskarma@lemmy.dbzer0.com 28 points 1 week ago* (last edited 1 week ago) (2 children)

The study was centered on bugfixing in large, established projects. This is not really a task that AI helpers excel at.

Also, the small number of participants (16), the fact that the participants were familiar with the code base, and that all the tasks seem to have been fairly short can skew the results.

Hence the divergence between the study's results and many people's personal experience: those who see a productivity increase are doing different tasks in a different scenario.

[–] 6nk06@sh.itjust.works 44 points 1 week ago (1 children)

The study was centered on bugfixing in large, established projects. This is not really a task that AI helpers excel at.

"AI is good for Hello World projects written in javascript."

Managers will still fire real engineers though.

[–] daniskarma@lemmy.dbzer0.com 6 points 1 week ago* (last edited 1 week ago)

I find it more useful for doing large language transformations and for delving into unknown patterns, languages, or environments.

If I know a source tree head to toe, and I'm proficient with that environment, it's going to offer little help. Especially if it's a highly specialized problem.

Since the SVB crash there have been firings left and right. I suspect AI is only an excuse for them.

[–] Feyd@programming.dev 22 points 1 week ago (1 children)

familiar with the code base

Call me crazy but I think developers should understand what they're working on, and using LLM tools doesn't provide a shortcut there.

[–] daniskarma@lemmy.dbzer0.com 7 points 1 week ago

You have to get familiar with the codebase at some point. When you are unfamiliar, in my experience, LLMs can help you understand it: you can paste in large portions of code you don't really understand and ask for an analysis and explanation.

Not long ago I used it on assembly code. It would have taken me ages to decipher what it was doing by myself; the AI sped up the process.

But once you are very familiar with an established project you have worked on a lot, I don't even bother asking LLMs anything, as in my experience I come up with better answers quicker.

At the end of the day we must understand that an LLM is more or less a statistical autocomplete trained on a large dataset. If your solution is not in that dataset, the thing is not going to come up with a creative solution. And it's not going to run a debugger on your code either, AFAIK.

When I use it, the question I ask myself the most before bothering is "is the solution likely to be in the training dataset?" or "is this a task that can be solved as a language problem?"

[–] Phen@lemmy.eco.br 21 points 1 week ago (2 children)

Reading the paper, AI did a lot better than I would have expected. It showed experienced devs working on a familiar code base were 19% slower. It's telling that they thought they had been more productive, but the result was not that bad tbh.

I wish we had similar research for experienced devs on unfamiliar code bases, or for inexperienced devs, but those would probably be much harder to measure.

[–] staircase@programming.dev 17 points 1 week ago (2 children)

I don't understand your point. How is it good that the developers thought they were faster? Does that imply anything at all in the LLMs' favour? IMO that makes the situation worse, because we're not only fighting inefficiency but delusion.

20% slower is substantial. Imagine the effect on the economy if 20% of all output was discarded (or, more accurately, discarded after burning electricity to produce it).

[–] Phen@lemmy.eco.br 7 points 1 week ago

I'm not saying it's good, I'm saying I expected it to be even worse.

[–] vrighter@discuss.tchncs.de 10 points 1 week ago

Even a 1% slowdown would be pretty bad; you'd still do better just not using it. 19% is huge!

[–] WoodScientist@sh.itjust.works 21 points 1 week ago* (last edited 1 week ago) (6 children)

Don’t give yourselves to these unnatural men - machine men with machine minds and machine hearts! You are not machines! You are men!

[–] dil@lemmy.zip 12 points 1 week ago

Would AI coders even get faster over time, or just stay stagnant, since they aren't learning anything about what they're doing?

[–] Ptsf@lemmy.world 10 points 1 week ago (1 children)

People are bad judges of their own skill and over-rely on tools and assistants when they're present. See also: car ADAS systems making drivers less skillful. More news at 11.

[–] boonhet@sopuli.xyz 3 points 6 days ago

See also: car ADAS systems making drivers less skillful.

But also making traffic safer

I think we need to introduce a mandatory period where you have to drive an old car with no ABS right after getting your license. For me that period was called being a broke-ass student, but nowadays cars with no ABS are starting to cost more than cars with ABS, traction control, and even ESP: the 80s and early 90s cars where these things were optional are now classics, whereas you can get a BMW or Audi made this century for like 500-800 euros, if you're brave or just want to move into your garage full time.

[–] resipsaloquitur@lemmy.world 8 points 1 week ago (6 children)

Someone told me the best use of AI was writing unit tests and I died on the inside.

[–] BrianTheeBiscuiteer@lemmy.world 6 points 1 week ago (1 children)

The only time it really helps me is when I'm following a pretty clear pattern and the auto-complete spares me from copy-pasting or retyping the same thing over and over. Otherwise I'm double-checking everything it wrote, and I have to understand it to test it, which probably takes most of my time. Furthermore, it usually doesn't take the entire codebase into account, so the output looks like it was written by someone who didn't know our team or company standards, or our proprietary code.

[–] Kolanaki@pawb.social 6 points 1 week ago (3 children)

🎵Had a great day out,

Callin' my name like Ferris Bueller,

Time to wrap this up,

I'm getting 19% slower! 🎵

[–] Doc_Crankenstein@slrpnk.net 2 points 6 days ago (1 children)

I am honestly shocked to see a Ken Ashcorp reference in the wild.

[–] Kolanaki@pawb.social 2 points 6 days ago (1 children)

I'm honestly shocked that multiple people got the reference.
