this post was submitted on 28 Oct 2025
200 points (96.3% liked)

Programming

23381 readers
345 users here now

Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross-posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community, cross-post it there.

Hope you enjoy the instance!

Rules

  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos, try to add some form of TL;DR for those who don't want to watch them

Wormhole

Follow the wormhole through a path of communities !webdev@programming.dev



founded 2 years ago
MODERATORS
top 50 comments
[–] melfie@lemy.lol 68 points 1 week ago (2 children)

One major problem I have with Copilot is it can’t seem to RTFM when building against an API, SDK, etc. Instead, it just makes shit up. If I have to go through line by line and fix everything, I might as well do it myself in the first place.

[–] pennomi@lemmy.world 9 points 1 week ago (1 children)

Or even distinguish between two versions of the same library. Absolutely stupid that LLMs default to writing deprecated code just because it was more common in the training data.

So much this. It's even more annoying when you fix the code and paste it back, just for it to ignore the fix lol.

[–] MinFapper@startrek.website 3 points 1 week ago

It will if you explicitly ask it to. Otherwise it will either make stuff up or use some really outdated patterns.

I usually start by asking Claude Code to search the internet for the current best practices of whatever framework I'm using. Then if I ask it to build something with that framework while the summary is still in the context window, it'll actually follow it.

[–] floofloof@lemmy.ca 45 points 1 week ago* (last edited 1 week ago) (2 children)

Yeah, the places to use it are (1) boilerplate code that is so predictable a machine can do it, and (2) with a big pinch of salt for advice when a web search didn't give you what you need. In the second case, expect at best a half-right answer that's enough to get you thinking. You can't use it for anything sophisticated or critical. But you now have a bit more time to think that stuff through because the LLM cranked out some of the more tedious code.

[–] Corngood@lemmy.ml 52 points 1 week ago (1 children)

(1) boilerplate code that is so predictable a machine can do it

The thing I hate most about it is that we should be putting effort into removing the need for boilerplate. Generating it with a non-deterministic 3rd party black box is insane.

[–] pennomi@lemmy.world 10 points 1 week ago (3 children)

Hard disagree. There is a certain level of boilerplate that is necessary for an app to do everything it needs. Django, for example, requires you to specify model files, admin files, view files, form files, etc. that all look quite similar but depend on your specific use case. You can easily have an AI write this boilerplate for you because the files are strongly related to one another, but they can't easily be distilled down to something simpler because there are key decisions that need to be specified.
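To make that concrete, here's a minimal sketch of the kind of related-but-distinct boilerplate Django expects (the Article model and its fields are hypothetical, and several files are collapsed into one block):

```python
# models.py -- the data model that everything else derives from
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()

# admin.py -- nearly mechanical, but list_display is a per-model decision
from django.contrib import admin

@admin.register(Article)
class ArticleAdmin(admin.ModelAdmin):
    list_display = ("title",)

# forms.py -- derived from the model, with a decision about which fields to expose
from django import forms

class ArticleForm(forms.ModelForm):
    class Meta:
        model = Article
        fields = ["title", "body"]

# views.py -- wires the model and form together
from django.views.generic.edit import CreateView

class ArticleCreateView(CreateView):
    model = Article
    form_class = ArticleForm
    success_url = "/articles/"
```

Every file follows from models.py, but none of them can be emitted without knowing decisions like which fields a form should expose.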

[–] Feyd@programming.dev 17 points 1 week ago (2 children)

Why does it have to be AI instead of a purpose built, deterministic tool?

[–] pennomi@lemmy.world 9 points 1 week ago (2 children)

Because it’s not worth inventing a whole tool for a one-time use. Maybe you’re the kind of person who has to spin up 20 similar Django projects a year and it would be valuable to you.

But for the average person, it's far more efficient to just have an LLM kick out the first 90% of the boilerplate and code up the last 10% themselves.

[–] Feyd@programming.dev 19 points 1 week ago (1 children)

I'd rather use some tool bundled with the framework that outputs code up to the current standards and patterns than a tool that will pull defunct patterns from its training data, make shit up, and make mistakes that are easily missed by a reviewer glossing over it.

[–] pennomi@lemmy.world 4 points 1 week ago (2 children)

I honestly don’t think such a generic tool is possible, at least in a Django context. The boilerplate is about as minimal as is possible while still maintaining the flexibility to build anything.

[–] mesamunefire@piefed.social 4 points 1 week ago* (last edited 1 week ago) (1 children)

I just use https://github.com/cookiecutter/cookiecutter and call it a day. No AI required. Probably saves me a good 4 hours at the start of each project.

Almost all my projects have the same kind of setup nowadays. But that's just work. For personal projects, I use a subset-ish. There's a custom Admin module that I use to make ALL classes into Django admin models, and it takes one import, boom, done.
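That auto-register trick is only a few lines; a sketch of the idea (not the actual module, and the file name autoadmin.py is made up) looks like this:

```python
# autoadmin.py -- register every model in the project with the Django admin.
# Importing this module once is enough to expose all models.
from django.apps import apps
from django.contrib import admin

for model in apps.get_models():
    # Leave models that already have a hand-written ModelAdmin alone.
    if not admin.site.is_registered(model):
        admin.site.register(model)
```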

[–] pennomi@lemmy.world 1 points 1 week ago

Sure, I’ve used that too in the past.

[–] Feyd@programming.dev 1 points 1 week ago

If it's as minimal as possible, then the responsible play is to write it thoughtfully and intentionally rather than use something that can let subtle errors slip through review.

[–] AdamBomb@lemmy.sdf.org 1 points 1 week ago (2 children)

“Not worth inventing”? Do you have any idea how insanely expensive LLMs are to run? All for a problem whose solution is basically static text with a few replacements?
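For scale, the "static text with a few replacements" version is a handful of lines of stdlib Python (the template contents here are a made-up example):

```python
# Deterministic boilerplate generation with nothing but the standard library.
from string import Template

MODEL_TEMPLATE = Template("""\
from django.db import models

class $name(models.Model):
    $field = models.CharField(max_length=$max_length)
""")

print(MODEL_TEMPLATE.substitute(name="Article", field="title", max_length=200))
```

Same output every single run, at effectively zero cost.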

[–] codeinabox@programming.dev 4 points 1 week ago (1 children)

Back in the day, I used CakePHP to build websites, and it had a tool that could "bake" all the boilerplate code.

You could use a snippet engine or templates with your editor, but unless you get a lot of reuse out of them, it's probably easier and quicker to use an LLM for the boilerplate.

[–] Feyd@programming.dev 8 points 1 week ago

Easier and quicker, but finding subtle errors in code that looks like it should be extremely hard to fuck up, because someone used an LLM for it, is getting really fucking old already, and I shudder at all the errors like that which are surely being missed. "It will be reviewed" is obviously not sufficient.

[–] expr@programming.dev 4 points 1 week ago (3 children)

All of that can be automated with tools built for the task. None of this is actually that hard to solve at all. We should automate away pain points instead of boiling the world in the hope that a linguistic, stochastic model will just so happen to accurately generate the tokens you want, all to save a few fucking hours.

The hubris around this whole topic is astounding to me.

[–] sentient_loom@sh.itjust.works 3 points 1 week ago (1 children)

Is it possible to use deterministic automation for some boilerplate instead of LLMs?

[–] pennomi@lemmy.world 5 points 1 week ago (1 children)

Sure but it’s a lot less flexible. As much hate as they get, LLMs are the best natural language processors we have. By FAR.

[–] eleijeep@piefed.social 2 points 1 week ago (1 children)

Code is not natural language.

[–] pennomi@lemmy.world 1 points 1 week ago

No, but the business requirements obviously are. Code does not exist in a vacuum.

[–] amju_wolf@pawb.social 3 points 1 week ago

They do make excellent rubber duckies.

[–] irelephant@programming.dev 20 points 1 week ago (1 children)

I've tried vibe coding two scripts before, and it's honestly brain-fog-inducing.

LLM coding won't be a thing after 2027.

[–] yes_this_time@lemmy.world 2 points 1 week ago (2 children)

What do you expect to replace LLM coding?

[–] irelephant@programming.dev 16 points 1 week ago (2 children)

I think that the interest in it will go away, and after the AI bubble pops most of the tools for LLM coding won't be financially viable.

[–] curiousaur@reddthat.com 4 points 1 week ago (1 children)

There are viable local models.

[–] irelephant@programming.dev 1 points 2 days ago

Sure, but I don't think those will be as popular. It's good that they exist, though.

[–] expr@programming.dev 3 points 1 week ago (3 children)

...regular coding, again. We've been doing this for decades now, and this LLM bullshit is wholly unnecessary and extremely detrimental.

The AI bubble will pop. Shit will get even more expensive or nonexistent (as these companies go bust, because they are ludicrously unprofitable), because the endless supply of speculative and circular investments will dry up, much like the dotcom crash.

It's such an incredibly stupid thing to not only bet on, but to become dependent on to function. Absolute lunacy.

[–] forrcaho@lemmy.world 9 points 1 week ago

I recently asked ChatGPT to generate some boilerplate code in C to use libsndfile to write out a WAV file, with samples from a function I would fill in. The code it generated cast the double samples from the placeholder function it wrote to floats, so it could use sf_writef_float to write them to the file. Having coded with libsndfile over a decade ago, I knew that sf_writef_double existed and would write my calculated sample values with no loss of precision. It probably wouldn't have made any audible difference to my finished result, but it was still obviously, stupidly inferior code for no reason.
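(If you want to see the same distinction without the C, it also surfaces in soundfile, the Python bindings for libsndfile; this is a sketch, not the code ChatGPT produced:)

```python
# soundfile wraps libsndfile; subtype="FLOAT" truncates float64 samples to
# 32 bits on write, the equivalent of the sf_writef_float call in question.
import numpy as np
import soundfile as sf

rate = 44100
t = np.arange(rate) / rate                    # one second of sample times
samples = 0.5 * np.sin(2 * np.pi * 440 * t)   # float64 sine wave

sf.write("lossy.wav", samples, rate, subtype="FLOAT")      # casts to 32-bit float
sf.write("lossless.wav", samples, rate, subtype="DOUBLE")  # keeps full precision
```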

This is the kind of stupid shit LLMs do all the time. I know I've also realized months later that some LLM-generated code I used was doing something in a stupid way, but I can't remember the details now.

LLMs can get you started and generate boilerplate, but if you're asking one to write code in a domain you're not familiar with, you have to understand that, if the code even works, it's highly likely that it's doing something in a boneheaded way.

[–] chicken@lemmy.dbzer0.com 8 points 1 week ago (2 children)

We’re replacing that journey and all the learning, with a dialogue with an inconsistent idiot.

I like this about it: it gets me to write down and organize my thoughts on what I'm trying to do and how. Otherwise I would just be writing code and trying to maintain the higher-level outline in my head, which usually has big gaps I don't notice until I've spent way too long spinning my wheels, or which otherwise fails to hold together. Sometimes an LLM will do things better than you would have, in which case you can just use that code. When it gives you code that is wrong, you don't have to use it; you can write it yourself at that point, after having thought about what's wrong with the AI's approach and how what you requested should be done instead.

[–] aev_software@programming.dev 2 points 1 week ago (1 children)

Try a rubber duck next time. Also, diagrams. Save a forest.

[–] chicken@lemmy.dbzer0.com 2 points 1 week ago

I use local models, and it barely doubles the electricity use of my computer while it's actively generating, which is a very small proportion of the time I'm doing work; the environmental impact is negligible.

[–] filister@lemmy.world 7 points 1 week ago

It's not only coding.

Idiocracy incoming in 3, 2, 1
