I was going to say "Who?" until I looked at his bio; he helped start Django, which I use. I need to go lay down.
Had a subscription, unsubscribed 6 months ago. Simplistically:
- They create bad code.
- You stop learning. You want to program? Learn.
Ignore the “AGI” hype—LLMs are still fancy autocomplete. All they do is predict a sequence of tokens—but it turns out writing code is mostly about stringing tokens together in the right order, so they can be extremely useful for this provided you point them in the right direction.
I'm just super happy to see someone talking about LLMs realistically without the "AI" bullshit.
If LLMs help people code, then that's great. But stop with the hype, grifting, etc. Kudos to this author for a reasonable take. Extremely rare.
I'm on the fence.
I've used Perplexity to take a JavaScript fragment, identify the language it was written in, and describe what it's doing. I then asked it to refactor it into something a human could understand. It nailed both of these; even the variable names were meaningful (the original ones were just single letters). I then asked it to port it to C and use SDL, which it did a pretty good job of.
I also used it to "untangle" some really gnarly mathy Javascript and port it to C so I could better understand it. That is still a work in progress and I don't know enough math to know if it's doing a good job or not, but it'll give me some ability to work with the codebase.
I've also used it to create some nice helper python scripts like pulling all repositories from a github user account or using YouTube's API to pull the video title and author data if given a URL. It also wrote the skeleton of some Python scripts which interact with a RESTful API. These kinds of things it excelled at.
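As a concrete example of the kind of helper it handles well, the GitHub part of such a script is only a few lines; a minimal sketch against the public REST API (unauthenticated, so it only sees public repos and is rate-limited; the `octocat` account is just a placeholder):

```python
import requests

def list_public_repos(user: str) -> list[str]:
    """Return clone URLs for a user's public repositories via the GitHub REST API."""
    repos, page = [], 1
    while True:
        resp = requests.get(
            f"https://api.github.com/users/{user}/repos",
            params={"per_page": 100, "page": page},
            timeout=10,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # an empty page means we've seen everything
            break
        repos.extend(repo["clone_url"] for repo in batch)
        page += 1
    return repos

print("\n".join(list_public_repos("octocat")))
```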
My most recent success was using it to decode DTMF in a .WAV file, then create a new .WAV file using the DTMF start/end times to create cue points to visually show me what it saw and where. This was a mixed bag: I started out with Python, it used FFT (which was the obvious but wrong choice), then I had it implement a Goertzel filter, which it did flawlessly. It even ported over to C without any real trouble. Where it utterly failed was the WAV file creation/cue points. Part of this is because cue points are rather poorly described in any RIFF documentation, the Python wrapper for the C wave processing library was incomplete, and various audio editors want the cue data in different ways, but this didn't stop the LLM from lying through its damn teeth about not only knowing how to implement it, but assuring me that the slop it created functioned as expected.
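For reference, the Goertzel part really is small, which is probably why the LLM handled it cleanly; a rough sketch of measuring one tone's power over a block of samples (block sizing, thresholding, and the row/column pairing that actually identifies a digit are left out):

```python
import math

# The eight DTMF tones: four row frequencies and four column frequencies (Hz)
DTMF_FREQS = (697, 770, 852, 941, 1209, 1336, 1477, 1633)

def goertzel_power(samples, target_hz, sample_rate):
    """Relative power of one target frequency in a block of audio samples."""
    coeff = 2.0 * math.cos(2.0 * math.pi * target_hz / sample_rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Usage: run a short block (roughly 20 ms) through goertzel_power for each tone,
# then pick the strongest row/column pair and compare it against a noise floor.
```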
I've found that it tends to come apart at the seams with longer sessions. When its answers start being nonsensical I sometimes get a bit of benefit from starting over without all the work leading up to that point. LLMs are really good at churning out basic frameworks which aren't exactly difficult but can be tedious. I then take this skeleton and start hanging the meat on it, occasionally getting help from the LLM but usually that's the stuff I need to think about and implement. I find that this is where LLMs really struggle, and I waste more time trying to explain what I want to the LLM than if I just wrote it myself.
I’ve almost completely stopped using them, unless I’m stuck at a dead end. In the end all they have done is slow me down and make me unable to think properly anymore. They usually write way too much code, especially with tab complete stuff, resulting in me needing to delete code after hitting tab (what’s the point even, intellisense has always been really good and now it’s somehow worse). They’re usually wrong unless prompted multiple times. People say you can use them to generate boilerplate, but just use a language with less or no boilerplate like Kotlin. There’s usually very subtle bugs they introduce or they’re solving a problem that is simply documented on stack overflow, while I wouldn’t be using an LLM if I could just kagi it, so they solve the wrong thing.
One thing it’s decent for, if you don’t care about code quality, is converting code to a language you do not know. You’re not going to end up with good idiomatic code at the end, but it will probably function.
None of this is to say that the LLMs aren’t amazing, but if you start to depend on them you very very quickly realize that your ability to solve more complex problems will atrophy. And then when you get to a difficult problem you now waste much more time trying to solve a problem that might have been simpler for past you.
My 2 cents
It's also trained on other people's code, so it may use outdated, inefficient, or otherwise bad code. If it were trained on my code, I'd like it much more.
It's funny, but I've had an LLM give me the wrong answer to my questions every time.
The first time, I couldn't remember how to read a file as a string in Python, and it got me most of the way there. But I trusted the answer, thinking "yeah, that looks right", when it was wrong: I just got the io object back and never called the read() function.
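For the record, the missing piece was just the .read() call; the usual patterns look something like this (the file name is only illustrative):

```python
from pathlib import Path

# open() returns a file object; .read() is what actually produces the string
with open("example.txt", encoding="utf-8") as f:
    text = f.read()

# equivalent one-liner via pathlib
text = Path("example.txt").read_text(encoding="utf-8")
```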
The other time it was an out-of-date answer. I asked it how to do a thing in Bevy and it gave me an answer that was deprecated. I can sort of understand that though, Bevy is new and not amazingly documented.
On a different note, my senior, who is all PHP, no Python, no bash, has used LLMs to help him write Python and bash. It's not the best code, and I've had to do optimisations on his bash code to make it run on CI without taking 25 minutes, but it's definitely been useful to him with Python and bash; he was hired as a PHP dev.
Your problem is you don't understand how LLMs work. You treat it like a magic genie when it's not. Treat it right and you can fly. I integrated a new messaging architecture into my stack the other day that would have taken me weeks before. But I isolated my problem set and targeted what I needed to target. I also understand what to tell it and how to utilize it as a tool.
In your case, it's trivial to just check the methods of the class or to know that it's a call you're accessing in the first place. The AI can't read your mind if you don't frame the problem correctly.
My experience is that use of an LLM is an amplifier to your output, but generally at no better quality than you can produce on your own.
The skilled developer who uses an LLM and checks its work will get a productivity boost without a loss in quality.
The unskilled developer who copy/pastes code from Stack Overflow can get even more sloppy code into production by using an LLM.
It took me a bit of time to figure out what an LLM is good at, because for most tasks it's just a plain waste of time.
I've recently been asked to help some folks on the biz side automate tasks "with AI", and I've had very good results having Gemini write Apps Script to do automations via Google Sheets.
For example HR wanted this for workshops they run "Write an Apps Script function to use the employee email list on sheet2 to create random teams of 4 and record the teams on sheet3. Use the template on sheet1 to write an email notification to each team and then send it."
Worked on the first try and the code is decent.
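The heart of what that script has to do is also pretty small, which probably helps; here's the grouping step sketched in plain Python rather than Apps Script, purely to illustrate the logic (the shuffle-then-chunk approach and the team size are assumptions taken from the prompt):

```python
import random

def make_teams(emails, team_size=4):
    """Shuffle the email list and split it into teams of team_size (last team may be short)."""
    pool = list(emails)
    random.shuffle(pool)
    return [pool[i:i + team_size] for i in range(0, len(pool), team_size)]

# Each sublist is one team: write it to sheet3, then fill in the sheet1
# template and send the notification email to its members.
```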
Google tools working well with Google tools. That's handy!
If I'm doing something in a language I only halfway know and rarely use in depth, I'll use them more. Bash scripting, for example, I use them for all the time. For Java I basically never touch them because I don't need them.
I mainly use it to create boilerplate (like adding a new REST API endpoint), or where I'm experimenting in a standalone project and am not sure how to do something (odd WebGL shaders), or when creating basic unit tests.
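To make "boilerplate" concrete, the endpoint case is roughly this shape; a minimal sketch using Flask (the framework and the /api/widgets route are assumed purely for illustration, not necessarily what's in use here):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
widgets = []  # in-memory stand-in for a real data store

@app.route("/api/widgets", methods=["GET", "POST"])
def handle_widgets():
    if request.method == "POST":
        widget = request.get_json()   # accept a JSON body
        widgets.append(widget)
        return jsonify(widget), 201   # echo it back with 201 Created
    return jsonify(widgets)           # GET: list everything stored so far

if __name__ == "__main__":
    app.run(debug=True)
```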
But letting it write, or rewrite existing code is very risky. It confidently makes mistakes, and rewrites entire sections of working code, which then breaks. It often goes into a "doom loop" making the same mistakes over and over. And if you tell it something it did was wrong and it should revert, it may not go back to exactly where you were. That's where frequently snapshotting your working code into git is essential, and being able to reset multiple files back to a known state will save your butt.
Just yesterday, I had an idea for a WebGL experiment. Told it to add a panel to an existing testing app I run locally. It did, and after a few iterations, got it working. But three other panels stopped working, because it decided to completely change some unrelated upstream declarations. Took 2x the time to put everything back to where it was.
Another thing to consider is that every X units of time, you'll want to go back and hand edit the generated material to clean up sloppy code. For example, inefficient data structures, duplicate functions in separate sections, unnecessarily verbose and obvious comments, etc. Also, better if using mature tech (with lots of training examples) vs. a new library or language.
If just starting out, I would not trust AI or vibe coding. Build things by hand and learn the fundamentals. There are no shortcuts. These things may look like super tools, but they give you a false sense of confidence. Get the slightest bit complex, and they fall apart and you will not know why.
Mainly using Cursor. Better results with Claude vs other LLMs, but still not perfect. Paid versions of both. Have also tried Cline with local codegen through Llama and Qwen. Not as good. Claude Code looks decent, but the open-ended cost is too scary for indie devs, unless you work for a company with deep pockets.