I use the Jetbrains AI Chat with Claude and the AI autocomplete. I mostly use the AI as a rubber duck when I need to work through a problem. I don't trust the AI to write my code, but I find it very useful for bouncing ideas off of and getting suggestions on things I might have missed. I've also found it useful for checking my code quality but it's important to not just accept everything it tells you.
I am still relatively inexperienced and only work in embedded (electronics by trade). I am working on an embedded project with Zephyr now.
If I run into a problem I roughly follow this method (e.g. trying to figure out when to use mutexes vs semaphores vs library header file booleans for checking):
- First, look in the Zephyr docs at mutexes and see if that clears it up.
- Second, search Ecosia/DDG for things like "Zephyr when to use global boolean vs mutex in thread syncing".
- If neither of those works, I ask AI, and it often gives enough context that I can judge whether its answer is logical or not. In this case, it was better to use a semi-global boolean to check whether a specific thread had seen the next message in the queue, and to protect the boolean with a mutex so I'd know if that thread was currently busy processing the data. But it also gave options like using a "gate check" instead of a mutex, which is dumb because that doesn't exist in Zephyr.
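The pattern described above (a busy flag protected by a mutex, checked by other threads while a worker drains a message queue) can be sketched roughly as follows. This is a minimal illustration in Python for brevity, not the actual Zephyr code; in Zephyr the equivalents would be `k_mutex` and `k_msgq`, and the flag, queue, and function names here are invented for the example.

```python
import threading
import queue

msgq = queue.Queue()
flag_lock = threading.Lock()   # mutex protecting `busy`
busy = False
results = []

def worker():
    global busy
    while True:
        msg = msgq.get()
        if msg is None:          # sentinel value: shut down
            break
        with flag_lock:
            busy = True          # mark: currently processing a message
        results.append(msg * 2)  # stand-in for real processing
        with flag_lock:
            busy = False         # done with this message

def worker_is_busy():
    with flag_lock:              # always read the flag under the mutex
        return busy

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    msgq.put(i)
msgq.put(None)
t.join()
print(results)  # [0, 2, 4]
```

The key point is that the boolean is only ever read or written while holding the mutex, so a reader never sees a torn or stale in-between state.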
 
For new topics, if I can't find a video or application note that doesn't assume too much knowledge or use jargon I am not yet familiar with, I will use AI to get familiar with the basic concept and the terms, so that I can then move on to other, better resources.
In engineering and programming, jargon is constant, and it makes topic introductions quite difficult when they don't explain the terms up front.
I never use it for code, with the exception of ingested codebases that have no documentation of all the available keys, or cases like Zephyr where the macro magic makes it very difficult to navigate to what the code actually does and often isn't documented at all.
I’ve tried chat prompt coding two ways. One, with a language I know well. It didn’t go well: it insisted that an API existed which had been deprecated and then removed around 2020, but I didn’t know that and I lost a lot of time. I also lost time because the code was generally good, but it wasn’t mine, so I didn’t have a great understanding of how it flowed. I’m not a professional dev, so I’m not really used to reading and expanding on others’ code. However, the real problem is that it did some stuff that was just not real, and it wasn’t obvious. I got it to write tests (something I have been meaning to learn to do) and every test failed; I’m not sure if it’s the tests or the code, because the priority for me at the time was getting code out, not the tests. I know, I should be better.
I’ve also used it with a language I don’t know well to accomplish a simple task - basically vibe coding. That went OK as far as functionality, but based on my other experience I suspect it’s illegible, questionably written, and not very stable code.
The idea that it’ll replace coders in a meaningful way is not realistic at the current level. My understanding of how LLMs work is incomplete, but I don’t think the hallucinations are easily overcome.
Snippets and architecture design ideas
Nope, nothing. I wanna keep the ability to pinpoint which part of my code is causing which error, and offloading my work to AI would make that both less fun and harder.
I'm an electrical engineer who has become a proprietary cloud-tool admin. I occasionally use an LLM (ChatGPT web) to write VBA code to do various API calls and transform Excel/JSON/XML/CSV data from one format to another for various import/export tasks that would otherwise eat up my time.
I just use the chat, and copy/paste the code.
I spend an hour meticulously describing the process I need the code to perform, and then another hour or two testing, debugging, and polishing, and I get a result that would take me days to produce by myself. I then document the code (I try to use lots of sub-modules that can be reused) so that I can use the LLM less in the future.
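The kind of format conversion described here can be sketched in a few lines. This is a hypothetical example in Python rather than the commenter's actual VBA, showing one common case (CSV rows turned into JSON records for an import/export task); the function name and sample data are invented for illustration.

```python
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    # DictReader uses the header row as keys for each record
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

sample = "id,name\n1,Alice\n2,Bob\n"
print(csv_to_json(sample))
```

The real value of wrapping each conversion in a small function like this is reuse: the same sub-module can serve several import/export tasks, which is exactly what reduces the need to ask the LLM again later.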
I don't feel great about the environmental impact, which is why I try to limit the usage and do debugging and improvements by myself. I'm also trying to push management to invest in a lean LLM that runs on the company's servers, and I'm looking into getting a better PC privately, which I could also use to run a local LLM for work.
My company has internally hosted AI. I use the web interface to copy/paste info between it and my IDE. So far I have gotten the best results from uploading the official Python documentation and the documentation for the framework I am using. I then specify my requirements, review the output, and either use the code or request a new revision with information on what I want it to correct. I generally request smaller, focused bits of code, though that may be for my own benefit, so I can make sure I understand what everything is doing.
I’ve used it when I’ve found myself completely stumped by a problem and I don’t know exactly how to search for the solution. I’m building a macOS app, and unfortunately a lot of the search results are for iOS, even when I exclude iOS from the results (e.g. how to build a window with tabs, like Safari tabs, but all the results come up for iOS’s TabView).
I use the generator function in Databricks for Python to save myself a lot of typing, but autocomplete functions drive me crazy.
I sometimes use a chatbot as a search engine for poorly documented or otherwise hard-to-find functionality. Asking how to do x in y usually points me in the right direction.
Do you consider the environmental damage that use of AI can cause?
My use is so little that no. For AI bullshit in general, yes.
I use autocomplete, asking chat, and full agentic code generation with Cline & Claude.
I don't consider environmental damage by AI because it is a negligible source of such damage compared to other vastly more wasteful industries.
I am primarily interested in text generation. The only use I have for generated pictures, voices, video or music is to fuck around. I think I generated my D&D character portrait. My last portrait was a stick man.
What do I ask it to do? My ChatGPT history is vast and covers everything from "how is this word used" to "what drinks can I mix given what's in my liquor cabinet" to "analyze code for me" to "my doctor ordered some labs and some came back abnormal, what does this mean? Does this test have a high rate of false positives?" to "someone wrote this at me on the internet, help me understand their point" to "someone wrote this at me on the internet, how can I tell them to fuck the fuck off... nicely?" And I write atrocious fiction.
Oh, I use it a lot to analyze articles: identify bias and reliability, look up other related articles, flag things that sound bad but really don't mean anything, and point out gaps in the journalistic process (i.e. shoddy reporting).
I also wrote a Discord dungeon master bot. It works poorly due to aggressive censorship and slop on OpenAI.
At work, I still use JetBrains' AI, including the in-line, local code completion. Though it (or rather the machines at work) is so slow that 99% of the time I've already written everything out before it can suggest something.
I use the JetBrains AI like a search engine when the web has no obvious answer. Most of the time it gives me a good starting point, and the answer is adjusted to the existing code.
It can also translate snippets from one language or framework to another. For (a fake) example, translating from Unity in Python to Vulkan in C++.
I also use it to analyze shitty code from people who left the company a long time ago. Refactoring and cleaning up obscure stuff, like deeply hidden variables or things that would take days to analyze, can be done in minutes.
I use it once a day at most.
Nobody uses "AI" because it doesn't exist.
Nobody in this thread is talking about any program that's remotely "intelligent".
As far as technologies falsely hyped as "AI", I use Google's search summaries. It's usually quicker than clicking through to the actual sources, but I still have that option as needed.