[–] ALoafOfBread@lemmy.ml -4 points 15 hours ago* (last edited 14 hours ago) (3 children)

Most of the article is paywalled, but the main points seem to be that AI work is less creative and lower quality, and that people spend more time fixing it than they would have spent doing it themselves.

That has not been my experience. On the 'less creative' point, that's true: I don't think LLMs can be creative. But they can summarize information or rephrase and expand on things I say based on provided context, so I spend much less time on formatting and drafting text-based documents. I can have an agent draft things for me and then just tidy them up.

As for low-quality work products, again, not my experience. I regularly use agentic AI to automate simple but repetitive business tasks that would take me much longer to code up myself. I am not an engineer, I am an analyst/consultant. I can code some things, but it is often not worth the time investment (many tasks are one-offs, etc).
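As a purely hypothetical sketch of the kind of one-off task I mean (file and column names made up), an agent might generate something like this to roll a folder of daily CSV exports into one summary:

```python
# Hypothetical one-off task: merge a month of daily CSV exports
# into a single summary file. Names are made up for illustration.
import glob
import pandas as pd

# Collect every daily export dropped in the reports folder.
frames = [pd.read_csv(path) for path in sorted(glob.glob("reports/export_*.csv"))]
merged = pd.concat(frames, ignore_index=True)

# Roll the line items up to one row per customer.
summary = merged.groupby("customer_id", as_index=False)["amount"].sum()
summary.to_csv("monthly_summary.csv", index=False)
```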

A friend of mine built an AI agent that can interpret pictures of charts and find the supporting data in our databases (to figure out what other teams referenced for their analyses), and/or copy the chart and modify it. It can also create seaborn charts from text descriptions using data from our database. Now a team of non-technical users can make seaborn charts without having to know Python. That is pretty powerful in terms of saving time and expanding productivity.
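To give a sense of what that looks like, here is a minimal sketch of the kind of code such an agent might generate from a prompt like "bar chart of revenue by quarter, split by region". The connection string, table, and column names are all hypothetical:

```python
# Minimal sketch of the kind of code such an agent might emit.
# The connection string, table, and column names are hypothetical.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sqlalchemy import create_engine

# Pull the requested data from the team database (connection assumed).
engine = create_engine("postgresql://user:pass@host/analytics")
df = pd.read_sql("SELECT region, quarter, revenue FROM sales_summary", engine)

# Render the chart described in the user's text prompt.
sns.barplot(data=df, x="quarter", y="revenue", hue="region")
plt.title("Revenue by Quarter and Region")
plt.tight_layout()
plt.savefig("revenue_by_quarter.png")
```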

It's easy to shit on the tech, but it has legitimately useful applications that help productivity.

Edit: downvote if you want, but it is ignorant to say that LLMs only produce garbage. It very much depends on the user and on the application.

[–] quetzaldilla@lemmy.world 8 points 14 hours ago (1 children)

AI made a $2M mistake at the public accounting firm I worked at.

Management responded by blaming and firing an entire team for not double-checking the AI output, even though doing so was literally impossible given the volume of the output and the team's lack of experience.

This will be you, sooner or later.

[–] ALoafOfBread@lemmy.ml -1 points 14 hours ago

I understand your perspective, but I do review the code. I also do extensive testing. I don't use packages I'm unfamiliar with. I still read the docs. I don't run code I don't understand.
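As a hypothetical illustration, this is the kind of check I write before trusting an agent-written helper (the function and test cases are made up):

```python
# Hypothetical example: an agent-generated helper plus the tests I would
# write before trusting it. Function name and cases are made up.
import pytest

def normalize_name(raw: str) -> str:
    """Agent-written helper: trim whitespace and title-case a customer name."""
    return " ".join(raw.split()).title()

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("  ACME corp  ", "Acme Corp"),  # stray whitespace, mixed caps
        ("acme corp", "Acme Corp"),      # lowercase input
        ("", ""),                        # empty input passes through unchanged
    ],
)
def test_normalize_name(raw, expected):
    assert normalize_name(raw) == expected
```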

Again, the quality of the output really comes down to the user and the application. It is still faster for me to do what I've outlined above, and it makes the ROI positive on automating tasks that otherwise wouldn't be worth it.

[–] SpaceNoodle@lemmy.world 3 points 14 hours ago (1 children)

"It's not garbage if I can't tell it's garbage!"

[–] ALoafOfBread@lemmy.ml 0 points 14 hours ago

No, that's literally nothing like what I said. It could still be garbage if you didn't understand or review the output. That's why you understand and review the output.

[–] Eheran@lemmy.world 2 points 13 hours ago

Lemmy is mostly anti-LLM, hence the downvotes, regardless of how you use it.