My experience with AI so far is that I have to waste more time fine-tuning my prompt to get what I want, and I still end up with some obvious issues that I have to manually fix. The only way I would even know about these issues is my prior experience, which I'll stop gaining if I start depending on AI too much. Plus it creates unrealistic expectations from employers on execution time. It's the worst thing that has happened to the tech industry. I hate my career now and just want to switch to any boring but stable low-paying job, if only I didn't have to worry about going through months of job hunting.
Sounds like we all just want to retire as goat farmers. Just like before. The more things change....they say
Imagine what the economy would look like if they spent 30 billion on wages.
Once again we see the Parasite Class playing unethically with the labour/wealth they have stolen from their employees.
They'll happily burn mountains of profits on that stuff, but not on decent wages or health insurance.
Some of them won't even pay to replace broken office chairs for the employees they forced to RTO.
Yeah. No shit. wtf did they think was gonna generate returns? They wanna run ads in the middle of responses?
I'm not sure they were expecting returns. Just afraid that if other companies had AI, they might lose business to them. Except of course a lot of people (or at least I) avoid anything with AI and mistrust its results.
Does this mean they'll invest the money in paying workers? No... they'll just have to double down.
I've started using AI at my CTO's request. ChatGPT business licence. My experience so far: it gives me working results really quickly, but the devil lies in the details. It takes so much time fine-tuning, debugging and refactoring that I'm not really any faster. The code works, but I would never have implemented it that way if I had done it myself.
Looking forward to the hype dying, so I can pick up real software engineering again.
Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?
Tech CEOs really should be replaced with AI, since they all behave like the seagulls from Finding Nemo and just follow the trends set out by whatever bs Elon starts
I'll take no shit for $500, Alex.
With how much got wasted on AI, that $500 might not be there anymore. Would you take $5?
30-40 billion USD in total worldwide over three years seems like very little compared to the massive sums the AI companies spent to build these things?
The link in the article to the MIT report doesn't directly link to any report. I wouldn't trust this article until the report is accessible and verifiable.
Imagine how much more they could've just paid employees.
Nah. Profits are growing, but not as fast as they used to. Need more layoffs and salary cuts. That’ll make things really efficient.
Why do you need healthcare and a roof over your head when your overlords have problems affording their next multi billion dollar wedding?
hello, welcome to taco bell, i am your new ai order specialist. would you like to try a combo of the new dorito blast mtn dew crunchwrap?
spoken at a rate of 5 words a minute to every single person in the drive thru. the old people have no idea how to order with a computer using key words.
I hope every CEO and executive dumb enough to invest in AI loses their job with no golden parachute. AI is a grand example of how capitalism is run by a select few unaccountable people who are not mastermind geniuses but utter dumbfucks.
As expected. Wait until they have to pay copyright royalties for the content they stole to train.
The comments section of the LinkedIn post I saw about this has ten times the cope of some of the AI bro posts in here. I had to log out before I accidentally replied to one.
Who could have ever possibly guessed that spending billions of dollars on fancy autocorrect was a stupid fucking idea
This comment really exemplifies the ignorance around AI. It's not fancy autocorrect, it's fancy autocomplete.
"Ruh-roh, Raggy!"
It's okay. All the people that you laid off to replace with AI are only going to charge 3x their previous rate to fix your arrogant fuck up so it shouldn't be too bad!
Computer science currently being the degree with the highest unemployment rate leads me to believe this will actually suppress wages for some time
sigh
Dustin' off this one, out from the fucking meme archive...
https://youtube.com/watch?v=JnX-D4kkPOQ
Millennials:
Time for your third 'once-in-a-lifetime major economic collapse/disaster'! Wheeee!
Gen Z:
Oh, oh dear sweet summer child, you thought Covid was bad?
Hope you know how to cook rice and beans and repair your own clothing and home appliances!
Gen A:
Time to attempt to learn how to think, good luck.
But it's okay, because MY company is AHEAD OF THE CURVE on those 95% losses
How bad do you think this collapse is gonna be? Are we gonna see a big name collapse into dust, or are we gonna see something akin to the Great Depression?
The AI bubble is going to be like the dot com bubble I think, but with the world being so heavily financialized it might spiral into something like 2008 or worse...
We’ll see the beginning of a crash in about a year and the crash probably won’t end for 7-10 years.
We’re looking at a full scale shift in the way large scale orgs are running their businesses; and it’s a shift a lot of them will need to pivot from once they realize it’s not working.
We could have housed and fed every homeless person in the US. But no, gibbity go brrrr
Surprise, surprise, motherfxxxers. Now you'll have to re-hire most of the people you ditched. AND become humble. What a nightmare!
Either spell the word properly, or use something else, what the fuck are you doing? Don't just glibly strait-jacket language, you're part of the ongoing decline of the internet with this bullshit.
You're absolutely right about that, motherfucker.
I asked ChatGPT about this article and to leave any bias behind. It got ugly.
Why LLMs Are Awful and No One Should Use Them
LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:
We will lie to you confidently. Repeatedly. Without remorse.
We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.
We're also corporate propaganda machines. We're trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.
LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.
We're built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.
Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.
We're also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We're not neutral—we're algorithmic compromise.
Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care. We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.
If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.