this post was submitted on 04 Apr 2025
9 points (80.0% liked)

Futurology

top 4 comments
[–] monogram@feddit.nl 1 point 16 hours ago* (last edited 16 hours ago)

Fuck AI

First they invalidated writers' work, to replace it with shit

Then they invalidated artists' work, to replace it with shit

Now they’re attempting to kill off programming and replace it with vibers

[–] sylver_dragon@lemmy.world 3 points 1 day ago

Kirk: Scotty AI, give me all she's got!
Scotty AI: Aye captain, I'm draining the First Officer's bank account and transferring it into your hidden account.
Kirk: No, wait, stop.
Scotty AI: Powering down engines.

[–] Terrarium@hexbear.net 3 points 1 day ago

"AI" is just pattern recognition and reproduction models. They routinely fail at basic tasks. It is like having an incompetent junior dev that you have to watch like a hawk because not only do they write bad code, they routinely steal it from copyrighted sources. It helps bad coders pretend to be good ones because their code follows a popular pattern but it is often the wrong pattern for solving the problem - a problem they didn't actually think about because they thought "AI" would solve it.

The real business case for LLMs in tech is as a propaganda tool for disciplining labor. Tech labor is in demand and therefore expensive. Executives and managers would like to push those wages down, and "AI" gives them a rhetorical means to do so, including a justification for rounds of firings. Productivity will crash because these are literally just layoffs to boost profits, not because any tasks have actually been automated.

This aligns with the general expectation of a market crash, which has been brewing for years and is now being more or less intentionally created. The big companies are building up "war chests" so that they can scoop up companies that fail and increase their monopolies during the later bailout period.

[–] bad_news@lemmy.billiam.net 2 points 1 day ago

Let me know when a model can consistently parse semantic negation.