this post was submitted on 24 Aug 2025
941 points (98.3% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.


Transcript: Screenshot of a Mastodon post by Kevin Beaumont: "Generative AI government lobbying."

Photo of AI/tech company CEOs, captioned:
We spent a trillion on NVDA GPUs and we don't have any AI product you want.

Photo of a crying man, captioned:
Please like our AI bro. This is the last time bro. So many possibilities bro. It's the future bro. Just need you to like it bro. We worked real hard bro. Our stockholders need this one bro.

[–] SomeAmateur@sh.itjust.works 126 points 6 days ago* (last edited 6 days ago) (8 children)

AI is a disruptive technology... just not in a way most people can use positively or productively. As entertainment it's a cool toy.

But its best use right now is manipulating public opinion and creating convincing but non-legitimate material. As a propaganda tool it is a dream come true.

[–] ch00f@lemmy.world 51 points 6 days ago (2 children)

I think it's cute when people say that some uses are good (cool toy), ignoring the fact that the current business model depends on it becoming much more than that.

The "AI is here to stay" crowd will evaporate as soon as they have to start paying.

[–] WoodScientist@lemmy.world 16 points 5 days ago (1 children)

The “AI is here to stay” crowd will evaporate as soon as they have to start paying.

And it goes further than that. Imagine if the hyperscalers actually succeed at their goals. They create truly useful agent models. But crucially to their profits, these models can only be built at scale. Their business model fails if someone learns to duplicate their work on sane levels of compute. So let's say everything goes OpenAI's way. They invent truly useful models, and the tech can only work at the massive scale they've invested in, so they don't have to worry about being cut off at the knees.

Now imagine you're a knowledge worker. A programmer. An engineer. An analyst. An editor. Really any job that is done sitting at a keyboard, manipulating data in one form or another. Now imagine you're a knowledge worker and you adopt these new models. You become dependent on them. Less and less of your technical skills actually reside in your own mind. The difference between your knowledge base and that of any rando off the street is now less and less. At some point, you're completely dependent on them to do the most basic functions of your job. Why shouldn't OpenAI charge you or your employer a license fee equal to half your salary? Why shouldn't your boss respond by cutting your salary in half and paying for the LLM license that way?

If the great fever dreams of the hyperscalers come true, basically every white collar employee in the country sees their labor bargaining power collapse. It would be like the decline of weaving as a profession in England at the start of the Industrial Revolution, which spawned the historical Luddite movement. What was once a skilled profession requiring years of formal apprenticeship became mechanized low-skilled labor that could be (and often was) done by literal children. In OpenAI's ideal future, that is what will happen to anyone who currently makes a living working at a keyboard.

It would devalue labor by making it less specialized. The only real skill left would be carefully stating prompts to the god machine. And if you're good at stating prompts toward one end, you're likely good at stating prompts across many domains. If we really had the kind of LLMs these companies dream of creating? Prompt engineering would become a mandatory class in high school. You wouldn't be able to graduate without learning how to use them. It would be as critical a skill as writing. But it would be a skill that everyone possessed, and thus one that commands little ability to earn a decent living.

[–] hitmyspot@aussie.zone 4 points 5 days ago (3 children)

And just as the mechanization of the loom was a good thing in the end, so would AI that powerful be. It would be hugely disruptive, of course. Self-driving, AI (LLMs), AGI, and robotics all have the potential to put huge numbers of people out of work, relatively quickly. If we allow the companies to hold the power, they will take the benefits for themselves. If we take it for society, we all win. This is why it's so important we have competition, and also why it's important we seriously talk about things like minimum wages, living wages, and UBI. When the jobs are already gone, it's already too late.

[–] TipsyMcGee@lemmy.dbzer0.com 10 points 5 days ago

If you dedicate society to inventing a technology that makes people superfluous, it makes zero sense to keep them alive. It's not a viable political project.

[–] thedruid@lemmy.world 4 points 5 days ago

Tech that harms us is never good.

AI is harmful to everything humans touch.

As is usual with anything humans touch.

[–] SugarCatDestroyer@lemmy.world 4 points 5 days ago

Yes, only this income will be digital, and every purchase you make will be tracked. As a result, at any moment they can simply block your account for disobedience or suspicious activity, and eventually you will die of hunger, unless you rob someone.

[–] JackbyDev@programming.dev 5 points 6 days ago (2 children)

They'll just throw ads in it like they always do.

[–] ch00f@lemmy.world 19 points 5 days ago* (last edited 5 days ago) (1 children)

Ads can barely cover video streaming services. Running an LLM costs orders of magnitude more. Even the paid tiers are starting to throttle usage.

[–] JackbyDev@programming.dev 2 points 5 days ago (1 children)

Why do you think they won't put ads in the paid tiers? They do that with virtually everything else.

[–] ch00f@lemmy.world 15 points 5 days ago* (last edited 5 days ago) (1 children)

They can try, but it won't be enough. The AI bubble is currently predicated on replacing $50k-a-year employees. Nothing short of that is profitable. Everything we have now is being funded by that bet.

[–] thedruid@lemmy.world 4 points 5 days ago

You're mistaken. That salary target is much higher.

They want to replace anyone not in the C-suite.

[–] valkyre09@lemmy.world 7 points 6 days ago (1 children)

Sure thing I can help you put together a 5 point plan on how to take over the world, but first here’s a word from our sponsor, NordVPN

[–] JackbyDev@programming.dev 6 points 5 days ago

More like

Let me help you make a five point plan with Trello. Open an account and make a new task

  1. Use NordVPN to protect browsing online
[–] Thekingoflorda@lemmy.world 27 points 6 days ago (1 children)

State actors used to need an office full of people to spread propaganda; now they can be so much more "effective" by running one server farm.

[–] Lodespawn@aussie.zone 1 points 5 days ago (1 children)

Yeah, setting up a secure, reliable and maintained server farm with working software and a functional upgrade plan for both software and hardware is going to cost at least as much as an office full of skilled people. Especially given you still need skilled people to provide input and interpret output, but now you also have no work for newbies to train their skills.

[–] Thekingoflorda@lemmy.world 2 points 5 days ago

Train their skills in what? Spreading misinformation?

I think one server farm running an LLM can output hundreds of times more crap than a full office can. Misinformation doesn't have to be high quality as long as it is repeated so often that dumb people start taking it as fact.

[–] bulwark@lemmy.world 25 points 6 days ago (1 children)

Yeah, I think the main thing it's accomplished so far is making everyone doubt the authenticity of literally all digital media.

[–] TipsyMcGee@lemmy.dbzer0.com 2 points 5 days ago

Yeah, it really poisoned the well for any human interaction on the internet and any form of creativity. I wonder if having everyone question what is real is a side effect or the very point.

And now, even if you write something people think is funny, good, poignant, whatever, they're left wondering whether there's anything uniquely human about it that an AI couldn't have generated. Human expression is basically dead already.

[–] mitch@piefed.mitch.science 18 points 6 days ago

It's like a monkey's paw bit.

WISH: I want a piece of software that can look at ANYTHING and then describe it in real-time detail for blind or visually impaired people.

REALITY: Blind people can navigate streets, but it's also now possible to conduct fraud on a cosmic level.

[–] some_designer_dude@lemmy.world 18 points 6 days ago

In this sense, it’s actually just highlighting how easily manipulated everything has always been. The printing press made it easy to duplicate the shit out of whatever words you wanted, and AI just made it way more attainable than it’s ever been to create even higher fidelity falsehoods. The internet itself should take the “blame” for the proliferation of propaganda.

But the root problem is the ruling class making it as hard as possible to get a good education unless you're already one of them. An educated populace would be far more resistant to all this.

[–] breecher@sh.itjust.works 6 points 5 days ago* (last edited 5 days ago)

But its best use right now is manipulating public opinion and creating convincing but non-legit material. As a propaganda tool it is a dream come true

And that is why it is here to stay. Lots of these AI startups may crash and burn when/if the bubble bursts, especially the "this is a cool toy" companies. But companies making disinformation and surveillance their main AI focus will become huge.

[–] SugarCatDestroyer@lemmy.world 5 points 5 days ago

Well, who was funding this shit? The big corporations, right? AI is a tool of control and power, and that's essentially why they poured crazy amounts of money into finding a way to control everyone.

[–] Delphia@lemmy.world 2 points 5 days ago

It has so many potential good uses for society as a whole but they led with the predatory shit.