this post was submitted on 16 Oct 2025
Technology
Bad news, baby. The New Yorker reports the rapid advance of AI in the workplace will create a “permanent underclass” of everyone not already hitched to the AI train.

The prediction comes from OpenAI employee Leopold Aschenbrenner, who claims AI will “reach or exceed human capacity” by 2027. Once it develops the capacity to innovate, AI superintelligence will supersede even the need for its own programmers … and then wipe out the jobs done by everyone else.

Nate Soares, winner of “most sunshine in a book title” and co-author of the AI critique If Anyone Builds It, Everyone Dies, suggests “people should not be banking on work in the long term”. Math tutors, cinematographers, brand strategists and journalists are quoted by the New Yorker, freaking out.

The consolation here is that if you are among those panicking about being forced into the permanent underclass, you are already in it. Inherited wealth makes more billionaires than entrepreneurship does, and the opportunity gap is growing; if your family don’t have the readies to fund your tech startup, media empire or eventual presidential ambitions, it’s probably because they were in a tech-displaced underclass, too.

top 16 comments
[–] tal@lemmy.today 17 points 1 day ago* (last edited 1 day ago) (1 children)

The prediction comes from OpenAI employee Leopold Aschenbrenner, who claims AI will “reach or exceed human capacity” by 2027.

I suppose that it depends on the metric you're using. There are some tasks at which humans are outperformed now.

But I am pretty comfortable saying that come January 2027, the great bulk of things that humans do will continue to not be able to be done by existing AI.

We aren't going to just tweak an existing LLM somewhere slightly, throw a bit more hardware at it, and get general intelligence.

[–] Powderhorn@beehaw.org 10 points 1 day ago (1 children)

As education and expectations of humans decline, LLMs may come to look like an "improvement" over human drones in the future not because the tech is getting better, but because the human baseline is getting worse.

[–] Megaman_EXE@beehaw.org 3 points 1 day ago (1 children)

I think that's the scariest part. The idea that people will stop learning.

[–] Powderhorn@beehaw.org 3 points 1 day ago (1 children)

Learning is just a woke mindvirus.

[–] Megaman_EXE@beehaw.org 2 points 1 day ago (1 children)

Lol that could be the newest horror movie for Halloween. "Education on Elm Street".

[–] Powderhorn@beehaw.org 1 points 1 day ago

Makes for a good double feature with April the Fifteenth.

Leopold Aschenbrenner (born 2001 or 2002[1]) is a German artificial intelligence (AI) researcher and investor. He was part of OpenAI's "Superalignment" team before he was fired in April 2024 over an alleged information leak, which Aschenbrenner disputes. He has published a popular essay called "Situational Awareness" about the emergence of artificial general intelligence and related security risks.[2] He is the founder and chief investment officer (CIO) of Situational Awareness LP, a hedge fund investing in companies involved in the development of AI technology.

Wikipedia

So, I'm calling bullshit. I've read the papers and kept up with the field. I run AI models myself, and I've built my own agents and agentic workflows. It keeps coming back to a few big things that, unless there's suddenly been another massive breakthrough, I don't see happening.

  • LLMs have already been trained on the vast majority of the data out there, and they still hallucinate. The scaling literature says it takes exponentially more data to improve them on a linear trajectory; to get double the performance, we'd need something like the current amount of data squared.
  • LLMs and agentic flows are very cool, and very useful for me. But they're incredibly unreliable, and that's just how models work: it's a black box. You can say "that didn't work" and it'll weight that option down next time, but the error rate is never going to be zero. Businesses are learning (see Taco Bell and several others) that AI is not code. It is not true or false; it's probably true or probably false. That doesn't work when you're putting in an order or deciding how much money to spend.
  • We've certainly plateaued with AI for the time being. There will be more releases, but until the next major leap we're pretty much here. GPT-5 proved that: it was mediocre, it was... the next version. They promised groundbreaking, but there just isn't any more ground to break with current AI. Like I said, agents were kind of the next thing, and we're already using them.
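The "probably true or probably false" point in the list above can be made concrete with a toy probability sketch (my illustration, not the commenter's code; the 99% accuracy figure, retry count, and order volume are made-up numbers):

```python
def failure_rate(p_correct: float, retries: int) -> float:
    """Chance that the first try and every retry are all wrong."""
    p_wrong = 1.0 - p_correct
    return p_wrong ** (1 + retries)

# A step that's right 99% of the time, retried twice, still fails
# about one call in a million...
per_call = failure_rate(0.99, 2)  # 0.01 ** 3 = 1e-6

# ...so across a million automated orders, at least one bad order is
# more likely than not (~63%). A deterministic code path stays at 0%.
p_at_least_one_bad = 1.0 - (1.0 - per_call) ** 1_000_000
```

The point of the sketch is that retries shrink the per-call error but never eliminate it, so at business scale some failures are essentially guaranteed.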
[–] SpikesOtherDog@ani.social 14 points 2 days ago (1 children)

OpenAI employee Leopold Aschenbrenner, who claims AI will “reach or exceed human capacity” by 2027

Sure, but is that before or after full self driving cars?

[–] fullsquare@awful.systems 9 points 2 days ago (1 children)

right after fusion power goes commercial, which also will power it all

[–] SpikesOtherDog@ani.social 4 points 2 days ago (2 children)

I was expecting more progress after the two big fusion milestones recently. My question now is what's limiting these reactions.

[–] Powderhorn@beehaw.org 1 points 14 hours ago (1 children)

So far as I'm aware, magnetic fields. But that's for the best ... uncontrolled fusion doesn't sound like a good path forward.

[–] SpikesOtherDog@ani.social 1 points 10 hours ago

Technically correct, but in this case I'm wondering what is stopping fusion from continuing longer than about 20 minutes.

Is it like rail gun technology, where the hardware fails after so much use and there is no realistic way to keep it in tolerance?

Are we having trouble keeping a balance between fuel and the reaction, causing it to fizzle out or breach containment?

Is the fuel actually too expensive?

Is all this OK, and are we now stuck with a scaling issue where the reaction can sustain itself, but we can't actually use it to boil more than a kettle?

[–] fullsquare@awful.systems 3 points 2 days ago

it's straight up still not good enough

[–] MelodiousFunk@slrpnk.net 12 points 2 days ago (1 children)

if you are among those panicking about being forced into the permanent underclass, you are already in it.

Well, yeah. And it's been like that for many generations. AI is just the next big thing. But it's nice to remind people every once in a while.

[–] Powderhorn@beehaw.org 1 points 14 hours ago

Sort of the reverse of Groucho having no interest in any organization that would have him as a member.

[–] CaptainBlinky@lemmy.myserv.one 3 points 1 day ago* (last edited 1 day ago)

People without money are replaceable. People with money are not. What a stupid statement. MONEY IS WORTH. You could be the most talented engineer in the world, the modern Einstein of science. Unless you have capital or create capital, you're an insect who will be squashed. I love the idea, but AI and automation are about to make 90% of the world's population obsolete. And NOBODY is training LLMs on UBI or universal healthcare LMAO we're cooked, and reporters know it, but there's no money in saying it, so even now the media is trying to milk the last possible money from this dying system.

I never did expect it to be Orwell x Terminator, but here we are. Enjoy the boot on our neck forever! I figure we give it like 6 years before eugenics comes back, because there's no need for so many proles in the new automated world. We'll just be resource drags that would be better replaced with renewable machines. Let's say 2030 before the mass killings begin. Great job MAGA.