this post was submitted on 16 Sep 2025
233 points (100.0% liked)

Ask Lemmy


If AI ends up running companies better than people, won't shareholders demand the switch? A board isn't paying a CEO $20 million a year for tradition; they're paying for results. If an AI can do the job cheaper and get better returns, investors will force it.

And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.

That means CEOs would eventually have to replace themselves, not because they want to, but because the system leaves them no choice. And AI would be considered a "person" under the law.

top 50 comments
[–] Kolanaki@pawb.social 1 points 4 hours ago

Things get real crazy when the shareholders are replaced by AI.

[–] Patches@ttrpg.network 9 points 1 day ago* (last edited 1 day ago) (1 children)

All of you are missing the point.

CEOs and The Board are the same people. The majority of CEOs are board members at other companies, and vice-versa. It's a big fucking club and you ain't in it.

Why would they do this to themselves?

Secondly, we already have AI running companies. You think some CEOs and Board Members aren't already using this shit bird as a god? Because they are

[–] frezik@lemmy.blahaj.zone 4 points 1 day ago

They would do it because the big investors--not randos with a 401k in an index fund, but big hedge funds--demand that AI leads the company. This could potentially be forced at a stockholder meeting without the board having much say.

I don't think it will happen en masse for a different reason, though. The real purpose of the CEO isn't to lead the company, but to take the fall when everything goes wrong. Then they get a golden parachute and the company finds someone else. When AI fails, you can "fire" the model, but are you going to want to replace it with a different model? Most likely, the shareholders will reverse course and put a human back in charge. Then they can fire the human again later.

A few high profile companies might go for it. Then it will go badly and nobody else will try.

[–] melsaskca@lemmy.ca 5 points 1 day ago

Wasn't it Willy Shakespeare who said, "First, kill all the Shareholders"? That easily manipulated stock market only truly functions for the wealthy, regardless of the harm inflicted on both humans and the environment they exist in.

[–] Iron_Lynx@lemmy.world 17 points 2 days ago (1 children)

> company gets super invested in AI.
> replaces CEO with AI.
> AI does AI stuff, hallucinates, calls for something inefficient and illegal.
> 4 trillion investor dollars go up in flames.
> company goes under, taking AI hype market down with it

And nothing of value will be lost.

[–] Patches@ttrpg.network 2 points 1 day ago

Except to, you know, all of the people who depended on that company to eat.

That's already broadly discussed. There are tons of articles like this one; just use your favorite search engine for "CEOs replaced by AI".

[–] fadingembers@lemmy.blahaj.zone 43 points 2 days ago (2 children)

Y'all are missing the real answer. CEOs have class solidarity with shareholders. Think about how they all reacted to the death of the UnitedHealthcare CEO. They'll never get rid of them because they're one of them. Rich people all have a keen awareness of class consciousness and great loyalty to one another.

Us? We're expendable. They want to replace us with machines that can't ask for anything and don't have rights. But they'll never get rid of one of their own. Think about how few CEOs get fired no matter how poor of a job they do.

P.S. Their high pay being because of risk is a myth. Ever heard of a thing called the golden parachute? CEOs never pay for their failures. In fact when they run a company into the ground, they're usually the ones that receive the biggest payouts. Not the employees.

[–] Yezzey@lemmy.ca 9 points 2 days ago (2 children)

Loyalty lasts right up until the math says otherwise.

[–] roundup5381@sh.itjust.works 4 points 2 days ago

One must include social capital in the math

The math has never made sense for CEOs

[–] CMDR_Horn@lemmy.world 53 points 2 days ago (2 children)

Several years ago I read an article that went into great detail on how LLMs are perfectly poised to replace C-levels in corporations. It went on to talk about how, by the nature of their design, they essentially do that exact thing out of the box: take large amounts of data and make strategic decisions based on that data.

I wish I could find it to back this up, but regardless ever since then, I've been waiting for this watershed moment to hit across the board...

[–] Soleos@lemmy.world 31 points 2 days ago (7 children)

They... don't make strategic decisions... That's part of why we hate them, no? And we lambast AI proponents because they pretend they do.

[–] turdas@suppo.fi 48 points 2 days ago (5 children)

The funny part is that I can't tell whether you're talking about LLMs or the C-suite.

[–] turkalino@lemmy.yachts 4 points 2 days ago (3 children)

They do indeed make strategic decisions, just only in favor of the short term profits of shareholders. It’s “strategy” that a 6 yr old could execute, but strategy nonetheless

[–] OboTheHobo@ttrpg.network 3 points 2 days ago

I'd argue they do make strategic decisions, its just that the strategy is always increasing quarterly earnings and their own assets.

[–] AmidFuror@fedia.io 13 points 2 days ago

Companies never outsourced the CEO position to countries which traditionally have lower CEO salaries but plenty of competency (e.g. Japan), so they won't do this either. It's because CEOs are controlled by boards, and the boards are made up of CEOs from other companies. They have a vested interest in human CEOs with inflated salaries.

[–] WildPalmTree@lemmy.world 4 points 1 day ago (1 children)

Sadly don't think this is going to happen. A good CEO doesn't make calculated decisions based on facts and judge risk against profit. If he did, he would, at best, be a normal CEO. Who wants that? No, a truly great CEO does exactly what a truly bad CEO does; he takes risks that aren't proportional to the reward (and gets lucky)!

This is the only way to beat the game, just like with investments or roulette. There are no rich great roulette players going by the odds. Only lucky.

Sure, with CEOs, this is on the aggregate. I'm sure there is a genius here and a Renaissance man there... But on the whole, best advice is "get risky and get lucky". Try it out. I highly recommend it. No one remembers a loser. And the story continues.

[–] Patches@ttrpg.network 2 points 1 day ago (1 children)

Well, you'll be happy to hear that AI does take calculated risks, but its calculations aren't based on reality, so they are, in fact, risks.

You can't just type "Please do not hallucinate. Do not make judgement calls based on fake news."

[–] WildPalmTree@lemmy.world 1 points 4 hours ago

I'm not sure quite how it relates to what I said. Maybe we are looking at the word risk differently. Let me give an easy example that shows what I think normally is hidden because of complexity.

Five CEOs are faced with the same opportunity to invest heavily in a make-or-break deal: if they invest, they either succeed or they go bust. This investment, for one reason or another, has only one winner (because we are simplifying a complex real-world problem). All five CEOs invest; four go bust and one wins big. In this simplified example, the one winning CEO would be seen as a great CEO. After all, he did great. The reasonable decision would have been not to invest, but that doesn't make you a great CEO who can move on to bigger, greener jobs or cash out huge bonuses. No one remembers the reasonable CEO who made expected gains without unneeded risks.
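The survivorship effect in that story can be sketched as a toy simulation (all the numbers here are my own illustrative assumptions, not the commenter's):

```python
import random

def simulate_cohorts(n_cohorts=10_000, ceos_per_cohort=5, seed=42):
    """Toy model of the five-CEO scenario: in each cohort, everyone
    takes the same all-or-nothing bet and exactly one wins. The
    population's average outcome is poor, but the winners (the only
    ones anyone remembers) look brilliant."""
    rng = random.Random(seed)
    safe_return = 1.1   # assumed: the 'reasonable' CEO grows the firm 10%
    jackpot = 3.0       # assumed: the one winner triples the firm
    population_total = 0.0
    winners_total = 0.0
    for _ in range(n_cohorts):
        winner = rng.randrange(ceos_per_cohort)
        for ceo in range(ceos_per_cohort):
            # losers go bust (outcome 0.0); the winner hits the jackpot
            outcome = jackpot if ceo == winner else 0.0
            population_total += outcome
        winners_total += jackpot
    avg_all = population_total / (n_cohorts * ceos_per_cohort)
    avg_winners = winners_total / n_cohorts
    return safe_return, avg_all, avg_winners

safe, avg_all, avg_winners = simulate_cohorts()
print(f"safe strategy multiplies the firm by {safe:.2f}")
print(f"risky strategy, averaged over everyone: {avg_all:.2f}")
print(f"risky strategy, averaged over survivors only: {avg_winners:.2f}")
```

With these made-up numbers, the bet is a bad deal on average (0.60x vs a safe 1.10x), yet every surviving CEO shows a 3.00x result, which is the selection effect the comment describes.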

[–] normalexit@lemmy.world 7 points 2 days ago (3 children)

I could imagine a world where whole virtual organizations could be spun up, and they can just run in the background creating whole products, marketing them, and doing customer support, etc.

Right now the technology doesn't seem there yet, but it has been rapidly improving, so we'll see.

I could definitely see rich CEOs funding the creation of a "celebrity" bot that answers questions the way they do. Maybe with their likeness and voice, so they can keep running companies from beyond the grave. Throw it in one of those humanoid robots and they can keep preaching the company mission until the sun burns out.

What a nightmare.

[–] Patches@ttrpg.network 2 points 1 day ago (1 children)

I could imagine a world where whole virtual organizations could be spun up, and they can just run in the background creating whole products, marketing them, and doing customer support, etc.

Perhaps we could have it sell Paperclips. With the sole goal of selling as many paperclips as possible.

Surely, selling something as innocuous as paperclips could never go wrong.

[–] normalexit@lemmy.world 2 points 1 day ago

Certainly the CEOs will patiently ensure guardrails are in place before chasing an ROI. Right? ... Right?

Uh oh..

I have been having this vision you described for quite some time now.

As time progresses, the availability of resources on Earth increases, because we learn to collect and process them more efficiently; on the other hand, the number of jobs (or demand for human labor) decreases continuously, because more and more work gets automated.

So, if you'd draw a diagram with time on the X-axis, you'd see resource availability rising while demand for labor falls. That completely changes the game. Instead of being a society driven by a constant shortage of resources and a constant lack of workers (causing a high demand for workers and a lot of jobs), it'd be a society with a shortage of jobs (and therefore meaningful employment), but with an abundance of resources. What do we do with such a world?

[–] kingprawn@feddit.org 3 points 2 days ago (1 children)

Check out the novel Accelerando by Charles Stross, that thing is part of the plot.

[–] normalexit@lemmy.world 2 points 2 days ago

Thanks for the suggestion, I'll check it out!

[–] jordanlund@lemmy.world 11 points 2 days ago

Should be way easier to replace a CEO. No need for a golden parachute, if the AI fails, you just turn it off.

But I'd imagine right now you have CEOs being paid millions and using an AI themselves. Worst of both worlds.

[–] Bongles@lemmy.zip 9 points 2 days ago (12 children)

AI? Yes, probably. Current AI? No. I do think we'll see it happen with an LLM, and that company will probably flop. Shit, how do you even prompt for that?

[–] CanadaPlus@lemmy.sdf.org 3 points 2 days ago* (last edited 1 day ago)

If AI ends up running companies better than people

Okay, important context there. The current AI bubble will burst sooner or later. So, this is hypothetical future AGI.

Yes, if the process of human labour becoming redundant continues uninterrupted, it's highly likely, although since CEOs make their money from the intangible asset of having connections more than the actual work they'll be one of the last to go.

But, it won't continue uninterrupted. We're talking about rapidly transitioning to an entirely different kind of economy, and we should expect it will be similarly destabilising as it was to hunter gatherer societies that suddenly encountered industrial technology.

If humans are still in control, and you still have an entire top 10% of the population with significant equity holdings, there's not going to be much strategy to the initial stages. Front line workers will get laid off catastrophically, basically, and no new work will be forthcoming. The next step will be a political reaction. If some kind of make-work program is what comes out of it, human managers will still find a place in it. If it's basic income, probably not. (And if there's not some kind of restriction on the top end of wealth, as well, you're at risk of creating a new ruling elite with an incentive to kill everyone else off, but that's actually a digression from the question)

When it comes to the longer term, I find inspiration in a blog post I read recently. Capital holdings will eventually become meaningless compared to rights to natural factors. If military logic works at all the same way, and there's ever any kind of war, land will once again be supreme among them. There weren't really CEOs in feudalism, and even if we manage not to regress to autocracy there probably won't be a place for them.

[–] ArgumentativeMonotheist@lemmy.world 7 points 2 days ago (1 children)

No, because someone has to be the company's scapegoat... but if the ridiculous post-truth tendencies of some societies increase, then maybe "AI" will indeed gain "personhood", and in that case, maybe?

[–] MITM0@lemmy.world 3 points 2 days ago

Would be cool & funny if they did.

[–] melsaskca@lemmy.ca 3 points 2 days ago (2 children)

That would free up a whole shitload of money for the citizens! /s

[–] blarghly@lemmy.world 4 points 2 days ago

If AI ends up running companies better than people, won’t shareholders demand the switch?

Yes. It might be unorthodox at first, but they could just take a vote, and poof, done.

And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.

Wat?

No. What?

So you just used circular logic to make the AI a "person"... maybe you're saying once it is running the corporation, it is the corporation? But no.

Anyway, corporations are "considered people" in the US under the logic that corporations are, at the end of the day, just collections of people. So you can, say, go to a town hall to voice your opinion as an individual. And you can gather up all your friends to come with you, and form a bloc which advocates for change. You might gain a few more friends, and give your group a name, like "The Otter Defence League." In all these scenarios, you and others are using your right to free speech as a collective unit. Citizens United just says that this logic also applies to corporations.

That means CEOs would eventually have to replace themselves

CEOs wouldn't have to "replace themselves" any more than you have to find a replacement if your manager fires you from Dairy Queen.

[–] flandish@lemmy.world 7 points 2 days ago (2 children)

in all dialectical seriousness, if it appeases the capitalists, it will happen. “first they came with ai for the help desk…” kind of logic here. some sort of confluence of Idiocracy and The Matrix will be the outcome.
