this post was submitted on 14 Aug 2025
805 points (98.6% liked)

Technology

[–] redsunrise@programming.dev 305 points 1 week ago (7 children)

Obviously it's higher. If it was any lower, they would've made a huge announcement out of it to prove they're better than the competition.

[–] Ugurcan@lemmy.world 32 points 1 week ago* (last edited 1 week ago) (3 children)

I’m thinking otherwise. I think GPT5 is a much smaller model - with some fallback to previous models if required.

Since it’s running on the exact same hardware with a mostly similar algorithm, using less energy would directly mean it’s a “less intense” model, which translates into inferior quality in American Investor Language (AIL).

And 2025’s investors don’t give a flying fuck about energy efficiency.

[–] PostaL@lemmy.world 27 points 1 week ago (2 children)

And they don't want to disclose the energy efficiency becaaaause ... ?

[–] AnarchistArtificer@slrpnk.net 12 points 1 week ago

Because the AI industry is a bubble that exists to sell more GPUs and drive fossil fuel demand

[–] RobotZap10000@feddit.nl 20 points 1 week ago* (last edited 1 week ago) (1 children)

They probably wouldn't really care how efficient it is, but they certainly would care that the costs are lower.

[–] Ugurcan@lemmy.world 7 points 1 week ago (2 children)

I’m almost sure they’re keeping that for the Earnings call.

[–] ChaoticEntropy@feddit.uk 23 points 1 week ago

I get the distinct impression that most of the focus for GPT5 was making it easier to divert their overflowing volume of queries to less expensive routes.

[–] dinckelman@lemmy.world 73 points 1 week ago

Duh. Every company like this "suddenly" starts withholding public progress reports once their progress fucking goes downhill. Stop giving these parasites handouts.

[–] aeronmelon@lemmy.world 67 points 1 week ago (1 children)

Sam Altman looks like an SNL actor impersonating Sam Altman.

[–] ChaoticEntropy@feddit.uk 8 points 1 week ago* (last edited 1 week ago)

"Herr derr, AI. No, seriously."

[–] Saledovil@sh.itjust.works 61 points 1 week ago (1 children)

It's safe to assume that any metric they don't disclose is quite damning to them. Plus, these guys don't really care about the environmental impact, or what we tree-hugging environmentalists think. I'm assuming the only group they are scared of upsetting right now is investors. The thing is, even if you don't care about the environment, the problem with LLMs is how poorly they scale.

An important concept when evaluating how something scales is marginal values, chiefly marginal utility and marginal expenses. Marginal utility is how much utility you get from one more unit of whatever. Marginal expenses is how much it costs to get one more unit. And what LLMs produce is the probability that a token, T, follows a prefix, Q. So P(T|Q) (read: probability of T, given Q). This is done for all known tokens, and then, based on these probabilities, one token is chosen at random. This token is then appended to the prefix, and the process repeats, until the LLM produces a sequence which indicates that it's done talking.
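
Roughly, in Python, that sampling loop looks like this (a minimal sketch; `model_probs` is a hypothetical stand-in for whatever computes P(T|Q)):

```python
import random

def generate(model_probs, prefix, eos_token, max_tokens=256):
    """Sample tokens one at a time from P(T|Q) until the model signals it's done."""
    tokens = list(prefix)
    for _ in range(max_tokens):
        dist = model_probs(tokens)                 # {token: P(token | prefix)} over all known tokens
        choices, weights = zip(*dist.items())
        token = random.choices(choices, weights=weights, k=1)[0]
        tokens.append(token)                       # append, then repeat with the longer prefix
        if token == eos_token:                     # the "done talking" marker
            break
    return tokens
```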

If we now imagine the best possible LLM, then the calculated value for P(T|Q) would be the actual value. However, it's worth noting that this already displays a limitation of LLMs: namely, even if we use this ideal LLM, we're just a few bad dice rolls away from saying something dumb, which then pollutes the context. The larger we make the LLM, the closer its results get to the actual value. A potential way to measure this precision would be to take the difference between the true P(T|Q) and the calculated P_calc(T|Q) and count the leading zeroes, essentially counting the number of digits we got right. Now, the thing is that each additional digit only provides a tenth of the utility of the digit before it, while the cost of additional digits goes up exponentially.
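
The "count the leading zeroes" idea as a worked example (a sketch, assuming we somehow know the true probability):

```python
import math

def correct_digits(p_true, p_calc):
    """Leading zeroes of |P(T|Q) - P_calc(T|Q)|, i.e. decimal digits the model got right."""
    err = abs(p_true - p_calc)
    if err == 0:
        return math.inf                  # perfect match
    return max(0, -math.floor(math.log10(err)) - 1)

print(correct_digits(0.123456, 0.123789))  # -> 3 digits right
```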

So, exponentially decaying marginal utility meets exponentially growing marginal expenses. Which is really bad for companies that try to market LLMs.

[–] Jeremyward@lemmy.world 14 points 1 week ago (6 children)

Well, I mean, they also kinda suck. I feel like I spend more time debugging AI code than I get working code.

[–] SkunkWorkz@lemmy.world 12 points 1 week ago (1 children)

I only use it if I’m stuck. Even if the AI code is wrong, it often pushes me in the right direction to find the correct solution to my problem. Like pair programming, but a bit shitty.

The best way to use these LLMs for coding is to never use the generated code directly and to atomize your problem into smaller questions you ask the LLM.

[–] fibojoly@sh.itjust.works 8 points 1 week ago (1 children)

So rubber duck programming, right?

[–] fuzzywombat@lemmy.world 44 points 1 week ago (3 children)

Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we've hit a wall and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting and he is scared.

[–] Saledovil@sh.itjust.works 12 points 1 week ago

He's also already admitted that they're out of training data. If you've wondered why a lot more websites will run some sort of verification when you connect, it's because there's a desperate scramble to get more training data.

[–] rozodru@lemmy.world 8 points 1 week ago (2 children)

Bingo. If you routinely use LLMs/AI, you've seen it firsthand recently. ALL of them have become noticeably worse over the past few months. Even when simply used as a basic tool, it's worse. Claude, for all the praise it receives, has also gotten worse. I've noticed it starting to forget context or constantly contradicting itself, even Claude Code.

The release of GPT-5 is proof that a wall has been hit and the bubble is bursting. There's nothing left to train on, and all the LLMs have been consuming each other's waste as a result. I've talked about it on here several times already due to my work, but companies are also seeing this. They're scrambling to undo the fuck-up of using AI to build their stuff. None of what they used it to build scales. None of it. And you go on LinkedIn and see all the techbros desperately trying to hype the mounds of shit that remain.

I don't know what's next for AI but this current generation of it is dying. It didn't work.

[–] Tollana1234567@lemmy.today 8 points 1 week ago

MS already disclosed that their AI doesn't make money at all; in fact, it's costing too much. Of course he's freaking out.

[–] kescusay@lemmy.world 42 points 1 week ago (4 children)

I have to test it with Copilot for work. So far, in my experience its "enhanced capabilities" mostly involve doing things I didn't ask it to do extremely quickly. For example, it massively fucked up the CSS in an experimental project when I instructed it to extract a React element into its own file.

That's literally all I wanted it to do, yet it took it upon itself to make all sorts of changes to styling for the entire application. I ended up reverting all of its changes and extracting the element myself.

Suffice it to say, I will not be recommending GPT-5 going forward.

[–] Sanguine@lemmy.dbzer0.com 27 points 1 week ago (1 children)

Sounds like you forgot to instruct it to do a good job.

[–] Dindonmasker@sh.itjust.works 10 points 1 week ago (2 children)

"If you do anything else then what i asked your mother dies"

[–] elvith@feddit.org 10 points 1 week ago (1 children)

"Beware: Another AI is watching every of your steps. If you do anything more or different than what I asked you to or touch any files besides the ones listed here, it will immediately shutdown and deprovision your servers."

[–] GenChadT@programming.dev 19 points 1 week ago (11 children)

That's my problem with "AI" in general. It's seemingly impossible to "engineer" a complete piece of software when using LLMs in any capacity beyond editing a line or two inside individual functions. Too many times I've asked GPT/Gemini to make a small change to a file and had to revert the result, because it'd take it upon itself to re-engineer the architecture of my entire application.

[–] Squizzy@lemmy.world 15 points 1 week ago

We moved to M365 and were encouraged to try the new features. I gave Copilot an Excel sheet and told it to add 5% to each percentage in column B without going over 100%. It spat out jumbled-up data, all reading 6000%.
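
For reference, the requested operation is trivial in code; a sketch with pandas, reading "add 5%" as five percentage points capped at 100:

```python
import pandas as pd

df = pd.DataFrame({"B": [40, 87, 98]})      # column B: percentages
df["B"] = (df["B"] + 5).clip(upper=100)     # add 5 points, never exceed 100%
print(df["B"].tolist())                     # -> [45, 92, 100]
```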

[–] SGforce@lemmy.ca 32 points 1 week ago

It's the same tech. It would have to be bigger or chew through "reasoning" tokens to beat benchmarks. So yeah, of course it is.

[–] ZILtoid1991@lemmy.world 27 points 1 week ago (2 children)

When will genAI be so good, it'll solve its own energy crisis?

[–] xthexder@l.sw0.com 13 points 1 week ago

Most certainly it won't happen until after AI has developed a self-preservation bias. It's too bad the solution is turning off the AI.

[–] Transtronaut@lemmy.blahaj.zone 24 points 1 week ago

If anyone has ever wondered what it would look like if tech giants went all in on "brute force" programming, this is it. This is what it looks like.

[–] Tollana1234567@lemmy.today 24 points 1 week ago (7 children)

intense electricity demands, and WATER for cooling.

[–] kalleboo@lemmy.world 20 points 1 week ago

They literally don't know. "GPT-5" is several models, with a gating model in front that chooses which one to use depending on how "hard" it thinks the question is. They've already been tweaking the front-end to change how it cuts over. They're definitely going to keep changing it.
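
In spirit, the gating step is just a router (purely illustrative; the difficulty heuristic and model names below are made up):

```python
def route(query: str) -> str:
    """Guess how 'hard' the question is, then pick a model tier."""
    looks_hard = len(query) > 500 or any(
        w in query.lower() for w in ("prove", "derive", "step by step")
    )
    return "big-reasoning-model" if looks_hard else "small-cheap-model"

print(route("What's the capital of France?"))                  # -> small-cheap-model
print(route("Prove that the sum of two odd numbers is even"))  # -> big-reasoning-model
```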

[–] homesweethomeMrL@lemmy.world 20 points 1 week ago (2 children)

Photographer1: Sam, could you give us a goofier face?

*click* *click*

Photographer2: Goofier!!

*click* *click* *click* *click*

[–] nialv7@lemmy.world 7 points 1 week ago

Looks like he's going to eat his microphone

[–] cenzorrll@piefed.ca 7 points 1 week ago

He looks like someone in a cult. Wide open eyes, thousand yard stare, not mentally in the same universe as the rest of the world.

[–] cecilkorik@lemmy.ca 12 points 1 week ago (2 children)

So like, is this whole AI bubble being funded directly by the fossil fuel industry or something? Because AI training and the instantaneous global adoption of these models are using energy like it's going out of style. Which fossil fuels actually are (going out of style, and being used to power these data centers). Could there be a link? Gotta find a way to burn all the rest of the oil and gas we can get out of the ground before laws make it illegal. Makes sense, in their traditional who-gives-a-fuck-about-the-climate-and-environment sort of way, doesn't it?

[–] BillyTheKid@lemmy.ca 7 points 1 week ago (5 children)

I mean, AI is using like 1-2% of human energy and that's fucking wild.

My takeaway is we need more clean energy generation. Good thing we've got countries like China leading the way in nuclear and renewables!!

[–] cecilkorik@lemmy.ca 6 points 1 week ago

All I know is that I'm getting real tired of this Matrix / Idiocracy Mash-up Movie we're living in.

[–] TheObviousSolution@lemmy.ca 11 points 1 week ago

When you want to create the shiniest honeypot, you need high power consumption.

[–] vrighter@discuss.tchncs.de 10 points 1 week ago (3 children)

is there any picture of the guy without his hand up like that?
