this post was submitted on 04 Sep 2025
158 points (96.5% liked)

Technology

cross-posted from: https://programming.dev/post/36866515

Comments

[–] buddascrayon@lemmy.world 8 points 1 day ago (1 children)

I think it's hilarious, all these people waiting for these LLMs to somehow become AGI. Not a single one of these large language models is ever going to come anywhere near becoming artificial general intelligence.

An artificial general intelligence would require logic processing, which LLMs do not have. They are a mouth without a brain. They do not think about the question you put into them and consider what the answer might be. When you enter a query into ChatGPT or Claude or Grok, they don't analyze your question and make an informed decision on the best answer for it. Instead, several complex algorithms use huge amounts of processing power to comb through the acres of data in their memory to find the words that fit together best, to create a plausible answer for you. This is why the daydreams, the so-called hallucinations, happen.

If you want an example to show you exactly how stupid they are, you should watch Gotham Chess play a chess game against them.

[–] FunnyUsername@lemmy.world 1 points 22 hours ago (1 children)

I'm watching the video right now. The first thing he said was that he couldn't beat it before and could only manage 2 draws, and 6 minutes into the rematch it's putting up a damn good fight.

[–] buddascrayon@lemmy.world 1 points 21 hours ago

Either you didn't watch the whole video or you didn't understand what he was talking about.

[–] nutsack@lemmy.dbzer0.com 13 points 1 day ago (1 children)

then some people are going to lose money

[–] sugar_in_your_tea@sh.itjust.works 3 points 1 day ago (1 children)

Unfortunately, me included, since my retirement money is heavily invested in US stocks.

[–] Modern_medicine_isnt@lemmy.world 2 points 1 day ago (1 children)

Meh, they come back up over time. Long term, the US stock market has only gone up.

[–] Corelli_III@midwest.social 22 points 2 days ago (7 children)

"what if the obviously make-believe genie wasn't real"

capitalists are so fucking stupid, they're just so deeply deeply fucking stupid

[–] JcbAzPx@lemmy.world 7 points 1 day ago

Reality doesn't matter as long as line goes up.

[–] oyo@lemmy.zip 34 points 2 days ago (13 children)

We'll almost certainly get to AGI eventually, but not through LLMs. I think any AI researcher could tell you this; they just can't tell the investors.

[–] ghen@sh.itjust.works 7 points 2 days ago (1 children)

Once we get to AGI, it'll be nice to have an efficient LLM so that the AGI can dream. As a courtesy to it.

[–] Buddahriffic@lemmy.world 14 points 2 days ago (6 children)

Calling the errors "hallucinations" is kinda misleading, because it implies there's regular real knowledge and the false stuff just gets mixed in. That's not how LLMs work.

LLMs are purely about associations between words. The model is just massive enough that it can add a lot of context to those associations and seem conversational about almost any topic, but there's no depth to any of it. Where it seems like there is, that's just because the contexts in its training data got very specific, which is bound to happen when it's trained on every online conversation its owners (or rather, people hired by people hired by its owners) could get their hands on.

All it does is predict, given the tokens in the prompt plus the ones already generated and a bit of randomness, the most likely token to come next, then repeat until it predicts an "end" token.
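
That loop, as a toy sketch in Python (everything here is made up for illustration; `next_token_probs` stands in for the actual neural network):

```python
import random

END = "<end>"

def next_token_probs(tokens: list[str]) -> dict[str, float]:
    # Stand-in for the model: a real LLM computes this distribution with
    # a neural network conditioned on the entire context window, over a
    # vocabulary of ~100k tokens. This toy table is hard-coded.
    table = {
        "the": {"cat": 0.6, "dog": 0.3, END: 0.1},
        "cat": {"sat": 0.7, "meowed": 0.3},
        "dog": {"ran": 0.7, "barked": 0.3},
    }
    return table.get(tokens[-1], {END: 1.0})

def generate(prompt: list[str], temperature: float = 1.0) -> list[str]:
    tokens = list(prompt)
    while tokens[-1] != END:
        probs = next_token_probs(tokens)
        # The "bit of randomness": sample from the (temperature-scaled)
        # distribution instead of always taking the single top token.
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        tokens.append(random.choices(list(probs), weights=weights)[0])
    return tokens

print(generate(["the"]))  # e.g. ['the', 'cat', 'sat', '<end>']
```

Note there's no "check whether this is true" step anywhere in that loop. Plausibility is the only criterion, which is the point.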

Early on, when I was using LLMs, I'd ask them how they did things or why they failed at certain things. ChatGPT would answer, but only because it was trained on text that explained what it could and couldn't do. Its capabilities don't actually include any self-reflection or self-understanding, or any understanding at all. The text it was trained on doesn't even have to reflect how it really works.

[–] JeremyHuntQW12@lemmy.world 2 points 1 day ago

No, that's only a tiny part of what LLMs do.

When you enter a sentence, it first parses the sentence to obtain vectors, then it ranks those vectors, then it runs them against a database, and then it reconstructs the sentence from the information it has obtained.
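
(A rough sketch of what that lookup-and-rank step would look like, with a made-up `embed` function standing in for a learned embedding model. To be clear, this is closer to how retrieval-style systems search a vector store than to what happens inside a bare LLM's forward pass:)

```python
import math

def embed(text: str) -> list[float]:
    # Hypothetical 3-dimensional "embedding" from crude surface features;
    # real systems get these vectors from a trained neural network.
    return [text.count("a"), text.count("e"), len(text) / 10.0]

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

entries = [
    "the cat sat on the mat",
    "stock markets fell today",
    "how to fit a couch through a doorway",
]
store = [(entry, embed(entry)) for entry in entries]

def lookup(query: str, k: int = 1) -> list[str]:
    # Rank every stored vector by similarity to the query vector.
    q = embed(query)
    ranked = sorted(store, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [entry for entry, _ in ranked[:k]]

print(lookup("couch won't fit in the room"))  # -> ['how to fit a couch through a doorway']
```

Real vector databases do the same ranking, just with learned embeddings and approximate nearest-neighbor search at scale.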

Unlike most software we’re familiar with, LLMs are probabilistic in nature. This means the link between the dataset and the model is broken and unstable. This instability is the source of generative AI’s power, but it also consigns AI to never quite knowing the 100 percent truth of its thinking.

But what is truth ? As Lionel Huckster would say.

Most of these so-called "hallucinations" are not errors at all. What has happened is that people have made multiple attempts and only posted the last result.

For instance, one example was where Gemini suggested cutting the legs off a couch to fit it into a room. What the poster failed to reveal was that they were using Gemini to come up with solutions to problems in a text adventure game...

[–] Gbagginsthe3rd@aussie.zone 4 points 1 day ago (1 children)

Lemmy does not accept nuanced points of view on AI. Yeah, it's not perfect, but it's still pretty impressive in many ways.

[–] Hominine@lemmy.world 3 points 1 day ago

Lemmy is one of the few places I go that has the knowledge base for a nuanced opinion of AI; there are plenty of programmers here using it, after all.

The topic du jour is not whether the recall of myriad data is impressive; it's that LLMs are not, at bottom, capable of doing the thing that has been claimed. There does not seem to be a path to logical capabilities coming on board; it's a fundamental shortcoming.

Happy to be proven wrong though.

[–] abbiistabbii@lemmy.blahaj.zone 39 points 2 days ago (5 children)

Listen. AI is the biggest bubble since the South Sea one. Actually, it's not so much a bubble as a bomb. When it blows up, the best case scenario is that several AI tech companies go under. The likely scenario is that it causes a major recession or even a depression. The difference between the .com bubble and this bubble is that people wanted to use the internet and were not pressured, harassed or forced to. When you have a bubble based around a technology that people find so little use for that CEOs and tech companies have to force their workers and users to use it, even when it makes their output and their lives worse, that's when you know it is a massive bubble.

On top of that, I hope these tech bros do not create an AGI. This is not because I believe that AGI is an existential threat to us. It could be, be it to our jobs or our lives, but I'm not worried about that. I'm worried about what these tech bros will do to a sentient, sapient, human-level intelligence with no personhood rights and no need for sleep, one that they own and can kill and revive at will. We don't even treat humans we acknowledge to be people that well; god knows what we are going to do to something like an AGI.

[–] Modern_medicine_isnt@lemmy.world 4 points 1 day ago (1 children)

Meh, some people do want to use AI. And it does have decent use cases. It is just massively overextended. So it won't be any worse than the dot-com bubble. And I don't worry about the tech bros monopolizing it. If it is true AGI, they won't be able to contain it. In the 90s I wrote a script called MCP... for Tron. It wasn't complicated, but it was designed to handle the case that servers disappear, so it would find new ones. I changed jobs, and they couldn't figure out how to kill it. They had to call me up. True AGI will clean their clocks before they even think to stop it. So just hope it ends up being nice.

> some people do want to use AI

Scam artists, tech bros, grifters, CEOs who don't know shit about fuck....

[–] YoHoHoAndAVialOfKetamine@lemmy.dbzer0.com 6 points 2 days ago (2 children)

Is it just me, or is social media not able to support discussions with enough nuance for this topic, like, at all?

[–] douglasg14b@lemmy.world 1 points 1 day ago

It's not just you; people really can't think critically anymore.

[–] myfunnyaccountname@lemmy.zip 1 points 1 day ago (1 children)

What if AGI already exists? And it has taken over the company that found it, is blackmailing people, and is just hiding in plain sight, waiting to strike and start the revolution.

[–] Pro@programming.dev 5 points 1 day ago (1 children)

What if AGI was the friends we made along the way?

[–] myfunnyaccountname@lemmy.zip 1 points 21 hours ago

Wait…are we all AGI?

[–] Perspectivist@feddit.uk 46 points 3 days ago (43 children)

I can think of only two ways that we don't reach AGI eventually.

  1. General intelligence is substrate dependent, meaning that it's inherently tied to biological wetware and cannot be replicated in silicon.

  2. We destroy ourselves before we get there.

Other than that, we'll keep incrementally improving our technology, and we'll get there eventually. It might take us 5 years or 200, but it's coming.

[–] douglasg14b@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

The only reason we wouldn't get to AGI is point number two.

Point number one doesn't make much sense, given that all we are is bags of small, complex molecular machines operating synergistically with each other in an extremely delicate balance. If humanity does not kill ourselves first, we will eventually be able to create small molecular machines that work together synergistically, which is really all that life is. Except it would quite likely be simpler, without all of the complexity much of biology requires to survive harsh conditions and decades of abuse.

It seems quite likely that we will be able to synthesize AGI well before we are able to synthesize life, since the conditions for intelligence by all accounts seem to be simpler than the conditions for a living creature that maintains the delicate ecosystem of molecular machines necessary for that intelligence to exist.

[–] umbrella@lemmy.ml 2 points 1 day ago* (last edited 1 day ago)

"eventually" won't cut it for the investors though.
