this post was submitted on 08 Jun 2025
799 points (95.7% liked)

Technology


LOOK MAA I AM ON FRONT PAGE

(page 3) 50 comments
[–] Nanook@lemm.ee 228 points 1 day ago (51 children)

lol is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.

[–] Clent@lemmy.dbzer0.com 19 points 1 day ago (2 children)

Proving it matters. Science is constantly testing things people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will keep believing them long after science has proven them false.

[–] MNByChoice@midwest.social 77 points 1 day ago (1 children)

The "Apple" part. CEOs only care what companies say.

[–] kadup@lemmy.world 51 points 1 day ago (5 children)

Apple is significantly behind and arrived late to the whole AI hype, so of course it's in their absolute best interest to keep showing how LLMs aren't special or amazingly revolutionary.

They're not wrong, but the motivation is also pretty clear.

[–] Venator@lemmy.nz 6 points 1 day ago

Apple always arrives late to any new tech; that doesn't mean they haven't been working on it behind the scenes for just as long, though...

[–] homesweethomeMrL@lemmy.world 29 points 1 day ago

“Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of Microsoft is just fine.

[–] Xatolos@reddthat.com 4 points 23 hours ago (1 children)

So, what you're saying here is that the A in AI actually stands for artificial, and it's not really intelligent and reasoning.

Huh.

[–] coolmojo@lemmy.world 1 points 17 hours ago

The AI stands for Actually Indians /s

[–] GaMEChld@lemmy.world 20 points 1 day ago (10 children)

Most humans don't reason. They just parrot shit too. The design is very human.

[–] joel_feila@lemmy.world 7 points 1 day ago

That's why CEOs love them. When your job is 90% spewing BS, a machine that does that is impressive.

load more comments (9 replies)
[–] Auli@lemmy.ca 13 points 1 day ago

No shit. This isn't new.

[–] Jhex@lemmy.world 49 points 1 day ago (1 children)

this is so Apple, claiming to invent or discover something "first" 3 years later than the rest of the market

[–] brsrklf@jlai.lu 45 points 1 day ago (2 children)

You know, despite not really believing LLM "intelligence" works anywhere like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...

But this study seems to show they're still not even good at that. At first I was wondering how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Towers of Hanoi (which they were trained on) while failing four-move river crossings. Logically, those problems are very similar... They also failed to apply a step-by-step solution they were given.
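For reference, the Tower of Hanoi part is striking because the whole long solution falls out of one tiny recursion. A minimal sketch (function names and the disk count are mine, just for illustration) of the mechanical procedure the models would have seen countless times in training:

```python
# Classic recursive Tower of Hanoi solver: a fixed, mechanical
# procedure that generates the full move list from one simple rule.
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Return the list of (from_peg, to_peg) moves for n disks."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, aux, dst, moves)   # clear the n-1 smaller disks out of the way
    moves.append((src, dst))             # move the largest remaining disk
    hanoi(n - 1, aux, dst, src, moves)   # restack the smaller disks on top
    return moves

moves = hanoi(7)
print(len(moves))  # 2**7 - 1 = 127 moves, all from one recursion
```

The point of the contrast in the study: reproducing a long but heavily rehearsed move sequence like this is exactly what pattern matching is good at, while a short unfamiliar puzzle is not.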

[–] auraithx@lemmy.dbzer0.com 38 points 1 day ago

This paper doesn’t prove that LLMs aren’t good at pattern recognition, it demonstrates the limits of what pattern recognition alone can achieve, especially for compositional, symbolic reasoning.

[–] sev@nullterra.org 49 points 1 day ago (35 children)

Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real "reasoning" processes.
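To make the analogy concrete, here's a toy bigram Markov chain text generator (the corpus and function names are invented for illustration, and it's vastly smaller than an LLM): it never initiates anything, it only samples a continuation of whatever prompt it's handed, from a table frozen at training time.

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a bigram table: each token maps to the tokens seen after it."""
    table = defaultdict(list)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        table[cur].append(nxt)
    return table

def generate(table, start, length=5, seed=0):
    """Continue from a prompt token by sampling successors from the table."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:  # dead end: the model has nothing of its own to add
            break
        out.append(random.choice(choices))
    return " ".join(out)

table = train("the cat sat on the mat the cat ran")
print(generate(table, "the"))
```

The two limitations from the comment both show up even in this toy: `generate` only ever runs in response to a `start` prompt, and `table` never changes after `train` returns.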
