this post was submitted on 28 Sep 2025
31 points (74.6% liked)

Ask Lemmy


Ok, you have a moderately complex math problem you need to solve. You give the problem to 6 LLMs, all paid versions. All 6 return the same numbers. Would you trust the answer?

(page 2) 22 comments
[–] Rhaedas@fedia.io 2 points 1 day ago (1 children)

How trustable the answer is depends on knowing where the answers come from, which is unknowable. If the probability of the answers being generated from the original problem is high because it occurred in many different places in the training data, then maybe it's correct. Or maybe everyone who came up with the answer is wrong in the same way, and that's why there is so much correlation. Or perhaps the probability match is simply because lots of math problems tend towards similar answers.

The core issue is that the LLM is not thinking or reasoning about the problem itself, so trusting it with anything amounts to assuming the likelihood of it being right rather than wrong is high. In some areas that's a safe assumption; in others it's a terrible one.

[–] Farmdude@lemmy.world -1 points 1 day ago (2 children)

I'm a little confused after listening to a podcast with.... Damn I can't remember his name. He's English. They call him the godfather of AI. A pioneer.

Well, he believes that GPT-2 through GPT-4 were major breakthroughs in artificial intelligence. He specifically said ChatGPT is intelligent, that some type of reasoning is taking place, and that the end of humanity could come anywhere from a year to 50 years away. This is the fella who imagined a neural net modeled on the human brain, and he says it is doing much more. Who should I listen to? He didn't say some hidden AI, he said ChatGPT. Honestly, no offense, I just don't understand this epic scenario on one side and totally nothing on the other.

[–] Rhaedas@fedia.io 2 points 1 day ago (1 children)

One step might be to try to understand the basic principles behind what makes an LLM function. The YouTube channel 3blue1brown has at least one good video on transformers and how they work, and perhaps that will help you understand that "reasoning" is a very broad term that doesn't necessarily mean thinking. What goes on inside an LLM is fascinating, and it's amazing what does manage to come out that's useful, but like any tool it can't do everything well, if at all.

[–] Farmdude@lemmy.world 1 points 1 day ago (1 children)

I'll ask AI what's really going on lolool.

[–] Rhaedas@fedia.io 1 points 1 day ago

Funny, but also not a bad idea, as you can ask it to clarify things as you go. I just referenced that YT channel because he has a great ability to visually show things to help them make sense.

[–] msage@programming.dev 1 points 1 day ago
[–] scottmeme@sh.itjust.works 1 points 1 day ago

If you really want to see how good they are, have them do the full calculation for 30! (30 factorial) and see how close they get to the real number.
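For reference, the exact value is easy to compute locally with Python's arbitrary-precision integers, so there's a ground truth to compare the models against:

```python
import math

# Compute 30! exactly; Python ints have arbitrary precision,
# so there is no rounding to hide an error in the middle digits.
exact = math.factorial(30)
print(exact)  # 265252859812191058636308480000000
```

A slip anywhere in the 33 digits is immediately visible against this value, which is exactly the kind of error token-by-token generation tends to make.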

[–] General_Effort@lemmy.world -1 points 1 day ago

Probably, depending on the context. It is possible that all 6 models were trained on the same misleading data, but not very likely in general.

Number crunching isn't an obvious LLM use case, though. Depending on the task, having it create code to crunch the numbers, or a step-by-step tutorial on how to derive the formula, would be my preference.
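As a hypothetical sketch of that preference: rather than accepting a number the model quotes, ask it for code and run the code yourself. Say the problem were solving x² - 5x + 6 = 0 (a made-up example for illustration); a few lines verify the roots independently:

```python
import math

# Quadratic formula for a*x**2 + b*x + c = 0.
a, b, c = 1.0, -5.0, 6.0
disc = b * b - 4 * a * c  # discriminant; positive here, so two real roots
roots = sorted(((-b - math.sqrt(disc)) / (2 * a),
                (-b + math.sqrt(disc)) / (2 * a)))
print(roots)  # [2.0, 3.0]
```

The point is that the code is checkable and rerunnable, whereas a bare number from a chat window is neither.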
