this post was submitted on 15 Nov 2024
1257 points (99.5% liked)

Science Memes

[–] milicent_bystandr@lemm.ee 53 points 1 month ago (2 children)

Not advanced maths per se; neural networks are amazing! Fuzzy matching based on experience, taken to an incredible level. And tuneable by internal simulation (imagination).

[–] HereIAm@lemmy.world 23 points 1 month ago (3 children)

Don't be fooled into thinking computer neural networks are how the brain is structured. Throughout history we've always compared the brain to the most advanced technology of the time: from clocks, to computers with short- and long-term memory, and now to neural networks.

[–] milicent_bystandr@lemm.ee 11 points 1 month ago

That is a good point, though the architecture of computer neural networks is inspired by how we think the brain works, and if I understand correctly there is some definite similarity in the architecture.

Lots of difference though, still!

[–] Zementid@feddit.nl 7 points 1 month ago* (last edited 1 month ago) (1 children)

I would guess that every statement made is kind of true. It is a clock, a computer, an LLM,...

I would even go as far as to say an LLM is the closest to a functioning brain we can produce, from a functional perspective. And even these artificial brains are too complex to understand in detail.

[–] milicent_bystandr@lemm.ee 4 points 1 month ago (1 children)

I reckon we can get a lot closer than an LLM in time. For one thing, the mind has particular understanding of interim steps whereas, as I understand it, the LLM has no real concept of meaning between the inputs and the output. Some of this interim is, I think, an important part of how we assess truthfulness of generated ideas before we put them into words.

[–] Zementid@feddit.nl 1 points 1 month ago (2 children)

I experimented with rules like: "Summarize everything in our discussion into one text you can use as memory below your answer" and "Summarize and remove unnecessary info from this text; if contradictions occur, act curious to resolve them"... simply to mimic a short-term memory.

It kind of worked better for problem solving, but it ate tokens like crazy and the answers took longer and longer. The current GPT-4 models seem to do something similar in the background.
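The rolling-summary trick above can be sketched as a simple loop. This is only an illustration: `call_llm` and `chat_with_memory` are hypothetical names, and `call_llm` is stubbed so the control flow runs without any real LLM API. A real version would replace the stub with an actual chat-completion request.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call.
    Stubbed here so the example runs without network access."""
    return f"[model reply to {len(prompt)} chars of prompt]"

def chat_with_memory(questions):
    memory = ""  # running summary carried between turns
    replies = []
    for q in questions:
        prompt = (
            f"Summary of our discussion so far:\n{memory}\n\n"
            f"User: {q}\n"
            "Answer, then summarize everything into one text to use as "
            "memory; if contradictions occur, act curious to resolve them."
        )
        reply = call_llm(prompt)
        replies.append(reply)
        # Compress the memory for the next turn. Note the extra call per
        # turn: this is why the trick "ate tokens like crazy".
        memory = call_llm(
            f"Summarize and remove unnecessary info:\n{memory}\nQ: {q}\nA: {reply}"
        )
    return replies, memory
```

Each turn costs two model calls (answer + summary compression), and the summary rides along in every prompt, which matches the token blow-up described above.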

[–] milicent_bystandr@lemm.ee 1 points 1 month ago

I think that's still different from what I'm thinking of as interim steps, though.

...but as I think how to explain I realize I'm about to blather about things I don't understand, or at least haven't had time to think about! So I'd better leave it there!

[–] Infomatics90@lemmy.ca 1 points 1 month ago

I would really like to get into LLM and AI development, but the math... whoosh, right over my head.

[–] Umbrias@beehaw.org 2 points 2 weeks ago (1 children)

there is certainly math going on in the brain at various levels, both as equivalent models and as identical sorts of calculations; it's not just fuzzy matching.

[–] milicent_bystandr@lemm.ee 1 points 2 weeks ago (1 children)

But probably not calculating trigonometry and calculus when juggling, right?

[–] Umbrias@beehaw.org 2 points 2 weeks ago

almost certainly doing those things and more (especially lin alg and diffeq solutions, and who knows what equivalent mathematical representations). Why wouldn't it? Even in stereotyped movements, there are subtle feedback variations you need to account for.