[–] DahGangalang@infosec.pub 8 points 11 months ago (2 children)

For the record, comp sci major here.

So I understand all that, but my counterpoint: can we prove by empirical measure that humans operate in a way that is significantly different? (If there is such proof, I would love to know, because I was cornered by a similar talking point when making a similar argument some weeks ago.)

[–] Kushia@lemmy.ml 7 points 11 months ago* (last edited 11 months ago) (2 children)

Can you make a logical decision on your own even when you don't have all the facts?

The current version of AI cannot; it makes guesses based on how we've programmed it, just like every other computer program.

[–] pixelscript@lemmy.ml 6 points 11 months ago (1 children)

I fail to see the distinction between "making a logical decision without all the facts" and "making guesses based on how [you've been programmed]". Literally, what is the difference?

I'll concede that human intelligence is several orders of magnitude more powerful, can act upon a wider space of stimuli, and can do it at a fraction of the energy cost. That definitely sets it apart. But I disagree that it's the only "true" form of intelligence.

Intelligence is the ability to accumulate new information (i.e. memorize patterns) and apply that information to respond to novel situations. That's exactly what AI does. It is intelligence. Underwhelming intelligence, but nonetheless intelligence. The method of implementation, the input/output space, and the matter of degree are irrelevant.

[–] Kushia@lemmy.ml 4 points 11 months ago* (last edited 11 months ago) (1 children)

It's not just about storage and retrieval of information but also about how (and whether) the entity understands the information and can interpret it. This is why an AI still struggles to drive a car: it doesn't actually understand the difference between a small child and a speed bump.

Meanwhile, a simple insect can interpret stimulus information and independently make its own decisions without assistance or having to be pre-programmed by an intelligent being on how to react. An insect can even set its own goals based on that information, like acquiring food or avoiding predators. The insect does all of this because it is intelligent.

In contrast to the insect, an AI like ChatGPT is no more intelligent than a calculator, as it relies on an intelligent being to understand the subject and formulate the right stimulus in the first place. Its result is simply an informed guess at best; there's no understanding, as the insect has, that it needs to zigzag in a particular way because it wants to avoid getting eaten by predators. Rather, AI as we know it today is really just a very good information retrieval system and not intelligent at all.

[–] pixelscript@lemmy.ml 1 points 11 months ago (1 children)

"Understanding" and "interpretation" are themselves nothing more than emergent properties of advanced pattern recognition.

I find it interesting that you bring up insects as your proof of how they differ from artificial intelligence. To me, they are among nature's most demonstrably clockwork creatures. I find some of their rather predictable "decisions" in response to certain kinds of stimuli to be evidence that they aren't so different from an AI that responds "without thinking".

The way you can tease out a response from ChatGPT by leading it by the nose with very specifically worded prompts, or put it on the spot to hallucinate facts that are untrue, is, in my mind, no different from how so-called "intelligent" insects can be stopped in their tracks by a harmless line of Sharpie ink, or made to death spiral on a faulty pheromone trail, or driven to thrust themselves into the electrified jaws of a bug zapper. In both cases their inner machinations are fundamentally reactive and thus exploitable.

Stimulus in, action out. Just needs to pass through some wiring that maps the I/O. Whether that wiring is fleshy or metallic doesn't matter. Any notion of the wiring "thinking" is merely anthropomorphism.
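
In code terms, the whole loop is something like this toy Python sketch; the stimuli and actions here are invented for illustration, not a model of any real insect or program:

```python
# A toy "reactive agent": a fixed table mapping stimulus to action,
# standing in for the wiring, fleshy or metallic, that maps the I/O.
# All stimulus and action names are made up for illustration.

REFLEXES = {
    "pheromone_trail": "follow_trail",
    "sharpie_line": "stop",          # the Sharpie exploit: a harmless line halts it
    "uv_light": "fly_toward_light",  # straight into the bug zapper
    "predator_shadow": "zigzag",
}

def react(stimulus: str) -> str:
    """Stimulus in, action out -- no thinking in between."""
    return REFLEXES.get(stimulus, "idle")

print(react("sharpie_line"))  # -> stop
print(react("uv_light"))      # -> fly_toward_light
```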

[–] Kushia@lemmy.ml 1 points 11 months ago (1 children)

You said it yourself; you as an intelligent being must tease out whatever response you seek from ChatGPT by providing it with the correct stimuli. An insect operates autonomously, even if in simple or predictable ways. The two are very different ways of responding to stimuli, even if the results seem similar.

[–] pixelscript@lemmy.ml 1 points 11 months ago

The only difference you seem to be highlighting here is that an AI like ChatGPT is only active when queried while an insect is "always on". I find this to be an entirely irrelevant detail to the question of whether either one meets the criteria of intelligence.

[–] DahGangalang@infosec.pub 3 points 11 months ago* (last edited 11 months ago) (1 children)

I have to say no, I can't.

The best decision I can make is a guess, based on the logic I've derived from my own experiences, which I then compare and contrast with the current input.

I will say that "current input" for humans seems to be broader than what is achievable for AI, and the underlying mechanism that lets us assemble our training set (read: past experiences) into useful and usable models appears to be more robust than current tech. But to the best of my ability to explain it, this appears to be an operation comparable to what is happening in the current iterations of LLMs/AI.

Ninjaedit: spelling

[–] Kushia@lemmy.ml 4 points 11 months ago* (last edited 11 months ago) (1 children)

If you can't make logical decisions then how are you a comp sci major?

Seriously though, the point is that when making decisions, you as a human understand many of their ramifications and can use your own logic to make the best decision you can. You are able to make much more flexible decisions and exercise caution when you're unsure. This is actual intelligence at work.

A language processing system has to have its prompt framed in the right way, it has to have knowledge about the topic in its database, and it only responds in ways it's been programmed to. It doesn't understand the ramifications of what it puts out.

The two "systems" are vastly different in both their capabilities and output. Even in image processing AI absolutely sucks at driving a car for instance, whereas most humans can do it safely with little thought.

[–] DahGangalang@infosec.pub 3 points 11 months ago (1 children)

> and exercise caution when you're unsure

I don't think that fully encapsulates a counterpoint, but it has the beginnings of a solid one against the argument I've laid out above (again, not one I actually devised, just one that really put me on my heels).

The ability to recognize when it's out of its depth does not appear to be something modern "AI" can handle.

As I chew on it, I can't help but wonder what it would take to have AI recognize that. It doesn't feel like it should be difficult to have a series of nodes along the information-processing matrix that track "confidence levels". Though, I suppose that's kind of what is happening when the creators of these projects try to keep them from processing controversial topics. It's my understanding that those instances act as something of a short circuit: when confidence "that I'm allowed to talk about this" drops below a certain level, the AI spits out a canned response instead of actually attempting to process the input against the model.
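
Something like this minimal Python sketch, where the `confidence_that_allowed` scorer, the keyword check, and the threshold are all hypothetical stand-ins; the real guardrails are surely far more involved:

```python
CANNED_RESPONSE = "Sorry, I can't discuss that topic."
CONFIDENCE_THRESHOLD = 0.5  # made-up cutoff, purely for illustration

def confidence_that_allowed(prompt: str) -> float:
    # Hypothetical scorer for confidence "that I'm allowed to talk about this".
    return 0.1 if "controversial" in prompt.lower() else 0.9

def run_model(prompt: str) -> str:
    # Placeholder for actually processing the input against the model.
    return f"(model output for: {prompt!r})"

def respond(prompt: str) -> str:
    # The short circuit: below the threshold, skip the model entirely
    # and spit out the canned response instead.
    if confidence_that_allowed(prompt) < CONFIDENCE_THRESHOLD:
        return CANNED_RESPONSE
    return run_model(prompt)

print(respond("Tell me about birds."))                  # -> model output
print(respond("Tell me about a controversial topic."))  # -> canned response
```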

The above is intended as more of a brain dump than a coherent argument. You've given me something to chew on, and for that I thank you!

[–] Kushia@lemmy.ml 3 points 11 months ago

Well, it's an online forum and I'm responding while getting dressed and traveling to an appointment, so concise responses are what you're gonna get. In a way it's interesting that I can juggle all of these complex tasks reasonably effortlessly, something else an existing AI cannot do.

[–] Wheaties@hexbear.net 2 points 11 months ago (1 children)

You are ~30 trillion cells all operating concurrently with one another. Are you suggesting that is in any way similar to a Turing machine?

[–] DahGangalang@infosec.pub 1 points 11 months ago (1 children)

Yes? I think that depends on your specific definition and requirements of a Turing machine, but I think it's fair to compare the amalgamation of cells that is me to the "AI" LLM programs of today.

While I do think that the complexity of input, output, and "memory" of LLM AIs is limited in current iterations (and thus makes comparison to "human" intelligence feel far-fetched), I do think the underlying process is fundamentally comparable.

The things that make me "intelligent" are just a robust set of memories, lessons, and habits that allow me to assimilate new information and experiences in a way that makes sense to (most of) the people around me. (This is abstracting away the fact that this process is largely governed by chemical reactions, but considering that consciousness appears to be just a particularly complicated chemistry problem, that reinforces the point I'm trying to make, I think.)

[–] Wheaties@hexbear.net 0 points 11 months ago (1 children)

My definition of a Turing machine? I'm not sure you know what Turing machines are. It's a general-purpose computer, described in principle. And, in principle, a computer can only carry out one task at a time. Modern computers are fast, and they may have several CPUs stitched together and operating in tandem, but they are still fundamentally limited by this. Bodies don't work like that. Every part of them is constantly reacting to its environment and its neighboring cells, concurrently.

You are essentially saying, "Well, the hardware of the human body is very complex, and this software is(n't quite as) complex, so the same sort of phenomenon must be taking place." That's absurd. You're making a lopsided comparison between two very different physical systems. Why should the machine we built for doing sums just so happen to reproduce a phenomenon we still don't fully understand?

[–] DahGangalang@infosec.pub 1 points 11 months ago* (last edited 11 months ago) (1 children)

That's not what I intended to communicate.

I feel the Turing machine portion is not particularly relevant to the larger point. Not to belabor it, but to be as clear as I can: I don't think, nor intend to communicate, that humans operate in the same way as a computer; I don't mean to say that we have a CPU that handles instructions in a (more or less) one-at-a-time fashion with specific arguments that determine the flow of data, as a computer would do with assembly instructions. I agree that anyone arguing human brains work like that is missing a lot in both neuroscience and computer science.

The part I mean to focus on is the models of how AIs learn, specifically in neural networks. There might be some merit in likening a cell to a transistor/switch/logic gate for some analogies, but for the purposes of talking about AI, I think comparing a brain cell to a node in a neural network is most useful.

The individual nodes in a neural network each have minimal impact on converting input to output, yet each one does influence the processing of the one into the other. And with the way we train AI, how each node tweaks the result will depend solely on the past input that has been given to it.

When met with a situation, our brains process information in a comparable way: any given input will be processed by a practically uncountable number of neurons, each influencing our reactions (emotional, physical, chemical, etc.) in minuscule ways based on how our past experiences have "treated" those individual neurons.

In that way, I would argue that the processes by which AIs are trained and operated are comparable to those of the human mind, though they do seem to lack complexity.
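
As a bare-bones illustration of what I mean (a single node with one weight and one bias, nudged by gradient descent toward a made-up set of past inputs; real networks stack millions of these):

```python
# One "node": prediction = weight * x + bias.
# Training nudges weight and bias slightly toward each example it sees,
# so how the node tweaks the result depends entirely on its past input.

weight, bias = 0.0, 0.0
learning_rate = 0.01

# Toy "past experience": pairs drawn from y = 2x + 1.
past_input = [(x, 2 * x + 1) for x in range(10)]

for _ in range(1000):
    for x, target in past_input:
        error = (weight * x + bias) - target
        weight -= learning_rate * error * x  # gradient step for the weight
        bias -= learning_rate * error        # gradient step for the bias

print(round(weight, 3), round(bias, 3))  # settles near 2.0 and 1.0
```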

Ninjaedit: I should proofread my post before submitting it.

[–] Wheaties@hexbear.net 2 points 11 months ago (1 children)

I agree that there are similarities in how groups of nerve cells process information and how neural networks are trained, but I'm hesitant to say that's the whole picture of the human mind. Modern anesthesiology suggests that microtubules, structures within cells, also play a role in cognition.

[–] DahGangalang@infosec.pub 2 points 11 months ago

Right.

I don't mean to say that the mechanism by which human brains learn and the mechanism by which AI is trained are 1:1 directly comparable.

I do mean to say that the process looks pretty similar.

My knee-jerk reaction is to analogize it as comparing a fish swimming to a bird flying. Sure, there are some important distinctions (e.g. birds need to generate lift while fish can rely on buoyancy), but in general the two look pretty similar (i.e. they both take a fluid medium and push against it to generate thrust).

And so with that, it feels fair to say that learning, the storage and retrieval of memories/experiences, and the way that stored information shapes our subconscious (and probably conscious, too) reactions to the world around us seem largely comparable to the processes that underlie the training of "AI" and LLMs.