this post was submitted on 06 Sep 2025
29 points (96.8% liked)

Asklemmy

50304 readers
796 users here now

A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not about using or getting support for Lemmy (for context, see the list of support communities and tools for finding communities)
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion


founded 6 years ago
top 29 comments
[–] liquefy4931@lemmy.world 11 points 2 days ago

After I learned how LLMs function, the "AI" we use in reality was categorized within my mind as something entirely new and different from the fictional, cognizant, sapient artificial intelligence in my favorite novels.

We calculated back in the 70s that the algorithms LLMs run on would only get us so far, and we've nearly reached that point. A related article that basically covers it all: https://venturebeat.com/ai/llms-are-stuck-on-a-problem-from-the-70s-but-are-still-worth-using-heres-why

So basically, my view hasn't changed. Still waiting for my cyborg buddy.

[–] lime@feddit.nu 27 points 3 days ago

not at all, just as how boston dynamics' atlas didn't change how i viewed robocop.

text generators just have very little in common with intelligent, autonomous artificial entities.

[–] tehn00bi@lemmy.world 1 points 2 days ago

I keep thinking our AI will lead us to something like the Eloi of the Time Machine, and the Morlocks will be the machines that run everything.

[–] diptchip@lemmy.world 2 points 2 days ago

Hate is a strong word… I feel like humans and machines coexist a little too well in the movies, except when the lack of coexistence IS the plot.

[–] Nemo@slrpnk.net 6 points 3 days ago

No, but actually studying Artificial Intelligence a decade ago in college did.

We had language models back then, too, they just weren't as good.

[–] NKBTN@feddit.uk 6 points 3 days ago (1 children)

They've made fictional AI seem that much more far-fetched.

Obviously, we all learn by imitation and instruction - but LLMs have shown that's only part of the puzzle.

[–] yogthos@lemmy.ml 1 points 2 days ago (1 children)

I think LLMs could provide a human friendly interface for robots. There's a lot of interesting work happening with embodied AI now, and in my opinion embodiment is the key ingredient for making AI intelligent in a human sense. A robot has to interact with the environment and it builds an internal model of the world for making decisions. This creates a feedback loop where the robot can learn the rules of the world and do meaningful interaction, and that's precisely what's missing with LLMs.

[–] Achyu@lemmy.sdf.org 1 points 2 days ago (1 children)

So an LLM with realtime learning/updation?

[–] yogthos@lemmy.ml 1 points 2 days ago

Not necessarily just an LLM on its own. The key part is that the internal model is coupled with reinforcement learning where it becomes rooted in the behaviors of the physical world. Real time continuous learning is the way to get there, but it can be done using different approaches. For example, neurosymbolic AI combines deep neural networks with symbolic logic. The LLM is used to parse and classify noisy input data, while a logic engine is used to make decisions about it. My expectation is that we'll see more of these types of approaches where different machine learning techniques are combined together going forward. LLMs will just be one part of the bigger whole.
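The split described above can be sketched in a few lines. This is a hypothetical toy, not any real system: a keyword classifier stands in for the LLM's job of turning noisy input into symbols, and a tiny rule table stands in for the logic engine that makes the decision.

```python
# Toy neurosymbolic loop: a "perception" layer turns messy input into
# symbolic facts; a separate rule engine decides what to do with them.

def perceive(noisy_text):
    """Stand-in for an LLM: map noisy input to a symbolic observation."""
    text = noisy_text.lower()
    return {"obstacle_ahead": "wall" in text or "obstacle" in text}

# Symbolic layer: explicit, inspectable rules; first match wins.
RULES = [
    (lambda facts: facts["obstacle_ahead"], "turn_left"),
    (lambda facts: not facts["obstacle_ahead"], "move_forward"),
]

def decide(facts):
    for condition, action in RULES:
        if condition(facts):
            return action

action = decide(perceive("sensor says: WALL ahead!!"))
```

The point of the split is that the decision logic stays auditable even when the perception side is a black box.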

[–] MourningDove@lemmy.zip 4 points 2 days ago

It is not now, nor will it ever be, anything like the way it's depicted in sci-fi fantasy. We are never going to achieve anything close to a Star Trek-level of symbiosis with tech. Everything we ever do will be weaponized, and what can't be turned on our adversaries and ultimately ourselves will be used to make the less intelligent even more so.

It's going to drain our last vestige of creativity as it runs headlong through every culture, and in its wake will be the unmotivated remains of what passion for the arts we once had, until one day we will be nothing more than animals walking in and out of rooms.

Trust that nothing good lies that way.

[–] m532@lemmygrad.ml 3 points 3 days ago

I noticed that authors are mostly completely wrong about everything. They can't write machines. They can't write animals either. And of course they can't write aliens. They can only write humans and then use that for "the machine has feelings" bs. Those things in the stories are not machines, they are badly written humans.

[–] tias@discuss.tchncs.de 4 points 3 days ago

In sci-fi, AI devices (like self-driving cars or ships, or androids) seem like an integrated unit where any controls or sensors they have are like human limbs and senses. The AI "wills" the engine to start. I always imagined AI would be like a single organism where neurons are connected directly to the body.

Given the development of LLMs and how they are used, it now seems more likely that AI will be an additional "smart layer" on top of the dumb machinery, with actions performed by emitting tokens/commands ("raise arm 35 degrees") that are sent to APIs. The interaction will be indirect, in the way that we control the TV with the remote.
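That "smart layer over dumb machinery" idea can be sketched as follows. Everything here is hypothetical (the `plan` function, the `ArmDriver` class, the command names): the point is only that the model layer emits structured commands and never touches the hardware directly.

```python
# Toy "smart layer": the planner emits commands; only the driver
# (the dumb machinery) actually changes hardware state.

def plan(goal):
    """Stand-in for the AI layer: returns commands, touches nothing."""
    if goal == "wave":
        return [("raise_arm", 35), ("lower_arm", 35)]
    return []

class ArmDriver:
    """Dumb machinery: executes commands and tracks the arm angle."""
    def __init__(self):
        self.angle = 0
        self.log = []

    def execute(self, command):
        name, degrees = command
        self.angle += degrees if name == "raise_arm" else -degrees
        self.log.append(f"{name} {degrees}")

driver = ArmDriver()
for cmd in plan("wave"):
    driver.execute(cmd)
```

The remote-control analogy holds: the planner never sees the motor, only the command vocabulary the API exposes.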

[–] FRYD@sh.itjust.works 3 points 3 days ago* (last edited 3 days ago)

AI in fiction is a boring concept to me. It’s presented either as β€œWhat is a person?” or β€œWhat if we create an evil god?”. To me anything with feelings is a person and the other is just a chrome paint job on evil god characters in non sci-fi genres, so it’s just a speculative dead end.

AI in real life is much more interesting, and its proliferation makes fictional AI seem even more bland. Real-life AI is first and foremost not intelligent, and probably not even close, though we have no rubric to grade it by because we don't really know what intelligence is yet. Still, machine learning algorithms highlight patterns in the world and in our behaviors that are fascinating precisely because they show how complicated the world and people are in ways our brains just passively process. Kind of like how QWOP highlights just how difficult and complicated walking is.

[–] Sibyls@lemmy.ml 2 points 2 days ago (1 children)

It's given me an idea of how we get there. Clearly, modern LLMs aren't near the level seen in movies, but we will get there. We will move on from LLMs within a few years to a more adaptive model, as we further increase our understanding of AI and neural networks.

I see modern LLMs as task tools: they can interpret our requests and pass them on to a more intelligent model type, which will save processing power for the newer AIs.

People in this thread seem to have a lot of bias; they can't see how the tech will evolve. You need to keep an open mind and look at where tech is being developed; with AI, it will be new architectures.

Their bias is a direct response to the rhetoric from the 'leaders' of the AI industry, who have collected billions of dollars and turned it into BS expectations.

[–] yogthos@lemmy.ml 2 points 3 days ago

I think we might actually get Star Wars style droids in our lifetimes.

[–] Vanth@reddthat.com 2 points 3 days ago

It's more that the latest few months of AI-related sci-fi is oversaturated with Commentary^TM^ and Discourse^TM^ on the dangers of LLMs and I'm getting bored with it.

I had known about development of the technology for a couple years before it hit the mainstream. I have been completely unsurprised with how it has gone.

[–] seliaste@lemmy.blahaj.zone 1 points 2 days ago

My favorite character is a robot, and while sometimes she sounds like an LLM, she's much more than that. She actually learns how humans are, and it's beautiful, and I love her

[–] Tenderizer78@lemmy.ml 2 points 3 days ago (1 children)

I now consider it stupid and destructive to treat AI as having emotion just because they act human.

[–] NKBTN@feddit.uk 3 points 3 days ago (2 children)

In other words, the bad guys in Blade Runner were right all along

[–] Nemo@slrpnk.net 5 points 3 days ago

Who're the bad guys in Blade Runner? The giant corporation that creates human-like entities only to enslave them?

[–] afb@lemmy.world 4 points 3 days ago

Blade Runner's a bit different since the replicants are flesh and blood, just not naturally born.

[–] hera@feddit.uk 2 points 3 days ago

This doesn't really answer the question, but I was reading an Asimov short story the other day, "Belief", and it felt like he'd hit the nail on the head such a long time ago.

[–] Twakyr@feddit.org 1 points 2 days ago

Trade it for a pc?

[–] Tracaine@lemmy.world -1 points 3 days ago (2 children)

It hasn't. I don't know what an LLM is.

[–] davidgro@lemmy.world 4 points 2 days ago* (last edited 2 days ago)

It stands for Large Language Model, and that's what ChatGPT, Gemini, Grok, etc. are. They are all LLMs. They are also called 'AI' (Artificial Intelligence), but they are not at all intelligent; they just match patterns and produce one word at a time, like a very complex autocomplete on a phone keyboard.

They very often get facts wrong, but they are designed to sound confident and knowledgeable even when completely incorrect, which is a problem because humans tend to assume honesty.
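The "one word at a time" point can be illustrated with a toy. The tiny bigram table below is a made-up stand-in for the billions of learned weights in a real LLM; the only thing it shares with the real thing is the loop: pick the likeliest next word, append it, repeat.

```python
# Toy next-word generator: generation is just repeated next-word picks.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def next_word(word):
    """Greedy choice: the most probable continuation, or None."""
    candidates = BIGRAMS.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(prompt, max_words=5):
    words = prompt.split()
    for _ in range(max_words):
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

sentence = generate("the")  # "the cat sat down"
```

Notice there is no notion of truth anywhere in the loop, only likelihood, which is why the output sounds fluent whether or not it is correct.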

[–] Reverendender@sh.itjust.works 1 points 3 days ago* (last edited 3 days ago)

Then how can you be sure you haven’t been influenced?

[–] hera@feddit.uk 0 points 3 days ago

Real-life LLMs have shown me the potential for the world to be just as miserable and dystopian as in a lot of sci-fi. But if this is where we are now, then maybe most sci-fi doesn't take it far enough: people will stop thinking for themselves, rely on AI for everything, and blindly believe what it tells them.