this post was submitted on 10 Aug 2025
98 points (99.0% liked)

technology

[–] yogthos@lemmygrad.ml 1 points 3 weeks ago (1 children)

Being self-directed requires needs or wants. Those needs can also be externalized, the way we do with LLMs today: the user prompt generates a goal for the system, which then works to accomplish it. That said, I entirely agree that self-directed systems are more interesting. If a system has needs that drive it to maintain homeostasis, such as keeping its energy level near an optimum, then it can act and learn autonomously.
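That kind of homeostatic drive can be sketched as an intrinsic-reward loop. This is a toy illustration with made-up dynamics, not a description of any real system: the agent's only reward is how close its energy sits to a set point, and it learns action values from that internal signal alone, with no external goal ever supplied.

```python
import random

class HomeostaticAgent:
    """Toy agent whose only drive is keeping energy near a set point."""

    def __init__(self, setpoint=0.7):
        self.setpoint = setpoint
        self.energy = 0.5
        # crude action-value estimates, learned from intrinsic reward
        self.values = {"forage": 0.0, "rest": 0.0}

    def intrinsic_reward(self):
        # reward is highest (zero) when energy sits exactly at the set point
        return -abs(self.energy - self.setpoint)

    def step(self):
        # epsilon-greedy choice between the two actions
        if random.random() < 0.1:
            action = random.choice(list(self.values))
        else:
            action = max(self.values, key=self.values.get)
        # invented dynamics: foraging gains energy, resting slowly drains it
        if action == "forage":
            self.energy = min(1.0, self.energy + 0.1)
        else:
            self.energy = max(0.0, self.energy - 0.05)
        reward = self.intrinsic_reward()
        # incremental value update (learning rate 0.1)
        self.values[action] += 0.1 * (reward - self.values[action])
        return action, reward

agent = HomeostaticAgent()
for _ in range(200):
    agent.step()
```

Nothing outside the agent tells it what to do; the drive to stay near the set point is what generates behavior and learning.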

[–] Awoo@hexbear.net 2 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

> The user prompt can generate a goal for the system, and then it will work to accomplish it.

OK, but how does it get intelligent before the user prompt?

The AI isn't useful until it has grown and evolved. I'm talking about the earlier stages.

[–] yogthos@lemmygrad.ml 1 points 3 weeks ago

We can look at video-generating models as examples. I'd argue they have to maintain a meaningful, persistent internal representation of the world. Consider something like Genie: https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/

It doesn't have volition, but it does have intelligence in the domain of creating consistent simulations. So it does seem like you can get domain-specific intelligence through reinforcement training.
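The role of a persistent internal state in keeping simulations consistent can be shown with a toy world model. This is a deliberately tiny sketch (Genie is a large learned neural model, nothing like this): it learns (state, action) → next-state transitions from experience, then rolls out futures purely from its own internal state, and the rollout stays consistent because that state persists between steps.

```python
class ToyWorldModel:
    """Learns (state, action) -> next_state transitions from experience,
    then replays the world from its own internal state, with no further
    access to the real environment."""

    def __init__(self):
        self.transitions = {}   # learned dynamics
        self.state = None       # persistent internal representation

    def observe(self, state, action, next_state):
        # record a transition seen in the real environment
        self.transitions[(state, action)] = next_state

    def reset(self, state):
        self.state = state

    def simulate(self, action):
        # advance the *internal* state; consistency comes from persisting it
        self.state = self.transitions[(self.state, action)]
        return self.state

# learn a tiny grid world: positions 0..3, actions move left/right
model = ToyWorldModel()
for pos in range(4):
    model.observe(pos, "right", min(3, pos + 1))
    model.observe(pos, "left", max(0, pos - 1))

# simulate entirely inside the model, starting from position 0
model.reset(0)
rollout = [model.simulate(a) for a in ["right", "right", "left", "right"]]
# rollout is internally consistent: [1, 2, 1, 2]
```

If the model discarded its state after each step, successive frames of the "simulation" would have no relation to each other — which is exactly the consistency problem a video world model has to solve.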