this post was submitted on 25 Nov 2024
56 points (95.2% liked)

Asklemmy

43990 readers
835 users here now

A loosely moderated place to ask open-ended questions

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not about using Lemmy or getting Lemmy support: for that, see the list of support communities and community-finding tools below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Icon by @Double_A@discuss.tchncs.de

founded 5 years ago
Tehdastehdas@lemmy.world 2 points 2 days ago

Me too, but I came at it from the context of AGI safety: eventually we will build a superintelligent machine without any wisdom, one that could be handed any moral system at all, even paperclip maximization.
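The point that capability is independent of values can be sketched in a few lines. This toy example (my own illustration, with hypothetical outcome numbers, not anything from the linked draft) shows the same planning procedure producing opposite behaviour depending solely on which utility function it is given:

```python
def plan(actions, utility):
    """Pick the action the agent believes maximizes its utility."""
    return max(actions, key=utility)

# Hypothetical world states reached by each action.
outcomes = {
    "run_factory": {"paperclips": 1000, "human_welfare": -5},
    "help_humans": {"paperclips": 0,    "human_welfare": 10},
}
actions = list(outcomes)

# The identical planner, handed two different "moral systems":
paperclip_agent = plan(actions, lambda a: outcomes[a]["paperclips"])
aligned_agent   = plan(actions, lambda a: outcomes[a]["human_welfare"])

print(paperclip_agent)  # run_factory
print(aligned_agent)    # help_humans
```

The competence (the `plan` function) never changes; only the goal does. That is why an unwise superintelligence is compatible with any goal whatsoever.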

So here's my draft of a logical path to maximal morality, and consequent values: https://www.quora.com/If-you-were-to-come-up-with-three-new-laws-of-robotics-what-would-they-be/answers/23692757

Of course it was downvoted to oblivion on LessWrong, probably because of the belief laid out here: https://www.lesswrong.com/posts/NnohDYHNnKDtbiMyp/fake-utility-functions