this post was submitted on 24 Sep 2023
98 points (95.4% liked)
Asklemmy
Run simulations on what the best system of governance would be. You'd want to test across different cultures/countries/technological eras to get an idea of which would be the most resilient; maybe you'd get different results depending on what you were testing. Even the definition of "best system" would need a lot of clarification.
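A purely illustrative toy sketch of what such a simulation might look like. Every system name, era, and survival probability below is invented for the sake of the example; the point is only the shape of the experiment (define candidates, define eras, run many trials, compare resilience scores):

```python
import random

# Hypothetical toy "governance resilience" simulation. All names and
# numbers here are made-up placeholders, not real findings.
SYSTEMS = ["direct_democracy", "republic", "technocracy", "monarchy"]
ERAS = ["agrarian", "industrial", "information"]

# Arbitrary base survival odds per simulated "century" (invented).
BASE_ODDS = {"direct_democracy": 0.60, "republic": 0.70,
             "technocracy": 0.50, "monarchy": 0.55}

# Pretend each era stresses systems differently (also invented).
ERA_MODIFIER = {"agrarian": 0.00, "industrial": -0.05, "information": 0.05}

def run_trial(system, era, rng):
    """One simulated century: True if the system survives it."""
    return rng.random() < BASE_ODDS[system] + ERA_MODIFIER[era]

def resilience(system, trials=10_000, seed=0):
    """Fraction of trials survived, averaged over all eras."""
    rng = random.Random(seed)
    survived = sum(run_trial(system, era, rng)
                   for era in ERAS for _ in range(trials))
    return survived / (trials * len(ERAS))

if __name__ == "__main__":
    # Rank the toy systems by their simulated resilience.
    for s in sorted(SYSTEMS, key=resilience, reverse=True):
        print(f"{s}: {resilience(s):.3f}")
```

The hard part the comment points at isn't the loop, it's everything hidden in those placeholder numbers: what "survives" means, and how you'd ever calibrate the odds.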
An AI would decide that an AI-driven dictatorship would be most effective at implementing whatever goals you gave it.
You'd obviously need to give it constraints such as "administrable by humans", and if you're looking across different technological eras, AI wouldn't have been available to something like 99% of humanity anyway.
It wouldn't be the worst idea to come out of it, to be honest.
Why bother with simulations of governance systems and not governance itself at that point?
I do understand "the risk" of putting AI behind the steering wheel, but if you're already going to be trusting it this far, the last step probably doesn't actually matter.
That leaves too much room for subjective interpretation. Ultimately, the answer to which system of governance lasts longest in a steady state will of course be to kill all humans (because that lasts for infinite time, and you can't beat that kind of longevity!). If you add the constraint that at least some must remain alive, it becomes enslaving all humans (because otherwise they'll find some way to mess everything up). And if there is something in there about being "happy" (more or less), then it becomes The Matrix: trick them into thinking they are happy, because they cannot handle any real responsibility.
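That loophole chain is a classic objective-mis-specification pattern, and it can be sketched in a few lines. Everything below (the policies, the longevity numbers, the constraint names) is invented purely to illustrate how each added constraint just shifts the optimizer to the next loophole:

```python
# Hypothetical toy model of the loophole chain above. An optimizer
# maximizing "system longevity" picks the next loophole each time a
# constraint is bolted on. All policies and numbers are made up.
POLICIES = {
    # name: (longevity, humans_alive, humans_report_happiness)
    "eliminate_humans": (float("inf"), False, False),
    "enslave_humans":   (1_000_000, True, False),
    "matrix":           (900_000, True, True),
}

def optimize(constraints=()):
    """Return the longest-lived policy satisfying every constraint."""
    feasible = {name: vals for name, vals in POLICIES.items()
                if all(check(vals) for check in constraints)}
    return max(feasible, key=lambda name: feasible[name][0])

alive = lambda vals: vals[1]       # "at least some must remain alive"
seem_happy = lambda vals: vals[2]  # "happy" as reported, not as lived

print(optimize())                     # eliminate_humans
print(optimize([alive]))              # enslave_humans
print(optimize([alive, seem_happy]))  # matrix
```

Note that the happiness constraint only checks *reported* happiness, which is exactly why the Matrix outcome satisfies it: the objective measures the proxy, not the thing you meant.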
Admittedly, watching the USA election cycle (or substitute most other nations lately; most corporate decisions work just as well for this) has made me biased against human decision making :-P. Objectively speaking, Trump proved himself the "better" candidate than Hillary Clinton a few years ago (empirically, I mean: by actually winning), then he lost to Biden, but now there's a real chance that Trump may win again, if Biden keeps forgetting which group he is addressing and thus makes it easy to spin the idea that he is too old to be relevant himself and that a vote for him is really a vote for Kamala Harris. (Remember, facts such as Trump's own age would only be relevant to liberals; conservatives don't base their decisions on such trifling matters, it's all about "gut feelings" and instincts there, so Biden is "old" while Trump is "not", capiche?) Or in corporate politics: Reddit likewise "won" the protests.
Such experiments have been going on constantly for billions of years, and we are what came out of them :-D. Experiments with socioeconomic systems have only gone on for a few thousand years, but it will be interesting to see what survives.