OpenStars
I used to think that. Now I think that even if robots (more properly I mean a true artificial sentience) were to ever replace humanity, then they too could just as easily fall prey to the same effects that plague us, just b/c they abut natural laws encoded into the physics of the universe.
One issue I take with what you are saying is that the value judgements depend on what you are measuring the ideal against. From a "survival of the fittest" (or even "survival of what happened to survive") standpoint, Genghis Khan is one of the most successful people who ever lived, alongside the "mitochondrial Eve" and the "Y-chromosomal Adam" (yes, those are real biological terms, though iirc the two individuals likely lived tens of thousands of years apart, so they were never a couple).
Mathematical game theory shows us that cheaters do prosper, at least at first, before they bring down the entire system around them. Hence there is a "force" that pulls at all of us - even abstract theoretical agents with no instantiation in the real world - to "game the system", and that must be resisted for the good of society overall. But some people (e.g. Putin, Trump, Jeff Bezos) give in to those urges, and instead of lifting themselves up to live in society, drag all of society down to serve them. What Google did to the Android OS is a perfect example of people corrupting that open source framework, twisting and perverting it into almost a mockery of its former self. For now, it is still "free", especially in comparison to the walled garden of its chief competitor, but that freedom is a shadow of what was originally intended, it looks like to me (from the outside).
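The "cheaters prosper, then wreck the system" dynamic can be seen in the standard Prisoner's Dilemma payoffs. A tiny Python sketch (the payoff numbers are the textbook ones, but the framing here is my own toy illustration, not any specific study): defecting beats cooperating against either opponent move, yet a society of defectors earns far less overall than a society of cooperators.

```python
# PAYOFF[(my_move, their_move)] = my score, with "C" = cooperate, "D" = defect.
# Classic ordering: temptation > reward > punishment > sucker (5 > 3 > 1 > 0).
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

# Defection is individually better no matter what the other player does...
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]  # cheat a cooperator: 5 > 3
assert PAYOFF[("D", "D")] > PAYOFF[("C", "D")]  # cheat a cheater: 1 > 0

# ...but over many rounds, a pair of defectors earns far less combined
# than a pair of cooperators - the system they gamed is now worth less.
rounds = 100
all_cooperate = 2 * rounds * PAYOFF[("C", "C")]  # combined score: 600
all_defect = 2 * rounds * PAYOFF[("D", "D")]     # combined score: 200

print(all_cooperate, all_defect)  # 600 200
```

So the individually "rational" pull toward defection is real, and resisting it is exactly the collective-good problem described above.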
So I am giving up on "idealism" and instead trying to be more realistic. I don't know exactly what that means, unfortunately, so I literally cannot explain it better than this: knowing that people will corrupt things, what will my own personal response to that process be? E.g., as George Carlin suggested, should I just recuse myself from voting entirely? Or (living in the USA as I do) have things changed since then - whereas before the two sides were fairly similar, nowadays it is important to vote not for the side of corruption, but against the side of significantly worse destruction, including destruction of the entire system? (Which arguably even needs to be destroyed, except that if it happens in that manner, it is likely to lead to something far, far worse.)
Anyway, yeah, it is far worse than that, and I find it the height of irony that people, who absolutely ~~cannot~~ refuse to take care of ourselves, are now looking to make robots/AI, who we seem to be hoping will do a better job of that than we (won't) do. It is the ultimate "daddy please save me" cry for a superhero/savior - as always, abdicating responsibility to someone else to "like: just fix all the stuff, and junk, ya' know whaddi mean?" And therefore we fear robots (& AI) - as we should, b/c we already know what we (humans) are willing to do to one another, and thus we fear what they (being "other") might do to us as well. I am saying that it is our own corruption that we fear, mirror-reflected/projected onto them.
(he said the quiet part out loud:-P)
B/c Reddit has decided that you needed moar Reddit, so that you can haz Reddits while you are also Redditing.
That, or they are counting on someone, somewhere, to be dumb enough to fall for it.
Watch as people absolutely do. :-(
Watch as he burns through it in one:-).
Also, bold of us to assume that he has not already spent it:-P.
I dunno - personally I think his age is a whole third thing. There's: (a) I am alive and healthy and well and President, there's (b) I am dead now, so control passes over to my VP, while I go sit in my coffin for... forever, and then (c) I am old, and so while I seem healthy enough on most days, I am also unreliable: at any time I could zone out and miss a crucial detail in a meeting, before having to make decisions with literally nuclear consequences.
A vote for Biden would hopefully, in practical terms, translate into a vote to separate out the Commander-In-Chief role (that needs a much more active presence than e.g. a mere speaker behind a podium) from the other aspects of the Presidency, except that is simply not how the Presidency is defined.
And, I should point out, the former at least has some hope of working out the way we might want it to, whereas for Trump there is no hope at all that he would not hold on tightly to the reins of power, and he could literally decide to assassinate someone on a mere whim, again.
Right, I did not explain myself well there - in the past I have gone to great lengths to say all of that, but people still downvoted even the tiniest, most mundane statement - even one delivered in a three-quarters joking manner - that "Biden is old". Also, I may have been condescending after having been challenged by people who did not really want me to answer their questions - and that part would be on me:-).
For more info, check out Jon Stewart's position.
Except that's not what I said - it is a reaction to what people fear that I might have meant. And I understand that, I do, but it is still not what I said, and sometimes (not here though) I have even gone to great lengths to try to painstakingly say that that was explicitly not what I meant.
Btw, please allow me to clarify that I am not saying anything against you personally - you have been nothing but polite and helpful here.
A straw man argument, sometimes called a straw person argument or spelled strawman argument, is the logical fallacy of distorting an opposing position into an extreme version of itself and then arguing against that extreme version.
I also have fears. And they go beyond Biden v. Trump in the next election. I fear that we are losing the ability to even so much as talk in a halfway civilized manner about political matters. Not that downvotes are themselves the same as impoliteness...:-)
But my main point here isn't about the downvotes, it is that people are doing the latter b/c of being driven by their extreme fear of what is to come. Which ironically seems to be the one thing that is shared in common amongst all Americans right now - the only real difference for most people being which set of facts you choose to believe in.
Algorithms, great idea, horrible in practice.
Tbf, it is not the computer's fault - someone made it do that, and that same someone is the type to call a landline phone just as you sit down to family dinner (Leave It To Beaver style - at least I assume they did that in that show:-), or to literally knock on your literal door and try to sell you a vacuum cleaner or whatever - i.e. it is pure human greed, and the algorithm is just their latest tool in the toolbox to accomplish that.
Anyway, algorithms can be used for good too, if we wanted them to be. Asimov, for instance, proposed three laws of robotics, foremost among them that a robot may do no harm to a human - which is itself an interesting proposition, b/c how else would a robot surgeon operate if it couldn't cut into a patient? Or what if a robot absolutely refuses to allow humans to commit suicide, or even to die in any way, despite having lived for thousands (millions?) of years already? (It would become pure torture at some point!) To do a good or evil act, something needs to have "agency", but right now algorithms are purely tools to reach some externally defined end.
The SEC got its funding slashed by Trump - are they like the IRS now where they don't have the resources to truly do the job anymore?
One thing that trips me up is that even if, at best, someone SUCCEEDS in developing such an AI, even one that can essentially replace humanity (in whatever roles), what then would become of us afterwards? Wall-E paints a poignant and, to me at least, extremely realistic portrait of what we would do in that eventuality: sit down and never even so much as bother to stand up again. With all of our needs and every whim catered to by a slave force, what use would there even be in doing so?
Star Trek was only one possible future, but how many would have the force of will or mind, and then be backed up by enough someones capable of enacting such a future, much less building it up from scratch? Also, it is best to keep in mind how that society was (1) brought back from an extinction-level event, which well-nigh destroyed the Earth (i.e., if it had been a tad bit more powerful it would have, thus it was by an extremely narrow margin that they escaped oblivion to begin with), followed by (2) meeting up with external beings who caused humanity to collect itself to face this new external pressure, i.e. they were "saved" by the aliens' presence. Even though they managed to collect themselves and become worthy of it in the end, at the time it happened it was by no means assured that they would survive.
Star Wars, minus the Jedi, seems a much more likely future to my fatalism-tainted mind: one where people are literally slaves to the large, fat, greedy entities who hoard power just b/c they can. Fighting against that takes real effort, which we seem unwilling to expend. Case in point: the only other option to Trump is... Biden, really!? Who has actually managed to impress me, doing far more than I had expected - though only b/c my expectations were set so low to begin with:-).
Some short stories if you are interested:
One is that I was a Reddit mod, for a small niche gaming sub. I stepped down. I guided the sub at a time when literally nobody else was willing to step up, and as soon as some people did, I stepped back, mostly just training them, and then when one more agreed I stepped out entirely. Perhaps it corrupted me, but apparently not too much - maybe b/c it was not "much" power?
Two, I cannot find the article right now b/c of the enshittification of Google, but there are some fascinating studies showing that AIs do all sorts of crazy things - which, on closer inspection, often turn out to be logical/rational behavior rather than crazy at all. One described a maze-running experiment where, once the "step cost" got high enough, the agent learned to take higher & higher risks just to exit the maze ASAP - even if that meant finding the "bad"/"hell" exit rather than the "good"/"heaven" one. Like if good=+100 points, bad=-100 points, and the step cost is -10 points, with the goal being to maximize your score, then every 10 extra steps costs as much as the bad-exit penalty itself. So if you took 30 steps to find the good exit, that is -300+100=-200 points, whereas if you took only 5 steps to find the bad exit, that is -50-100=-150, which is higher overall than the good exit. Suicide makes sense, when living is pain and your goal is to minimize that, for someone who has nothing else to live for. I.e., some things seem crazy only when we do not fully understand them.
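The arithmetic above can be checked in a few lines of Python (using the same toy numbers from my example, not the actual study's setup):

```python
def episode_return(steps, exit_reward, step_cost=-10):
    """Total score for one maze run: per-step cost accrued over `steps`,
    plus the reward (or penalty) for the exit taken."""
    return steps * step_cost + exit_reward

# 30 steps to the "good"/"heaven" exit vs. 5 steps to the "bad"/"hell" exit
good = episode_return(steps=30, exit_reward=+100)  # 30*-10 + 100 = -200
bad = episode_return(steps=5, exit_reward=-100)    #  5*-10 - 100 = -150

print(good, bad)  # -200 -150
# A score-maximizing agent rationally prefers the nearby "bad" exit here:
assert bad > good
```

Once the step cost is steep enough, the "crazy" dash for the hell exit is just the optimal policy.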
Three, this video messed me up, seriously. It is entirely SFW, I just mean that the thoughts that it espoused blew me away and I still have no idea how to integrate them into my own personal philosophy, or even whether I should... but the one thing I know for sure is that after watching it, I will never think the same way again.:-)