Lugh

joined 1 year ago
[–] Lugh 7 points 7 months ago* (last edited 7 months ago) (2 children)

I really enjoy Liu Cixin's 'The Three-Body Problem', but like a lot of sci-fi, I think it fails as a good description of a likely future. That's because it's structured for good dramatic storytelling. It has 'special' heroes, born with unique destinies, who are on hero's journeys, and those journeys are full of constantly escalating drama and conflict. Great Screenwriting 101, but a terrible model of actual reality.

If simple microbial life is common in the Universe, with current efforts, we will likely find it in the 2030s. Real 'first contact' will be nothing like the movies.

[–] Lugh 10 points 7 months ago

I'm fascinated by the dynamic that is going on at the moment with the AI investor hype bubble. Billions are being poured into companies in the hope of finding the next Big Tech giant; meanwhile, none of the business logic that would support those bets is panning out at all.

At every turn, free open-source AI is snapping at the heels of Big Tech's offerings. I wonder if further down the road this decentralization of AI's power will have big implications and we just can't see them yet.

[–] Lugh 2 points 7 months ago* (last edited 7 months ago)

No one seems much nearer to fixing LLMs' problems with hallucinations and errors. A recent DeepMind attempt to tackle the problem, called SAFE, merely gets the AI to check facts more carefully against external sources. No one seems to have any solution to the problem of giving AI logic and reasoning abilities. Even if Microsoft builds its $100 billion Stargate LLM-AI, will it be of much use without them?

The likelihood is AGI will come via a different route.

So many people are building robots that the idea these researchers talk about - embodied cognition - will be widely tested. But it may be just as likely that the path to AGI is something else, as yet undiscovered.

[–] Lugh 10 points 8 months ago (2 children)

There was a moment about 12 months ago when people thought OpenAI was going to be one of the big tech giants of the 21st century. In reality, it's barely ahead of anyone, and there are numerous free open-source models snapping at its heels. Some people think big tech will still have the edge because they can afford to scale their AI by having the money for data centers. Perhaps, but I wonder if the biggest effect in the long run is the wide dispersal of AI across many places around the globe and not the power data centers give to one or two players.

[–] Lugh 10 points 8 months ago (2 children)

This is an interesting idea with a lot of merit. It centers around the idea of panspermia. That is, rocks and dust ejected from planets in asteroid impacts could carry the spores of simple life to other planets, perhaps even in other solar systems.

These life forms may not have the biochemistry we expect. We think of oxygen atmospheres as a reliable biosignature, but there may be thriving life that doesn't produce or need that.

But they would have "something" in common, and it would cluster around the orbits and dispersal patterns of the ejecta that colonized these different worlds via panspermia. What this proposal suggests is looking for clusters of exoplanets that share common characteristics matching these dispersal patterns, and zeroing in on them for closer examination.

[–] Lugh 5 points 8 months ago* (last edited 8 months ago)

Here's a link to the report - they only mention UBI once, and then to say it is the wrong strategy.

[–] Lugh 7 points 8 months ago (3 children)

Although we can't know what the economic system will be like after the day arrives when AI & robots can do all work (even future, as-yet-uninvented jobs) while always being cheaper than us, it's safe to make some guesses.

This is a policy document from left-leaning progressive economists, and it only mentions UBI to say it is not the right strategy. Instead, they recommend the government take control of creating new jobs. Some people assume UBI will happen, but I wonder if the outlook in this report is more likely.

I'd guess the first response to this issue will be a compromise between right and left. Those on the right will want to prevent a collapse of the financial system, preserve the rich's wealth, and maintain at least a pretense of free-market economics.

We'll get to a point (probably a systemic financial crisis like 2008) where the right will be forced to do something to protect the wealthy. Which will they agree to first - UBI or government-created jobs?

I think there's a strong possibility it will be the second option, and that enough left-wing politicians will be happy with it for UBI to get pushed into the background.

[–] Lugh 9 points 8 months ago* (last edited 8 months ago) (1 children)

These tests confirmed, it is claimed, that key technological hurdles have been overcome to allow the reactor to be sent to space.

Lockheed Martin in the US is also working on similar tech.

Interestingly, they refer to this as 'expandable' to the size of a 20-storey building, yet capable of being launched on a rocket. Presumably, most of it will be scaffolding or a lattice-type structure for the heat-sink elements.

If the Chinese or Lockheed Martin researchers pull this off, it's bye-bye to the idea of SpaceX's Starship for Earth-Mars travel.

Considering how long nuclear fission reactors have been powering submarines and large ships (since the 1950s), it's strange it's taken them this long to get to space, where they have such obvious advantages over chemical rockets. There's no indication of when this Chinese reactor will be tested in space, though.

[–] Lugh 5 points 8 months ago (5 children)

This article is remarkable for two reasons. First, three of the economists it interviews work for an MIT-affiliated body called The Institute for the Future of Work. It is their job to be global leaders in thought on this issue. They are the best of the best the academic discipline of economics has to offer. Second, this article puts the central question to them unequivocally: "What happens after AI/robots are capable of doing all work (even new, as-yet-uninvented jobs), but are much cheaper to employ than humans?"

The central argument they're replying to is that an economic concept called comparative advantage means jobs are safe from AI. The argument goes that, as computers will remain a scarce resource, super-intelligent AI won't want to waste resources on doing "lesser" work, and will leave that to less capable humans. It doesn't seem to have occurred to the economist proposing this idea that computing won't remain a scarce resource, as it's always getting cheaper and more powerful.
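The comparative-advantage argument, and why cheap compute undermines it, can be shown with a toy back-of-the-envelope model. All the productivity and cost numbers below are made up for illustration; nothing here comes from the article itself.

```python
# Toy comparative-advantage sketch (all numbers are illustrative assumptions).
# The AI out-produces humans at both "complex" and "simple" work.
ai_output = {"complex": 10.0, "simple": 8.0}    # units per hour of compute
human_output = {"complex": 1.0, "simple": 4.0}  # units per hour of labour

def opportunity_cost(output):
    # Cost of one unit of simple work, measured in complex work forgone.
    return output["complex"] / output["simple"]

# While compute is scarce, the AI's opportunity cost of simple work (1.25 units
# of complex work) exceeds the human's (0.25), so trade theory says the AI
# specializes in complex work and leaves simple work to humans.
assert opportunity_cost(ai_output) > opportunity_cost(human_output)

# But comparative advantage only says *what* each side does, not what it's paid.
# If an hour of compute costs almost nothing, a human's wage for simple work is
# capped near what it would cost the AI to just do that work itself.
compute_cost_per_hour = 0.01  # assumed, and always falling
ai_cost_per_simple_unit = compute_cost_per_hour / ai_output["simple"]
max_human_wage = ai_cost_per_simple_unit * human_output["simple"]  # per hour
print(f"human wage ceiling: ${max_human_wage:.4f}/hour")
# → human wage ceiling: $0.0050/hour
```

The division of labour survives, but the wage it supports collapses toward the AI's own marginal cost - which is the commenter's point about cheapness, not capability, being the crux.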

One, David Autor, says AI won't ever be better than humans; instead, it will give all humans new skills, so that everyone will have the economic advantages the highly educated now have. Another, Ethan Mollick, is at least honest in admitting economists don't know what the future will be like when AI can do all work. One of the IFTFW economists, Pascual Restrepo, agrees with the idea of comparative advantage. He says AI will create vast wealth, and that even the crumbs its owners hand to humans - for the lesser work AI isn't interested in - will make us all richer than we are today.

So, in summary: one economist who isn't aware computers are getting cheaper; another who doesn't think AI will get better; another who doesn't know what will happen (the most honest of the bunch); and another who's cheerful about the prospect of our future economy being a new feudalism, where AI's owners benignly let the peasants (everyone else) subsist on the leftover crumbs they don't want.

[–] Lugh 4 points 8 months ago* (last edited 8 months ago)

We know it's technically possible; the question is whether it's economically worthwhile. The cheaper space launch from Earth becomes, the further out that date gets pushed. Reusable rockets and railgun launches could make launches much cheaper. On the other hand, when it comes to building large-scale structures in space, asteroid mining makes sense - but how far away is that day?

Precious metals like gold and platinum would be easier to get hold of there. But if you flood the market back on Earth with them, their value starts to fall. Gold is only valuable now because it's rare.

[–] Lugh 189 points 8 months ago (32 children)

Good news for pigs. I'll be delighted to see factory farming disappear and be replaced by tech like this.

[–] Lugh 11 points 8 months ago (19 children)

It amazes me that, famous as the concept of the tech singularity has become, its implications so rarely enter most people's thinking. When most people talk about the future, they do it without any regard for them. Even more amazingly, when it comes to academics and intellectuals paid to think about the future, almost none of them do either. I've yet to see an economist who seems to know about the concept. When economists make predictions about the effect of technology on our economic future, they are far more likely to reference trends from the early 20th, or even the 19th, century.

I suspect all the problems and opportunities the tech singularity will create won't be dealt with in advance in a planned, orderly fashion. Rather, it will be like March 2020 with Covid: suddenly we'll be scrambling for emergency responses to a brand-new reality.
