Lugh

joined 2 years ago
Courtesy of Reddit user /u/TheBlueRefinery29

An interesting experimental logical AI with promising implications for AI Safety.

They claim to have created a language that enables developers to build software and AI that can reason over its own future versions.

Original post: https://x.com/TauLogicAI/status/1841813606154793354

Abstract Summarizing their process and the language tech: https://tau.net/Logical-AI-Software-Specification-Reasoning-GSSOTC.pdf

Full paper: https://tau.net/Theories-and-Applications-of-Boolean-Algebras-0.25.pdf

The full paper is super long and goes over my head; the abstract is much easier to digest.

[–] Lugh 33 points 1 year ago (21 children)

Added to this finding, there's perhaps a greater reason to think LLMs will never deliver AGI: they lack independent reasoning. Some supporters of LLMs said reasoning might arrive via "emergent behavior". It hasn't.

People are looking to get to AGI in other ways. A startup called Symbolica says a whole new approach to AI, based on category theory, might be what leads to AGI. Another is "objective-driven AI", which is built to fulfill specific goals set by humans in 3D space. By the time a child is four years old, they have processed 50 times more training data than the largest LLM, simply by existing and learning in the 3D world.

[–] Lugh 8 points 1 year ago

This is based on findings from a pilot study that looked at logistics from the Port of Los Angeles to wider Southern California.

It's a reminder that the barriers to switching to 100% renewable energy aren't technological but ultimately political. We're choosing the speed at which we end fossil fuel use; if we chose to do it faster, we could.

[–] Lugh 10 points 1 year ago (3 children)

This sounds like marketing hype. Giving AI reasoning is a problem researchers have been failing to solve since Marvin Minsky in the 1960s, and there is still no fundamental breakthrough on the horizon. Even DeepMind's latest effort is tame; it just suggests getting AI to check itself more accurately against external sources.

[–] Lugh 6 points 1 year ago (1 children)

World oil demand still hasn't peaked. Almost 80% of the growth in demand is coming from China. However, China is also leading the world in the transition to EVs: 35% of new car sales there are now EVs. We know "peak oil" will come soon; will it be 2024?

[–] Lugh 14 points 1 year ago (1 children)

There are so many counter-narratives in the media about the energy transition that its true progress can take you by surprise. Getting rid of one-third of fossil fuel capacity in only two years is impressive.

I hope these 2035 goals are achievable. One in four new car sales in the EU are now EVs. That transition might be quicker than some expected. I hope the renewable energy needed to power all those cars is being factored into plans.

[–] Lugh 1 points 1 year ago* (last edited 1 year ago)

Figure says they are building the world's first commercially viable autonomous humanoid robot, but I wonder if UBTech will get there before them. In most Western countries we've allowed our manufacturing capacity to be hollowed out; China has formidable advantages when it comes to building and deploying these robots in their millions.

Figure's and UBTech's robots look like they are already capable of useful work. Based on these demos, they could do a wide variety of simple unskilled work: stacking supermarket shelves, cleaning, warehouse work, etc.

I wonder how soon people will be able to buy one of these.

[–] Lugh 7 points 1 year ago* (last edited 1 year ago) (2 children)

I really enjoy Liu Cixin's 'The Three-Body Problem', but like a lot of sci-fi, I think it fails as a good description of a likely future. That's because it's structured for good dramatic storytelling. It has 'special' heroes born with unique destinies who are on hero's journeys, and those journeys are full of constantly escalating drama and conflict. Great Screenwriting 101, but a terrible model of actual reality.

If simple microbial life is common in the Universe, with current efforts, we will likely find it in the 2030s. Real 'first contact' will be nothing like the movies.

[–] Lugh 10 points 1 year ago

I'm fascinated by the dynamic going on at the moment with the AI investor hype bubble. Billions are being poured into companies in the hope of finding the next Big Tech giant; meanwhile, none of the business logic that would support this is panning out at all.

At every turn, free open-source AI is snapping at the heels of Big Tech's offerings. I wonder if further down the road this decentralization of AI's power will have big implications and we just can't see them yet.

[–] Lugh 2 points 1 year ago* (last edited 1 year ago)

No one seems much nearer to fixing LLMs' problems with hallucinations and errors. A recent DeepMind attempt to tackle the problem, called SAFE, merely gets AI to check facts more carefully against external sources. No one seems to have any solution to the problem of giving AI logic and reasoning abilities. Even if Microsoft builds its $100 billion Stargate LLM-AI, will it be of much use without this?

The likelihood is AGI will come via a different route.

So many people are building robots that the idea these researchers talk about, embodied cognition, will be widely tested. But it may be just as likely that the path to AGI is something else, as yet undiscovered.

[–] Lugh 10 points 1 year ago (2 children)

There was a moment about 12 months ago when people thought OpenAI was going to be one of the big tech giants of the 21st century. In reality, it's barely ahead of anyone, and there are numerous free open-source models snapping at its heels. Some people think big tech will still have the edge because they can afford to scale their AI by having the money for data centers. Perhaps, but I wonder if the biggest effect in the long run is the wide dispersal of AI across many places around the globe and not the power data centers give to one or two players.

[–] Lugh 10 points 1 year ago (2 children)

This is an interesting idea with a lot of merit. It centers around the idea of panspermia. That is, rocks and dust ejected from planets in asteroid impacts could carry the spores of simple life to other planets, perhaps even in other solar systems.

These life forms may not have the biochemistry we expect. We think of oxygen atmospheres as a reliable biosignature, but there may be thriving life that doesn't produce or need that.

But they would have "something" in common, and it would be clustered around the orbits and dispersal of the ejecta that colonized these different worlds via panspermia. This proposal suggests looking for clusters of exoplanets with common characteristics matching these dispersal patterns, and zeroing in on them for closer examination.

[–] Lugh 5 points 1 year ago* (last edited 1 year ago)

Here's a link to the report. They mention UBI only once, and then only to say it is the wrong strategy.
