CATL are the world's largest battery maker, with a 37% global market share, and the technology leaders in this sector. You would expect them to be making the most advanced tech breakthroughs.
I wonder what the reasoning is behind this. If I were to guess, it's that their approach has been overtaken by events. All around the world, it has been superseded by a different one: relatively simple robot bodies made powerful by today's AI. Continuing with their approach might have been a classic case of the sunk cost fallacy; it may be better to abandon it and join everyone else's new way of doing things.
There's a lesson here for today's tech leaders: as technological development accelerates, you can go from industry leader to has-been really fast. Though in fairness to Boston Dynamics, calling them has-beens isn't justified; they are still doing excellent work on their non-humanoid Spot and Stretch robots.
Some of the other humanoid robots in development around the world.
Added to this finding, there's perhaps an even greater reason to think LLMs will never deliver AGI: they lack independent reasoning. Some supporters of LLMs said reasoning might arrive as "emergent behavior". It hasn't.
People are looking to reach AGI in other ways. A startup called Symbolica says an approach to AI based on category theory, a branch of mathematics, might be what leads to AGI. Another candidate is "objective-driven AI", which is built to fulfill specific goals set by humans in 3D space. By the age of four, a child has processed 50 times more training data than the largest LLM, simply by existing and learning in the 3D world.
This is based on findings from a pilot study that looked at logistics from the Port of Los Angeles to wider Southern California.
It's a reminder that the barriers to switching to 100% renewable energy aren't technological but political. We're choosing the current pace at which we end fossil fuel use; if we chose to phase fossil fuels out faster, we could.
This sounds like marketing hype. Giving AI reasoning is a problem researchers have been failing to solve since Marvin Minsky in the 1960s, and there is still no fundamental breakthrough on the horizon. Even DeepMind's latest effort is tame; it just suggests getting AI to check itself more accurately against external sources.
World oil demand still hasn't peaked, and almost 80% of the growth in demand is coming from China. However, China is also leading the world in the transition to EVs: 35% of new car sales there are now electric. We know "peak oil" demand will arrive soon; will it be 2024?
There are so many counter-narratives in the media about the energy transition that its true progress sometimes takes you by surprise. Getting rid of one-third of fossil fuel capacity in only two years is impressive.
I hope these 2035 goals are achievable. One in four new car sales in the EU are now EVs. That transition might be quicker than some expected. I hope the renewable energy needed to power all those cars is being factored into plans.
Figure says they are building the world's first commercially viable autonomous humanoid robot, but I wonder if UBTech will get there before them. In most Western countries we've allowed our manufacturing capacity to be hollowed out; China has formidable advantages when it comes to building and deploying these robots in their millions.
Figure's and UBTech's robots look like they are already capable of useful work. Based on these demos, they could handle a wide variety of simple unskilled tasks: stacking supermarket shelves, cleaning, warehouse work, and so on.
I wonder how soon people will be able to buy one of these.
I really enjoy Liu Cixin's 'The Three Body Problem', but like a lot of sci-fi, I think it fails as a good description of a likely future. That's because it's structured for good dramatic storytelling. It has 'special' heroes born with unique destinies, sent on hero's journeys full of constantly escalating drama and conflict. Great for Screenwriting 101, but a terrible model of actual reality.
If simple microbial life is common in the Universe, with current efforts, we will likely find it in the 2030s. Real 'first contact' will be nothing like the movies.
I'm fascinated by the dynamic going on at the moment with the AI investor hype bubble. Billions are being poured into companies in the hope of finding the next Big Tech giant; meanwhile, none of the business logic that would support this is panning out at all.
At every turn, free open-source AI is snapping at the heels of Big Tech's offerings. I wonder if further down the road this decentralization of AI's power will have big implications and we just can't see them yet.
No one seems much nearer to fixing LLMs' problems with hallucinations and errors. A recent DeepMind attempt to tackle the problem, called SAFE, merely gets the AI to check facts more carefully against external sources. And no one seems to have any solution to the problem of giving AI logic and reasoning abilities. Even if Microsoft builds its $100 billion Stargate LLM-AI, will it be of much use without them?
The likelihood is AGI will come via a different route.
So many people are building robots that the idea these researchers talk about - embodied cognition - will be widely tested. But it may be just as likely that the path to AGI is something else, as yet undiscovered.
Any time I hear claims that involve hitherto unknown laws of physics, I'm 99.99% sure I'm dealing with BS - but then again, some day someone will probably genuinely pull off such a discovery.