Lugh

joined 2 years ago

One of the distortions of AI commentary is that so much of its focus is on venture capital. Because many people are incentivized to talk about where the big money is flowing, they ignore what happens outside their bubble. Meanwhile, the really significant things often happen elsewhere.

With AI, that 'really significant' thing is that free open-source AI is the global future, far more so than VC darlings like OpenAI. Not that the people pouring hundreds of billions of dollars into the likes of OpenAI are likely to admit that.

There were more signs of this just this week. Yet again, free open-source AI (in this case the Qwen3 family from Alibaba) is not only equalling the best of the investor-funded AI, it is bettering it on some metrics.

The VCs' thinking is that one of their bets will make it big and generate trillions in revenue, but that seems hard to believe when people all over the world can pick up what you're trying to sell for free.

 

Waymo's peer-reviewed study in Traffic Injury Prevention (PDF, 58 pages) found its self-driving cars safely drove 56.7 million miles across four U.S. cities without a human safety driver, with an 80-90% reduction across different types of accidents.

56.7 million miles is a tiny fraction of overall US miles driven, only about 0.002%. Current self-driving AI wouldn't be as good on all road types and in all conditions, but it will get there; the only question is when. When it does, that 80-90% reduction in accidents would mean around 34,000 lives saved in the US, and hundreds of thousands globally, every single year.
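To make the back-of-envelope arithmetic explicit, here is a minimal sketch. The figures for total US annual vehicle miles (roughly 3.2 trillion) and annual US road deaths (roughly 40,000) are rough assumptions on my part, not numbers taken from the Waymo paper.

```python
# Rough back-of-envelope check. The US-wide figures below are assumed
# round numbers, not values from the Waymo study.
waymo_miles = 56.7e6        # driverless miles reported in the study
us_annual_miles = 3.2e12    # assumed annual US vehicle miles travelled
us_annual_deaths = 40_000   # assumed annual US road traffic deaths
reduction = 0.85            # midpoint of the 80-90% accident reduction

share = waymo_miles / us_annual_miles
lives_saved = us_annual_deaths * reduction

print(f"Waymo miles as a share of annual US miles: {share:.4%}")    # ~0.002%
print(f"Lives saved per year at ~85% reduction: {lives_saved:,.0f}")  # ~34,000
```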

The day is going to come when the public conversation turns to banning human driving, just as it did with driving without seatbelts and indoor smoking before it. I have a suspicion the same people who told us losing a few hundred thousand lives to 'herd immunity' was acceptable will be telling us that those 34,000 dead a year are a price worth paying, so they don't have to change anything about their lives or routines.

[–] Lugh 7 points 9 months ago (3 children)

These brain-computer interfaces are usually discussed in the context of disabled and paralyzed people, but I wonder what they could do for regular people as well. It's interesting here to see how quickly the brain adapts to brand-new sensory information from the computer interface; it makes you wonder what new ways of interacting with computers we haven't thought of yet.

[–] Lugh 3 points 9 months ago* (last edited 9 months ago)

Pony.ai will be operating robotaxis at the Hong Kong International Airport as shuttles for airport employees

Airport trips seem like perfect territory for Level 4 self-driving vehicles. Many of the journeys to and from airports are between well-established pickup and drop-off points.

[–] Lugh 16 points 9 months ago (2 children)

It wasn't so long ago that when people tried to refute the argument that AI and robotics automation would lead to human workers being replaced, they'd say: don't worry, the displaced humans can just learn to code. There will always be jobs there, right?

[–] Lugh 8 points 9 months ago* (last edited 9 months ago)

The fundamental problem is this: we tend to think about democracy as a phenomenon that depends on the knowledge and capacities of individual citizens, even though, like markets and bureaucracies, it is a profoundly collective enterprise. ... Making individuals better at thinking and seeing the blind spots in their own individual reasoning will only go so far. What we need are better collective means of thinking.

I think there is a lot of validity to this way of looking at things. We need new types of institutions to deal with the 21st-century information world. When it comes to politics and information, many of our ideas and models for organizing and thinking about things come from the 18th and 19th centuries.

[–] Lugh 5 points 9 months ago* (last edited 9 months ago)

OpenAI is on a treadmill. It has billions of investor dollars pouring into it and needs to show results. Meanwhile, open-source AI is snapping at its heels in every direction. If it's true that it is holding back on AI agents out of caution, I'm pretty sure that won't last long.

[–] Lugh 1 points 9 months ago

Interesting to see that the G1 is still aimed at developers and is not for mass-market consumers. I wonder how long it will be before there is a layer of AI software on top of what it currently offers that lets it be sold more widely.

[–] Lugh 6 points 9 months ago

Thanks, we'll keep track of what they are doing.

[–] Lugh 2 points 9 months ago (1 children)

I misphrased; they're an Admin/Op, and essential.

[–] Lugh 8 points 9 months ago* (last edited 9 months ago) (1 children)

would it be enough to have those rules in place, and when reported actively remove the content as a mod?

We're pretty good about moderating content daily on futurology.today, so I'd be confident we could cover that aspect.

However, I'm wondering about federation issues. Are we liable for UK users who use their futurology.today account to access other instances we don't mod?

[–] Lugh 16 points 9 months ago* (last edited 9 months ago) (3 children)

the problem is that the guidance is too large and overbearing.

This.

Who gets to decide what "self-harm" is? There'll be some busybodies who'll say that any remotely positive messaging for LGBTQ youth is 'self-harm' for them.

[–] Lugh 4 points 9 months ago* (last edited 9 months ago)

It's interesting how this movement had its roots in left-wing thought, but has now been thoroughly co-opted by libertarian right-wing types. At its inception it was about tearing down society to start again, hopefully leading to something more equal afterwards.

There's still a lot of that radicalism about tearing down current society and restarting it, but I don't think most of the people who identify this way now really care very much about equality.
