Zuckerberg hailed AI ‘superintelligence’. Then his smart glasses failed on stage | Matthew Cantor
(www.theguardian.com)
The last 5% aren't a nice bonus. They are everything. A 95% self-driving car won't do. Giving me random hallucinations when I try to look up important information won't do either, even if it only happens 1 out of 20 times. That one time could really screw me, so I can't trust it.
Currently AI companies have no idea how to get there, yet they sell the promise of it. Next year, bro. Just one more datacenter, bro.
People tell me the hallucinations aren't a big deal because people should fact check everything.
My friend told me that one of her former colleagues, wicked smart dude, was talking to her about space. Then he went off about how there were pyramids on Mars. She was like, "oh ... I'm quite caught up on this stuff and I haven't heard of this. Where can I find this info?" The guy apparently has been having super long chats with whatever LLM and thinks they're now diving into the "truth".
Sounds like this idiot:
https://www.youtube.com/watch?v=TMoz3gSXBcY
Worse, since generating a whole bunch of potentially correct text is basically effortless now, you've got a new batch of idiots just "contributing" to discussions by leaving a regurgitated wall of text they possibly didn't even read themselves.
So not only are they not fact checking, but when you point out that you didn't ask for an LLM's opinion, they're like "what's the problem? Is any of this wrong?" Because it's entirely your job to check something they copy-pasted in 5 seconds.
So many posts on social media are obviously AI generated, and it immediately makes me disregard them, but I'm worried about later stages when people make an effort to mask it. Prompt it to generate text without giveaways like dashes. Have intentional mistakes or a general lack of proper structure and punctuation in there, and it will be incredibly hard to tell.
99% won't do when the consequences of that last 1% are severe.
There's more than one book on the subject, but all the cool kids were waving around their copies of The Black Swan at the end of 2008.
Seems like all the lessons we were supposed to learn about stacking risk behind financial abstractions and allowing business to self-regulate in the name of efficiency have been washed away, like tears in the rain.
As an example, your whole post is great but I can't help but notice the one tiny typo that is like 1% of the letters. Heck, a lot of people probably didn't even notice just like they don't notice when AI returns the wrong results.
A multi-billion-dollar technical system should be far better than someone posting to the fediverse in their spare time, but it is far worse. Especially since those kinds of tiny errors will be fed back into future AI training, and LLM design is not and never will be self-correcting, because it works with the data it has, and it needs so much of it that it will always include scraped stuff.
It should, but it can't. OpenAI just admitted this in a recent paper. The hallucinations are baked in. Chaos is baked into the technology.
I get to ride in lots of different cars as part of my job, and some of the new ones display the current speed limit on the dash. It is incorrect quite regularly. My view is if you can't trust it 100% of the time, you can't trust it at all and you might as well turn it off. I feel the same about AI.
The ADAS in new cars varies so much in implementation. None of it can be trusted (like you said, the sign recognition is often wrong), but as a backup reminder it can be great, eg lane centring. If it feels like it's seizing control from me it can be terrifying, eg automatic braking out of the blue.
A couple of examples from just this last week: I was on a multi-lane road with a posted 60km/h speed limit, and the car was trying to tell the driver it was 40, and beeped at them whenever they went over it. Another one complained about crossing the centreline marking because we were going around parked cars and there was no choice. Thankfully the car didn't seize control in those situations and just gave an audible warning, but if it had, we'd have been in the poo, especially in that second one.
20 is also the number of times you go to work per month.
Now imagine crashing your car once every month...