this post was submitted on 30 Oct 2024
638 points (89.0% liked)

Technology


OK, it's just a deer, but the future is clear. These things are going to start killing people left and right.

How many kids is Elon going to kill before we shut him down? What's the number of children we're going to allow Elon to murder every year?

[–] bluGill@fedia.io 39 points 3 weeks ago (7 children)

Driving is full of edge cases. Humans are also bad drivers who get edge cases wrong all the time.

The real question isn't whether Tesla is better or worse than anyone in particular, but how Tesla compares overall. If a Tesla is better in some situations and worse in others, and so overall just as bad as a human, I can accept it. If Tesla is overall worse, then they shouldn't be driving at all (if they can identify those situations, they can stop and make a human take over). If a Tesla is overall better, then I'll accept a few edge cases where they are worse.
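[Editor's note: the accept/reject rule described in this comment can be sketched as a tiny per-mile rate comparison. All numbers here are invented for illustration; neither the baseline nor the thresholds come from Tesla or any regulator.]

```python
# A sketch of the comparison rule above: accept the system if its overall
# crash rate (per mile) is no worse than the human baseline, even if it
# loses on some edge cases. All rates are hypothetical.

HUMAN_CRASHES_PER_100M_MILES = 310.0  # hypothetical human baseline

def acceptable(system_crashes, system_miles):
    """True if the system's overall crash rate is at or below the human rate."""
    rate = system_crashes / (system_miles / 1e8)
    return rate <= HUMAN_CRASHES_PER_100M_MILES

print(acceptable(250, 1e8))  # True: 250 per 100M miles beats the baseline
print(acceptable(400, 1e8))  # False: worse overall, shouldn't be driving
```

The catch, as the rest of the thread points out, is that this only works if the per-mile data is actually published.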

Tesla claims overall they are better, but they may not be telling the truth. One would think regulators have data for the above - but they are not talking about it.

[–] spankmonkey@lemmy.world 23 points 3 weeks ago (2 children)

Tesla claims overall they are better, but they may not be telling the truth. One would think regulators have data for the above - but they are not talking about it.

https://www.reuters.com/business/autos-transportation/nhtsa-opens-probe-into-24-mln-tesla-vehicles-over-full-self-driving-collisions-2024-10-18/

The agency is asking if other similar FSD crashes have occurred in reduced roadway visibility conditions, and if Tesla has updated or modified the FSD system in a way that may affect it in such conditions.

Between this and being threatened with fines last year for not providing the data, it sure seems like they aren't being very forthcoming. That makes me suspect they still aren't telling the truth.

[–] billiam0202@lemmy.world 13 points 3 weeks ago (1 children)

Between this and being threatened with fines last year for not providing the data, it sure seems like they aren't being very forthcoming. That makes me suspect they still aren't telling the truth.

I think their silence is very telling, just like their alleged crash test data on Cybertrucks. If your vehicles are that safe, why wouldn't you be shoving that into every single selling point you have? Why wouldn't that fact be plastered across every Gigafactory and blaring from every Tesla that drives past on the road? If Tesla's FSD is that good, and Cybertrucks are that safe, why are they hiding those facts?

[–] spankmonkey@lemmy.world 5 points 3 weeks ago

If the Cybertruck is so safe in crashes, they would be begging third parties to test it so they could smugly lord their third-party-verified crash test data over everyone else.

But they don't, because they know it would be a repeat of smashing the bulletproof window on stage.

[–] atempuser23@lemmy.world 6 points 3 weeks ago

One trick used is to disengage Autopilot when it senses an imminent crash. This would vastly lower the crash count, shifting all blame to the human driver.
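[Editor's note: regulators are aware of this loophole. NHTSA's Standing General Order requires reporting crashes where the automation was engaged within 30 seconds of impact, not just at the moment of impact. A toy illustration with invented crash events:]

```python
# Toy illustration (made-up events) of crash attribution rules. Counting
# only crashes where the system was engaged *at* impact misses cases where
# it disengaged moments earlier; counting any crash where it was engaged
# within a window before impact closes that gap.

# Each crash: seconds between disengagement and impact
# (None = still engaged at impact).
crashes = [None, 0.8, 1.5, None, 12.0, 45.0]

def engaged_at_impact(c):
    return c is None

def engaged_within(c, window_s):
    return c is None or c <= window_s

naive = sum(engaged_at_impact(c) for c in crashes)
windowed = sum(engaged_within(c, 30.0) for c in crashes)  # 30 s window

print(naive)     # 2: only crashes with the system still engaged
print(windowed)  # 5: includes last-moment disengagements
```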

[–] ano_ba_to@sopuli.xyz 10 points 3 weeks ago

Being safer than humans is a decent starting point, but safety should be maximized to the best of a machine's capability, even if it means adding a sensor or two. Leaving screws loose on a Boeing airplane still makes the plane safer than driving, so by that logic Boeing should not be made to take responsibility.

[–] atempuser23@lemmy.world 6 points 3 weeks ago

Yes. The question is whether the Tesla is better than anyone in particular. People are given the benefit of the doubt once they pass the driver's test. Companies and AI should not get that. The AI needs to be as good as or better than a GOOD human driver. There is no valid justification for allowing a poorly driving AI just because it's better than the average human. If we are going to allow these on the road, they need to be good.

The video above is HORRID. The weather was clear, there was no opposing traffic, and the deer was standing still. The auto drive absolutely failed.

If a human driving in these conditions plowed through a deer at 60 mph and didn't even attempt to swerve or stop, they shouldn't be driving.

[–] Semi_Hemi_Demigod@lemmy.world 6 points 3 weeks ago (1 children)

Humans are also bad drivers who get edge cases wrong all the time.

It would be so awesome if humans only got the edge cases wrong.

[–] xthexder@l.sw0.com 3 points 3 weeks ago

I've been able to get demos of autopilot in one of my friend's cars, and I'll always remember autopilot correctly stopping at a red light, followed by someone in the next lane over blowing right through it several seconds later at full speed.

Unfortunately, "better than the worst human driver" is a bar we passed a long time ago. From recent demos I'd say we're getting close to the "average driver," at least in clear visibility conditions, but I don't think even that's enough for actually driverless cars to be on the road.

There were over 9M car crashes and almost 40k deaths in the US in 2020, and it would be insane to just decide that's acceptable for self-driving cars as well. No company is going to want that blood on their hands.
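[Editor's note: to put those totals on the per-mile scale usually used for driver safety comparisons, here is the rough arithmetic. The ~2.9 trillion vehicle-miles-traveled figure is an outside assumption (FHWA's 2020 estimate), not from the comment.]

```python
# Rough arithmetic on the 2020 US figures above.
deaths_2020 = 39_000      # ≈ "almost 40k deaths"
crashes_2020 = 9_000_000  # ≈ "over 9M car crashes"
miles_2020 = 2.9e12       # assumed FHWA vehicle-miles-traveled estimate

deaths_per_100m_miles = deaths_2020 / (miles_2020 / 1e8)
crashes_per_100m_miles = crashes_2020 / (miles_2020 / 1e8)

print(round(deaths_per_100m_miles, 2))  # ≈ 1.34 fatalities per 100M miles
print(round(crashes_per_100m_miles))    # ≈ 310 crashes per 100M miles
```

Any self-driving fleet claiming to be "safer than humans" would need to beat those per-mile rates with published, independently verifiable data.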

[–] NeoNachtwaechter@lemmy.world 5 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

If a Tesla is better in some situations and worse in others, and so overall just as bad as a human, I can accept it.

This idea has a serious problem: THE BUG.

We hear this idea very often, but you are disregarding the problem with a programmed solution: it makes its mistakes all the time. Indefinitely.

Humans are also bad drivers who get edge cases wrong all the time.

So this is not exactly true.

Humans can learn, and humans can tell when they made an error, and try to do it differently next time. And all humans are different. They make different mistakes. This tiny fact is very important. It secures our survival.

The car does not know when it made a mistake, for example when it killed a deer, or a person, and cracked its windshield and bent lots of its metal. It does not learn from it.

It would do it again and again.

And all the others would do exactly the same, because they run the same software with the same bug.

Now imagine 250 million people having 250 million Teslas, and then comes the day when each one of them decides to kill a person...
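[Editor's note: the correlated-failure argument in this comment can be made concrete with a toy simulation. All numbers are invented; the point is the shape of the distribution, not the magnitudes.]

```python
# Toy model of the correlated-failure argument: humans make *different*
# mistakes, so a single scenario rarely defeats many drivers at once,
# while a fleet running identical software fails identically on whatever
# scenario its shared bug covers.
import random

random.seed(0)
FLEET = 1000           # number of drivers / cars
SCENARIOS = 1000       # distinct driving situations
P_HUMAN_ERROR = 0.001  # each human fails each scenario independently

# Humans: failures are spread thinly; even the worst single scenario
# only catches a handful of drivers.
worst_human_scenario = max(
    sum(random.random() < P_HUMAN_ERROR for _ in range(FLEET))
    for _ in range(SCENARIOS)
)

# Identical software: the failure rate is concentrated on the scenario
# the shared bug mishandles -- and there, every car fails at once.
BUGGY_SCENARIO = 42  # hypothetical scenario the bug covers
worst_fleet_scenario = max(
    FLEET if s == BUGGY_SCENARIO else 0 for s in range(SCENARIOS)
)

print(worst_human_scenario)  # typically single digits
print(worst_fleet_scenario)  # 1000: the whole fleet fails together
```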

[–] bluGill@fedia.io 2 points 3 weeks ago (1 children)

Tesla can detect a crash and send the last minute of data back so all cars learn from it. I don't know if they do, but they could.

[–] NeoNachtwaechter@lemmy.world 1 points 3 weeks ago

I don't know if they do but they can.

"Today on Oct 30 I ran into a deer but I was too dumb to see it, not even see any obstacle at all. I just did nothing. My driver had to do it all.

Grrrrrr.

Everybody please learn from that, wise up and get yourself some LIDAR!"

[–] scarabic@lemmy.world 1 points 3 weeks ago

Yeah there are edge cases in all directions.

When people want to say that something is very rare they should say "corner case," but that term doesn't seem to have made it out of QA lingo and into the popular lexicon.

[–] AA5B@lemmy.world 0 points 3 weeks ago* (last edited 3 weeks ago)

Given that they market it as “supervised”, the question only has to be “are humans safer when using this tool than when not using it?”

One of the cool things I've noticed since recent updates is the car giving a nudge to help me keep centered, even when I'm not using Autopilot.