this post was submitted on 14 Jun 2024
45 points (97.9% liked)

Futurology

1814 readers

founded 1 year ago
[–] JoMiran@lemmy.ml 8 points 5 months ago
[–] credo@lemmy.world 7 points 5 months ago (1 children)

The update corrects an error in the software that “assigned a low damage score” to the telephone pole

[…]

Waymo vehicle was driving to a passenger pickup location through an alley that was lined on both sides by wooden telephone poles. The poles were not up on a curb but level with the road and surrounded with longitudinal yellow striping to define the viable path for vehicles. As it was pulling over, the Waymo vehicle struck one of the poles at a speed of 8mph

It seems the vehicle treated the poles as road debris, etc. This is the plastic bag dilemma: do you treat something you don't recognize as a sacred object that must be avoided, or do you drive through it? This comes up a lot with machine-learning-based object identification. Every detection is assigned a confidence score for its identity, and nothing is ever 100% guaranteed; that's a statistical property. Also, every object class needs a closely related set of training images to model that object in that situation. In this case, a bunch of telephone poles with yellow striping around them seems to have confused the car.
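To make the "nothing is ever 100%" point concrete, here is a minimal softmax sketch. The labels and raw scores are entirely made up for illustration; no real perception stack is this simple, but the mechanism (scores become probabilities that always sum to 1 and never reach exactly 1) is the same:

```python
import math

def softmax(logits):
    """Convert raw classifier scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for one detected object.
labels = ["telephone pole", "road debris", "plastic bag"]
logits = [2.1, 1.9, 0.3]

for label, p in zip(labels, softmax(logits)):
    print(f"{label}: {p:.2f}")
```

With scores this close, the top two classes come out near each other: the model "thinks" pole, but not by much, which is exactly the kind of ambiguity the comment describes.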

[–] where_am_i@sh.itjust.works 1 points 5 months ago (1 children)

Yeah, or you use actual 3D scene reconstruction from lidar or the like and absolutely do know "there's a big massive thing in front of me, 101% a good action is not to drive into it".

Instead you softmax some random BS and fingers crossed.
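The geometric alternative this comment is gesturing at can be sketched in a few lines: skip classification entirely and just ask whether enough lidar returns sit in the corridor directly ahead. Every threshold here (corridor size, minimum point count, ground cutoff) is invented for illustration:

```python
import numpy as np

def solid_object_ahead(points_xyz: np.ndarray,
                       x_range=(0.5, 3.0), y_half_width=1.0,
                       min_points: int = 50) -> bool:
    """Toy geometric check: is there a dense cluster of lidar returns
    in the lane directly ahead, above ground level? No labels involved."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    in_corridor = ((x > x_range[0]) & (x < x_range[1])
                   & (np.abs(y) < y_half_width)
                   & (z > 0.1))  # crude ground-return filter
    return int(in_corridor.sum()) >= min_points

# A dense vertical cluster ~2 m ahead, shaped like a pole:
pole = np.column_stack([np.full(200, 2.0),
                        np.zeros(200),
                        np.linspace(0.2, 4.0, 200)])
print(solid_object_ahead(pole))  # -> True
```

A check like this doesn't care whether the object is a pole or debris, which is its strength here and, as the reply below notes, also its weakness: it can't tell a pole from a floating bag either.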

[–] credo@lemmy.world 3 points 5 months ago

This is the plastic bag reference. LIDAR cannot determine mass. A few years ago, a lot of cars were slamming on the brakes every time a plastic bag floated into their field of view. The algorithms were then tweaked to avoid causing a 20-car pileup because the car freaked out over 1 oz of air-filled plastic.

Humans make assessments like this on the fly, based on our knowledge of physics, an understanding of real-time conditions, and some level of estimation. We may even choose to ignore road markings and normal driving rules if we deem the risk too great versus the risk of causing a secondary incident (a pileup, the attention of police, etc.). Not that meat sacks are exactly perfect at these analyses either. This is the tweaking the ML engineers are trying to perfect, for all possible scenarios: a difficult undertaking for humans and machines alike.
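The tradeoff being tweaked can be caricatured as a threshold policy. This is a toy sketch, not anything resembling a real planner: the function name, inputs, and every number are hypothetical, but it shows how raising or lowering thresholds trades phantom braking against collisions:

```python
def should_brake(obstacle_prob: float, est_mass_kg: float,
                 prob_threshold: float = 0.6,
                 mass_threshold_kg: float = 1.0) -> bool:
    """Toy policy: brake only if we are fairly sure the object is real
    AND it is estimated to be heavy enough to matter. All thresholds
    are invented for illustration; real stacks weigh far more signals."""
    return obstacle_prob >= prob_threshold and est_mass_kg >= mass_threshold_kg

# A floating plastic bag: detected confidently but nearly massless.
print(should_brake(0.95, 0.03))   # -> False: don't slam the brakes
# A wooden pole: solid, and ideally detected with high confidence.
print(should_brake(0.90, 150.0))  # -> True
```

The failure mode in the article drops straight out of this caricature: if the pole is misclassified and "assigned a low damage score", the same tuning that stops phantom braking lets the car drive into something solid.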