# NTSB preliminary report into Uber SDC crash that killed Elaine Herzberg has been released



## jocker12 (May 11, 2017)

*The report is more interesting for what it doesn't say than what it does*
*The pedestrian had methamphetamine and marijuana in her system.*

The National Transportation Safety Board (NTSB) released its preliminary report into the fatal crash involving a self-driving Uber vehicle in Tempe, Arizona, last March. Among the findings, investigators say the vehicle decided it needed to brake 1.3 seconds before striking a pedestrian, but Uber had previously disabled the Volvo's automatic emergency braking system in order to prevent erratic driving.

The four-page report provides a detailed account of what happened that night on March 18th when an Uber test vehicle slammed into 49-year-old Elaine Herzberg, killing her. In some ways, the document is more notable for what it doesn't say than what it does. *The NTSB provides no analysis nor assigns any blame in Herzberg's death.* Much of the report has been previously reported, including the fact that Uber had disabled the Volvo XC90's factory settings for emergency braking and other driver assist features.

"All aspects of the self-driving system were operating normally at the time of the crash, and there were no faults or diagnostic messages," the NTSB says in its report.

The agency says it is continuing to work with Uber, Volvo, and the Arizona Department of Transportation as it prepares its final report, which is due in 2019. The Tempe Police Department concluded its own investigation this week, and has referred its findings to the Maricopa County attorney. A spokesperson for Uber declined to comment on the specifics of the report.

_"_Over the course of the last two months, we've worked closely with the NTSB," the Uber spokesperson said. "As their investigation continues, we've initiated our own safety review of our self-driving vehicles program. We've also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we'll make in the coming weeks."

So what went wrong? The NTSB timeline of the fatal crash is as follows:

- At 9:14PM, Uber safety driver Rafaela Vasquez departs the garage to drive an established test route.
- Immediately before the crash, she is on her second loop, traveling 43 mph north on Mill Avenue. At that moment, the car had been in autonomous mode for the preceding 19 minutes.
- At 9:58PM, Herzberg begins crossing Mill Avenue, heading east.
- The vehicle's radar and LIDAR sensors detect an object in the road about six seconds before impact.
- As their paths converge, the vehicle's self-driving software classifies Herzberg first as an unknown object, then as a vehicle, and finally as a bicycle, with varying expectations of her future travel path.
- At 1.3 seconds before impact, the vehicle's computer decides that an emergency braking maneuver is needed. But Uber had disabled the Volvo's factory AEB system "to reduce potential for erratic vehicle behavior," and its own system is not designed to alert the driver that braking is needed.
- Vasquez intervenes "less than a second" before impact by grabbing the steering wheel. The car strikes Herzberg at 39 mph. Vasquez hits the brake less than a second after impact.
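As a sanity check on these numbers: at 43 mph the car covers almost exactly 25 meters in the 1.3 seconds between the braking decision and impact, and a hard stop from that speed needs roughly the same distance. A minimal sketch; the 7 m/s² deceleration is an assumed figure for hard emergency braking, not something from the report:

```python
# Sanity check of the NTSB timeline numbers. The deceleration value is an
# illustrative assumption, roughly a hard emergency stop on dry pavement.
MPH_TO_MS = 0.44704

v = 43 * MPH_TO_MS          # speed when braking was deemed necessary, m/s
t = 1.3                     # seconds between the braking decision and impact
gap = v * t                 # distance covered in that window

a = 7.0                     # assumed emergency deceleration, m/s^2
stop_dist = v**2 / (2 * a)  # distance needed for a full stop from 43 mph

print(f"speed: {v:.1f} m/s")
print(f"distance covered in {t} s: {gap:.1f} m")
print(f"full-stop distance at {a} m/s^2: {stop_dist:.1f} m")
```

Under these assumptions a complete stop in the available distance was marginal, but braking over those 25 meters would still have shed most of the speed before impact.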









While we still don't know exactly why Uber's vehicle failed to brake, the NTSB cites this as the ride-hailing company's prevailing explanation for what went wrong:

According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review.

Vasquez was seen in a video released by the Tempe Police Department glancing down in the seconds before impact. Vasquez told investigators she had been "monitoring the self-driving system interface," which is displayed on an iPad mounted on the vehicle's center console. She said both her personal and business phones were in the car, but that neither was in use until after the crash.

The report frames Herzberg's actions in the moments before the crash in a fairly negative light. Investigators note she was crossing the street outside the crosswalk, wearing dark clothing, and, *according to a post-crash toxicology report, had methamphetamine and marijuana in her system*. The NTSB also notes that the median on Mill Avenue where Herzberg was crossing the street was not illuminated by lighting and featured signage warning pedestrians not to cross there.

But others noted that the street design where Herzberg was struck sends pedestrians a mixed message. It features an inviting, X-shaped brick-paved walking path across the median, despite being in the middle of a busy road over 360 feet from the nearest crosswalk. A homeless encampment is located a few yards beyond the median, and locals have noted that pedestrians frequently cross the street at that spot.

Immediately after the crash in March, Uber suspended its autonomous vehicle testing across North America. Subsequently, Arizona governor Doug Ducey, who had been extremely friendly toward Uber, revoked the ride-hailing company's permission to test vehicles in the state. And yesterday, Uber said it was shuttering its test program in Arizona, laying off almost 300 safety drivers. Its self-driving program would be scaled back and "limited" in the months to come, the company said.

Uber intends to resume its self-driving test later this year, and in some ways, today's report was the first hurdle the company needed to overcome before getting its cars back on the road. In a recent interview with _The Verge_, Uber CEO Dara Khosrowshahi said he had ordered an internal review of the company's self-driving program and would wait for those results before giving the green light to restart.

"The focus is like, just do the right thing so that I can be satisfied, the teams at [Uber's Advanced Technologies Group] can be satisfied that hopefully nothing like this ever happens again," he said. "You can't guarantee anything in life."

https://www.theverge.com/2018/5/24/17388696/uber-self-driving-crash-ntsb-report


----------



## iheartuber (Oct 31, 2015)

jocker12 said:


> *The report is more interesting for what it doesn't say than what it does*
> *The pedestrian had methamphetamine and marijuana in her system.*
> 
> The National Transportation Safety Board (NTSB) released its preliminary report into the fatal crash involving a self-driving Uber vehicle in Tempe, Arizona, last March. Among the findings, investigators say the vehicle decided it needed to brake 1.3 seconds before striking a pedestrian, but Uber had previously disabled the Volvo's automatic emergency braking system in order to prevent erratic driving.
> ...


Damn son... weed AND meth??!


----------



## jocker12 (May 11, 2017)

iheartuber said:


> Damn son... weed AND meth??!


The robot drives by itself, right? Why not let your guard down and enjoy the ride?


----------



## iheartuber (Oct 31, 2015)

jocker12 said:


> The robot drives by itself, right? Why not let your guard down and enjoy the ride?


The robot does drive itself. Except uber's Robots.

All other robots are ok. It's not the robots it's Uber.

Riiiiiiiiiight


----------



## jocker12 (May 11, 2017)

iheartuber said:


> The robot does drive itself. Except uber's Robots.
> 
> All other robots are ok. It's not the robots it's Uber.
> 
> Riiiiiiiiiight


And they are here, people love them, they save lives.... the present is the future. A few seconds ago, a flock of flying pigs chirping like cockatoos flew over the house.... There are no birds anymore now, because the pigs are beautifully doing the flying!


----------



## JimKE (Oct 28, 2016)

jocker12 said:


> *The report is more interesting for what it doesn't say than what it does*
> *The pedestrian had methamphetamine and marijuana in her system.*
> 
> The National Transportation Safety Board (NTSB) released its preliminary report into the fatal crash involving a self-driving Uber vehicle in Tempe, Arizona, last March. Among the findings, investigators say the vehicle decided it needed to brake 1.3 seconds before striking a pedestrian, but Uber had previously disabled the Volvo's automatic emergency braking system in order to prevent erratic driving.
> ...


Just for a little clarification, this is NOT the NTSB report. It's a story from "The Verge," which is _obviously slanted in Uber's favor_ -- and in which you've boldfaced information about the toxicology report on the woman killed in the crash.

The thread I started links a story by Reuters which is more even-handed, and probably _slanted against Uber_. Here's a link: https://www.reuters.com/article/us-...ize-pedestrian-brake-u-s-agency-idUSKCN1IP26K

The truth is probably somewhere in between the two stories.



iheartuber said:


> Damn son... weed AND meth??!


She was multi-tasking. Not good, but not deserving of capital punishment.

*****
The other thing that neither article mentions is that the Uber vehicle was speeding. If my memory is correct, I think the speed limit there is 35 MPH -- so the Uber was 8 MPH over (in AUTONOMOUS mode!). When split seconds count, so does speed.


----------



## jocker12 (May 11, 2017)

JimKE said:


> Just for a little clarification, this is NOT the NTSB report. It's a story from "The Verge," which is _obviously slanted in Uber's favor_ -- and to which you've boldfaced information about the toxicology report on the woman killed in the crash.
> 
> The thread I started links a story by Reuters which is more even-handed, and probably_ slanted against Uber_. Here's a link: https://www.reuters.com/article/us-...ize-pedestrian-brake-u-s-agency-idUSKCN1IP26K
> 
> ...


Here is the PRELIMINARY report from NTSB website (PDF document) - https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf

TheVerge has it under their hyperlink on the first line of the article (click on the blue words "preliminary report").

"The posted speed limit was *45 mp*h" and "According to data obtained from the self -driving system , the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at *43 mph*." and "The vehicle speed at impact was *39 mph*"

The robot did what it was designed to do, and in similar conditions it will do the same. The software routine for identifying false positives, recognizing shapes and edges without density and ignoring them, is a problem that CANNOT be fixed by any developer. It is the only way to boost vehicle performance and avoid stopping for smoke, steam, fog or hay flying in the car's path. Every single company involved uses the same software procedure to filter false positives, but nobody has shown any initiative to change the software or alter the sequence, so more fatalities are to be expected.


----------



## iheartuber (Oct 31, 2015)

jocker12 said:


> Here is the PRELIMINARY report from NTSB website (PDF document) - https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf
> 
> TheVerge has it under their hyperlink on the first line of the article (click on the blue words "preliminary report").
> 
> ...


If this is a problem that cannot be fixed by ANY developer, then, tomatopaste, your beloved system is fundamentally flawed to such a degree that you will never see it unfold as you dream it.

Until now I just kinda guessed things like this would be a challenge for Waymo, but now here's the proof.


----------



## jocker12 (May 11, 2017)

iheartuber said:


> If this is a problem that cannot be fixed by ANY developer, then, tomatopaste, your beloved system is fundamentally flawed to such a degree that you will never see it unfold as you dream it.
> 
> Until now I just kinda guessed things like this would be a challenge for Waymo, but now here's the proof.


Their goal is to reduce false-positive detections to zero, meaning the system would correctly identify false obstacles 100 percent of the time, which is a rational goal to aim for.
Here is a slide from a presentation by Amnon Shashua (Mobileye CTO) at CVPR 2016.

This slide shows his projection of the "natural growth of ADAS (Advanced Driver Assistance Systems)" towards *zero false positives*.

The only important detail they need to discover is what system sensitivity value will give them zero false positives and zero crashes at the same time (no false positives = ignore nonobjects, no crashes = detect real objects, both with 100% accuracy).

The way they detect false positives is by having a dedicated neural network (which is part of the software) analyze consecutive video frames from a front camera, looking for shapes with surfaces and edges.

An existing shape will be a "_*positive*_".

A shape with rapidly fluctuating geometry (identified by analyzing pixels across the frames), or with rapid and erratic movement (flying hay, paper or plastic bags on the road), indicates a shape with no significant mass or density, resulting in a "_*false*_". By combining the two results, the software avoids braking and maintains speed on the designated path.
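The frame-analysis heuristic described above can be illustrated with a toy sketch. Everything here (the function name, the jitter metric, the 0.3 threshold) is a hypothetical illustration of the idea, not anything from Uber's or any vendor's actual stack:

```python
# Toy sketch of the false-positive filter described above: a detection whose
# apparent geometry fluctuates wildly between consecutive frames (flying hay,
# a drifting plastic bag) is treated as massless and ignored. All names and
# the 0.3 threshold are illustrative assumptions.
from statistics import pstdev

def is_false_positive(widths, threshold=0.3):
    """Flag a tracked shape whose width jitters too much across frames."""
    mean = sum(widths) / len(widths)
    return pstdev(widths) / mean > threshold  # high relative jitter -> "false"

pedestrian = [0.62, 0.60, 0.63, 0.61, 0.62]   # stable silhouette, meters
plastic_bag = [0.40, 0.90, 0.15, 0.70, 0.25]  # erratic, no real density

print(is_false_positive(pedestrian))   # stable shape -> kept as an obstacle
print(is_false_positive(plastic_bag))  # erratic shape -> filtered out
```

The tradeoff the posts are arguing about lives in that threshold: lower it and the car brakes for steam and bags; raise it and a real obstacle whose classification keeps flickering risks being filtered out.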

Elaine Herzberg's death, imo, was caused by a combination of poor lighting (as this preliminary report states) and her running to cross the street in front of the car (*she definitely knew the vehicle was coming*, because in the dark, no matter which direction you look at human eye level, you will notice the lights and the shadows created around you). The report mentions how the software first identified her as an UNKNOWN OBJECT, then as a car, and then as a bicycle. From Figure 2 we see the car detected Elaine from 25 meters away, and at 43 mph the vehicle was covering 19.22 meters per second, so it had less than 1.5 seconds to react, but it never attempted an evasive maneuver to avoid the collision.

The developers' dilemma regarding "false positives" is that they cannot have competitive self-driving car software without the filter, but they cannot have safe software with it either (analyzing video images the way it does today, it will generate the same error with the same consequences over and over again). So they are stuck with a single methodology of computer-vision obstacle detection and avoidance.

Like I've said, they rushed the robots onto the road with underdeveloped software on board, only because the companies are running out of patience and cannot invest any more billions into this fantasy.


----------



## tomatopaste (Apr 11, 2017)

iheartuber said:


> If this is a problem that cannot be fixed by ANY developer, then tomatopaste , your beloved system is fundamentally flawed to such a degree that you will never see it unfold as you dream it.
> 
> Until now I just kinda guessed things like this would be a challenge for Waymo, but now here's the proof.


Nope. The weak link in all the so-called "driverless" deaths has been the human. Anytime someone dies in a Tesla with Autopilot engaged, Tesla blames the human. The only "self-driving" death other than in a Tesla was the Uber crash. What do both of these companies have in common? Both rely on investor cash to keep the lights on and both are cutting corners. Uber reduced the number of lidar sensors from 7 to only 1 and the number of backup drivers from 2 to 1. They also had no way to warn the human backup driver when the system sees a potential false positive. This is unconscionable.

Waymo drives almost a million miles a month on everyday roads with their self-driving cars and not so much as a fender bender.


----------



## iheartuber (Oct 31, 2015)

tomatopaste said:


> Nope. The weak link in all the so-called "driverless" deaths has been the human. Anytime someone dies in a Tesla with Autopilot engaged, Tesla blames the human. The only "self-driving" death other than in a Tesla was the Uber crash. What do both of these companies have in common? Both rely on investor cash to keep the lights on and both are cutting corners. Uber reduced the number of lidar sensors from 7 to only 1 and the number of backup drivers from 2 to 1. They also had no way to warn the human backup driver when the system sees a potential false positive. This is unconscionable.
> 
> Waymo drives almost a million miles a month on everyday roads with their self-driving cars and not so much as a fender bender.


Never mind the human who was supposed to be watching, the problem was with the tech because it didn't "recognize a person in the road". That's a problem for ALL robot cars. You're saying more Lidar sensors will fix that? Somehow that doesn't pass my smell test.

Also it takes less than 300 uber drivers to log 1 million miles a month- how many robo cars does it take?


----------



## tomatopaste (Apr 11, 2017)

iheartuber said:


> Never mind the human who was supposed to be watching, the problem was with the tech because it didn't "recognize a person in the road". That's a problem for ALL robot cars. You're saying more Lidar sensors will fix that? Somehow that doesn't pass my smell test.
> 
> Also it takes less than 300 uber drivers to log 1 million miles a month- how many robo cars does it take?


No, it's obviously not. Waymo has more than 6 million self-driving miles, no accidents. No deaths. If a human driver is a bad enough driver he loses his license. Should all human drivers lose their license because of that one bad driver?



iheartuber said:


> Also it takes less than 300 uber drivers to log 1 million miles a month- how many robo cars does it take?


75
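For scale, the claim of 75 cars logging a million miles a month works out as follows (simple arithmetic only, assuming a 30-day month):

```python
# What the "75 cars" figure implies: miles per car per month and per day
# if 75 cars log 1 million miles a month. Purely arithmetic, no external data.
cars = 75
miles_per_month = 1_000_000

per_car_month = miles_per_month / cars
per_car_day = per_car_month / 30  # assumed 30-day month

print(f"{per_car_month:,.0f} miles/car/month, about {per_car_day:,.0f} miles/car/day")
```

Over 400 miles per car per day implies near-continuous operation, which is the crux of whether the figure is plausible.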


----------



## iheartuber (Oct 31, 2015)

tomatopaste said:


> No, it's obviously not. Waymo has more than 6 million self-driving miles, no accidents. No deaths. If a human driver is a bad enough driver he loses his license. Should all human drivers lose their license because of that one bad driver?
> 
> 75


You may be able to sell your line to the car geeks of "It was uber's Robo cars that killed that lady, not ALL robo cars... the Waymo cars are much safer!"

But it's a tough sell to the general public. They just see robo cars = death.

So.. good luck with that


----------



## tomatopaste (Apr 11, 2017)

iheartuber said:


> You may be able to sell your line to the car geeks of "It was uber's Robo cars that killed that lady, not ALL robo cars... the Waymo cars are much safer!"
> 
> But it's a tough sell to the general public. They just see robo cars = death.
> 
> So.. good luck with that


Are you saying people often have irrational fears based on inadequate information?


----------



## iheartuber (Oct 31, 2015)

tomatopaste said:


> Are you saying people often have irrational fears based on inadequate information?


I'm saying Waymo has an uphill battle ahead of them and if their public education strategy is akin to your attitude of "Relax, it's fine!" then they will be in serious trouble.


----------



## jocker12 (May 11, 2017)

iheartuber said:


> You may be able to sell your line to the car geeks of "It was uber's Robo cars that killed that lady, not ALL robo cars... the Waymo cars are much safer!"
> 
> But it's a tough sell to the general public. They just see robo cars = death.
> 
> So.. good luck with that


Apparently the tech didn't recognize the person/obstacle, but was not meant to do so. It was designed to evaluate whatever the sensors detected and decide if it needed to change what it was doing or not.

People are at fault, but not the ones inside the cars or on the roads. It is the DEVELOPERS (responsible for the robots' software) and the BUSINESS MANAGERS (responsible for the decision to push the robots onto public roads).

Politicians and regulators are misinformed by the corporations and do not have the time to learn in depth how these things operate and what they can or can't do. On top of that, in order to "disrupt", corporations only show "the benefits" and NOT the dangers.

Let me explain.

Today, _computer vision is a highly advanced interdisciplinary field_. But the term "advanced" is *NOT used in relation to the "absolute" or a "0% error rate"*. The term "advanced" is used in relation to *what we had a few years back*. So the general public, the media and the regulators/politicians are tricked into thinking an "advanced" field is something they see in Star Trek movies, and it IS NOT. It is misleading terminology used by the scientific community (not on purpose) to describe their work. When used by the corporations, this language artificially inflates expectations, because it is unrealistic about the real world we live in.

This is a basic activation map (image classification) from a convolutional neural network used by computer vision software to classify images and identify objects.

On the far right, we can see what the computer identifies as being represented in the picture. While "car" is the most common classification, there are still a few errors where the object is mistakenly classified as a truck, an airplane, a ship or a horse. That shows you how the software has an error rate no matter how clear or close the object is.

Well, computer vision scientists compare their software's error rate with the human error rate (human accuracy on image classification is known to be around 95%), but that is not the point when we discuss autonomous cars. In a driving environment, a robot needs a zero error rate in order to make a difference and be TOLERATED by the general public and the regulators. And, because of the complexities of image detection, that is IMPOSSIBLE.

"Saving lives" slogan needs to be backed up by zero errors, because any error (while robots are controlled by the SAME or SIMILAR software) could *exponentially multiply to outrageously many errors* in a matter of seconds in similar conditions (like bad weather or extreme low or high temperatures let's say).

On top of this, one of the main questions is "how do you know the software correctly identified an obstacle" when there is no human to confirm it in real time. If the software makes a slight mistake and reads "a plane" (non-articulated movement) instead of "a bird" (articulated movement), how can we be sure it will correctly decide how to react to prevent a possible impact? The outcome of hitting a bird is obviously very different from hitting a plane.

Here is an example of questionable accuracy for basic computer vision software.
In conclusion, I think corporate business people (like Elon Musk or John Krafcik, who are not scientists by any means) are primarily at fault for deploying self-driving car software in vehicles on public roads, endangering the lives of passengers and pedestrians alike.

It is about money and greed, not about progress.


----------



## tomatopaste (Apr 11, 2017)

jocker12 said:


> Let me explain.


Can you give us the Reader's Digest version?



iheartuber said:


> I'm saying Waymo has an uphill battle ahead of them and if their public education strategy is akin to your attitude of "Relax, it's fine!" then they will be in serious trouble.


And I'm saying they don't. All these polls showing 77 percent, 79 percent, 5 million percent, won't ride in a self-driving car are meaningless. Waymo will have more demand than they have seats when they launch. People will then realize, it works. How long will that take? Five days.


----------



## iheartuber (Oct 31, 2015)

tomatopaste said:


> Can you give us the Reader's Digest version?


The Reader's Digest version is:

Anyone who says robot cars are safer than human driven cars or who says they're going to be here quickly because "progress is happening" is really just a greedy SOB who is thinking he is going to make some money from this and really doesn't give a rip about society.



tomatopaste said:


> Waymo will have more demand than they have seats when they launch.


"Waymo will gave more demand than seats"???

Ok Baghdad Bob

tomatopaste said:


> People will then realize, it works. How long will that take? Five days.


Now I can't wait til Waymo launches because now all I have to do is wait 5 days after the launch and say "So Tomato, how come the ridership is still low?"

Then I'll ask the same question after a month, two months, six months, etc. it will be EPIC!


----------



## jocker12 (May 11, 2017)

iheartuber said:


> Now I can't wait til Waymo launches because now all I have to do is wait 5 days after the launch


This is entirely correct!................. Unfortunately.

That's why they chose safe places and did the heavy 3D mapping for them.

Edit.
This is what I am talking about - "The word that jumps out is 'erratic.' That shouldn't be a surprise for anyone who's been involved in designing autonomous vehicles. *The safety-first approach* that AV developers should rightly be adopting when dealing with hazards *will naturally turn up a lot of false positives."* - the developers aim towards zero false positives

and

"The Center for Auto Safety and Consumer Watchdog this week called on the U.S. Federal Trade Commission to investigate what they called "dangerously misleading and deceptive advertising and marketing practices and representations" related to Tesla Inc.'s Autopilot feature."

and

"From its public statements, you'd certainly think that its vehicles can more or less drive themselves, with humans required only as a safety back-up while the system is in trial mode. Take this video from Uber's Advanced Technologies Group, posted in April 2017, showing the company's self-driving cars cruising smoothly around Pittsburgh. Or this one from October 2017: "We have hundreds of self-driving vehicles out in the world," the narrator says. The car's autonomous systems "make sure the vehicle's aware of everything around it, like the stop sign up ahead, that woman crossing the street, and the cyclist coming up behind them." - the language is strikingly similar to what Waymo (and all the other culprits) says to the media and the general public. AND THAT IS CRAZY DANGEROUS.

from https://www.bloomberg.com/view/articles/2018-05-25/uber-s-ghost-in-the-self-driving-machine-exposed


----------



## tomatopaste (Apr 11, 2017)

iheartuber said:


> Anyone who says robot cars are safer than human driven cars or who says they're going to be here quickly because "progress is happening" is really just a greedy SOB who is thinking he is going to make some money from this and really doesn't give a rip about society.


Anyone wanting to stop progress because it would force them to get a real job is the true SOB. 40k deaths a year? pffffft.


----------



## jocker12 (May 11, 2017)

Fatality rate going down:

1.17 fatalities per 100 million miles driven, at a rate of 15,000 miles per year, is *1 fatality every 5,698 years.*

Enforce seat belt use and IMMEDIATELY cut that in half. How easy is that? - 1 fatality every 11,396 years!
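The arithmetic above checks out, taking the 1.17 rate and the assumed 15,000 miles per year at face value:

```python
# Checking the fatality-rate arithmetic above.
rate = 1.17                 # fatalities per 100 million vehicle miles traveled
miles_per_year = 15_000     # assumed annual mileage for one driver

miles_per_fatality = 100_000_000 / rate
years_per_fatality = miles_per_fatality / miles_per_year
print(f"{years_per_fatality:,.0f} years of driving per expected fatality")
```

Halving the fatality rate doubles the interval, to roughly 11,400 years of driving per expected fatality.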


----------



## iheartuber (Oct 31, 2015)

tomatopaste said:


> Anyone wanting to stop progress because it would force them to get a real job is the true SOB. 40k deaths a year? pffffft.


I'm not trying to stop progress. I'm just saying progress is going to take the appropriate amount of time. Not "within weeks" like you claim. Don't believe me? Just wait "a few weeks" and see for yourself.


----------



## tomatopaste (Apr 11, 2017)

jocker12 said:


> Fatality rate going down
> 
> 
> 
> ...


The number of vehicle miles traveled on U.S. roads in 2016 increased by 2.2 percent, and resulted in a fatality rate of 1.18 deaths per 100 million VMT - a 2.6-percent increase from the previous year.
https://www.nhtsa.gov/press-releases/usdot-releases-2016-fatal-traffic-crash-data


----------



## jocker12 (May 11, 2017)

The red dots are the 2017 data. 2017 now has a lower rate than 2016.

Enforce seat belt use, immediately cut fatalities in half and KILL the robots. Easy fix.


----------



## tomatopaste (Apr 11, 2017)

jocker12 said:


> Red dots are by 2017 data. Now 2017 has a lower rate than 2016.
> 
> Enforce seat belt use, immediately cut fatalities in half and KILL the robots. Easy fix.


People are more likely to survive accidents today than in the past due to passive restraint systems like airbags, seatbelts, crumple zones, roll cages, etc. But this does not tell the entire story. Just because you're able to survive the crash does not mean all is well.

Every year the lives of more than 1.25 million people are cut short as a result of a road traffic crash. Between 20 and 50 million more people suffer non-fatal injuries, with many incurring a disability as a result of their injury.
Road traffic injuries cause considerable economic losses to individuals, their families, and to nations as a whole. These losses arise from the cost of treatment as well as lost productivity for those killed or disabled by their injuries, and for family members who need to take time off work or school to care for the injured. Road traffic crashes cost most countries 3% of their gross domestic product. This is staggering.
http://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries


----------



## jocker12 (May 11, 2017)

It is a good thing we are insisting on this, because the more people understand the reality, the more they will understand the 40,000-deaths fallacy.

The best way to save lives right away is to enforce seatbelt use. Mandating car manufacturers to connect seatbelt buckles to the engine ignition would cut fatalities in half. It is easy and much, much cheaper than the self-driving car hallucination.

Easy, effective and tolerable by the general public.


----------



## iheartuber (Oct 31, 2015)

jocker12 said:


> It is a good thing we are insisting on this, because the more people understand the reality, the more they will understand the 40,000-deaths fallacy.
> 
> The best way to save lives right away is to enforce seatbelt use. Mandating car manufacturers to connect seatbelt buckles to the engine ignition would cut fatalities in half. It is easy and much, much cheaper than the self-driving car hallucination.
> 
> Easy, effective and tolerable by the general public.


The Tomato and his cabal don't give a rip about safety.

They just want to see their fantasy come true so they can line their pockets.


----------



## jocker12 (May 11, 2017)

iheartuber said:


> The Tomato and his cabal don't give a rip about safety.
> 
> They just want to see their fantasy come true so they can line their pockets.


Of course, it is great for people to understand we are talking about realistic ways to save lives, while SDC enthusiasts, the moment they run out of delusions, deflect the topic towards affected profits and money.


----------



## iheartuber (Oct 31, 2015)

jocker12 said:


> Of course, it is great for people to understand we are talking about realistic ways to save lives, while SDC enthusiasts, the moment they run out of delusions, deflect the topic towards affected profits and money.


RamzFanz is an SDC enthusiast

The Tomato is an SDC employee


----------



## heynow321 (Sep 3, 2015)

If it ultimately categorized her as a bicycle, why the **** did it not stop? Is it acceptable to run over bicycles now?


----------



## tomatopaste (Apr 11, 2017)

jocker12 said:


> Mandating car manufacturers to connect seatbelt buckles to the engine ignition will cut fatalities in half.


And how exactly does that work when 90% are already using seatbelts?

https://en.wikipedia.org/wiki/Seat_belt_use_rates_in_the_United_States
And why isn't iheartuber all up in your chili about people's rights and dictatorships and such?
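The question can be put in rough numbers. Using approximate public figures as assumptions (about 90% belt use, roughly 48% of killed occupants unbelted, and belts cutting fatality risk by roughly 45%), universal belt use would cut occupant deaths by closer to a fifth than a half:

```python
# Rough check of how much universal belt use could cut occupant deaths.
# All three figures below are approximate assumptions, not exact NHTSA data:
# ~90% of occupants already belt up, ~48% of killed occupants were unbelted,
# and belts reduce fatality risk by roughly 45%.
unbelted_share_of_deaths = 0.48
belt_effectiveness = 0.45

# If every unbelted victim had been belted, ~45% of those deaths go away.
reduction = unbelted_share_of_deaths * belt_effectiveness
print(f"estimated occupant-death reduction: {reduction:.0%}")
```

And that estimate only covers vehicle occupants; it does nothing for pedestrians like Herzberg.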


----------



## iheartuber (Oct 31, 2015)

tomatopaste said:


> And how exactly does that work when 90% are already using seatbelts?
> 
> https://en.wikipedia.org/wiki/Seat_belt_use_rates_in_the_United_States
> And why isn't iheartuber all up in your chili about people's rights and dictatorships and such?
> ...


The Tomato Cabal is the only group of dictators here on UP


----------



## jocker12 (May 11, 2017)

heynow321 said:


> If it ultimately categorized her as a bicycle, why the **** did it not stop? Is it acceptable to run over bicycles now?


Imo it's the computer vision image classification. In my comment #16 above you can see how neural networks always have an error rate, no matter how clear, big or close the object in the image is.

The key element in this situation was how quickly the software changed the object classification in a matter of seconds (from unknown to car to bicycle), while the car maintained its course towards that object.

"In Elaine Herzberg's death, imo it was a combination of poor lighting (as this preliminary report states) and her running to cross the street in front of the car (*she definitely knew the vehicle was coming*, because in the dark, no matter which direction you look at human eye level, you will notice the lights and the shadows created around you). The report mentions how the software first identified her as an UNKNOWN OBJECT, then as a car, and then as a bicycle. From Figure 2 we see the car detected Elaine from 25 meters away, and at 43 mph the vehicle was covering 19.22 meters per second, so it had less than 1.5 seconds to react but never decided to try an evasive maneuver to avoid the collision."


----------



## tomatopaste (Apr 11, 2017)

heynow321 said:


> If it ultimately categorized her as a bicycle, why the **** did it not stop? Is it acceptable to run over bicycles now?


Because Uber disconnected the emergency braking capability on the Volvo to give it a smoother ride for the pax they were already charging. They left the emergency braking to the human backup driver even though the system had no way of alerting the human driver of the possible false positive.


----------



## iheartuber (Oct 31, 2015)

tomatopaste said:


> Because Uber disconnected the emergency braking capability on the Volvo to give it a smoother ride


So you're saying that robo cars with the proper amount of safety equipment give pax a less than smooth ride?


----------



## heynow321 (Sep 3, 2015)

jocker12 said:


> Imo it's the computer vision image classification. In my comment #16 above you can see how neural networks always have an error rate, no matter how clear, big or close the object in the image is.
> 
> The key element in this situation was how quickly the software changed the object classification in a matter of seconds (from unknown to car to bicycle), while the car maintained its course towards that object.
> 
> "In Elaine Herzberg's death, imo it was a combination of poor lighting (as this preliminary report states) and her running to cross the street in front of the car (*she definitely knew the vehicle was coming*, because in the dark, no matter which direction you look at human eye level, you will notice the lights and the shadows created around you). The report mentions how the software first identified her as an UNKNOWN OBJECT, then as a car, and then as a bicycle. From Figure 2 we see the car detected Elaine from 25 meters away, and at 43 mph the vehicle was covering 19.22 meters per second, so it had less than 1.5 seconds to react but never decided to try an evasive maneuver to avoid the collision."


That's a pretty god damn huge failing if it put the object in three different categories without slowing itself whatsoever. What a joke


----------



## jocker12 (May 11, 2017)

iheartuber said:


> So you're saying that robo cars with the proper amount of safety equipment gives pax a less than smooth ride?


About the braking systems.

I know some of the journalists made a mistake reporting what the troll says, that the ONLY braking left available was the braking done by the human monitor, but that is FALSE.

The report says -" The vehicle was *factory equipped* *with several advanced driver assistance functions by Volvo Cars*, the original manufacturer. The systems included a collision avoidance function with automatic emergency braking, known as City Safety, as well as functions for detecting driver alertness and road sign information. *All these Volvo functions are disabled when the test vehicle is operated in computer control* but are operational when the vehicle is operated in manual control."

Also "According to Uber, *emergency braking maneuvers are not enabled* while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator." I agree that is a little ambiguous if you are not familiar with what a self driving car system does in relation to the safety systems factory installed on the vehicles. The misleading term in this sentence is "emergency" because it makes you think emergency braking and braking are 2 different things, but they are not.

For their final report, it would be a great idea if the NTSB clarified this ambiguity, which is creating confusion about what caused this tragedy.

*The self-driving software* (when the car is under computer control) *is designed to slow down or brake for obstacles* or turns *by applying the brakes.*

*Edit* -
If "According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at *43 mph.*" and "The vehicle speed at impact was *39 mph*." while "The operator began braking less than a second *after the impact*." then the car was already slowing down but didn't have enough time and distance to complete the braking sequence.

However, I am amazed the NTSB people didn't find it necessary to explain how the car decelerated from 43 mph to an impact speed of 39 mph.


----------



## iheartuber (Oct 31, 2015)

jocker12 said:


> About the braking systems.
> 
> I know some of the journalists made a mistake reporting what the troll says, that the ONLY braking left available was the braking done by the human monitor, but that is FALSE.
> 
> ...


It does not appear to me that robots have the ability to slam on the brakes in emergency situations


----------



## tomatopaste (Apr 11, 2017)

jocker12 said:


> I know some of the journalists made a mistake reporting what the troll says, that the ONLY braking left available was the braking done by the human monitor, but that is FALSE.


This is what the troll said:
"Because Uber disconnected the emergency braking capability on the Volvo to give it a smoother ride for the pax they were already charging. They left the emergency braking to the human backup driver even though the system had no way of alerting the human driver of the possible false positive."

This is what the report said:
"1.3 seconds before the impact, the self-driving computer realized that it needed to make an emergency-braking maneuver to avoid a collision. But it did not. Why? Uber's software prevented its system from hitting the brakes if that action was expected to cause a deceleration of faster than 6.5 meters per second. That is to say, in an emergency, the computer could not brake."

Correct, Uber disconnected Volvo's emergency braking system and its own software didn't allow it to brake _if that action was expected to cause a deceleration of faster than 6.5 meters per second squared._
Why did Uber do this? Because they wanted to provide as smooth a ride as possible to paying passengers that Uber was already charging. They left it up to the human backup driver to handle emergency situations. This is what the troll said.
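
As a rough sanity check of what a 6.5 m/s² deceleration cap means physically, here is a back-of-envelope sketch. The cap value is the one quoted above; this is constant-deceleration kinematics only, not Uber's actual control logic:

```python
# Stopping distance/time under a 6.5 m/s^2 deceleration cap, at the two
# speeds mentioned in the report. Simple constant-deceleration physics,
# not a model of Uber's software.
MPH_TO_MS = 0.44704
CAP = 6.5                           # claimed max deceleration, m/s^2

for mph in (43, 39):
    v = mph * MPH_TO_MS             # speed in m/s
    stop_dist = v ** 2 / (2 * CAP)  # distance to a full stop
    stop_time = v / CAP             # time to a full stop
    print(mph, round(stop_dist, 1), round(stop_time, 2))

# 43 mph -> ~28.4 m and ~2.96 s to stop even at the capped rate,
# far more than the ~1.3 s left when the system called for braking.
```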


----------



## jocker12 (May 11, 2017)

iheartuber said:


> It does not appear to me that robots have the ability to slam on the brakes in emergency situations


That is correct. Because of their vision and object detection capabilities (some lidar-determined, some camera-determined), they can see further down the road (or they should have in this case) and decelerate slowly by braking. This article mentions one of the former test drivers/monitors saying, "_Sometimes the car would brake because of steam coming up from a pothole in the ground_," but the speed at which this happened is not mentioned.

Emergency braking, in a realistic situation, is when the driver notices the car is not braking as it is supposed/programmed to be doing, and needs to take control.


----------



## tomatopaste (Apr 11, 2017)

iheartuber said:


> The Tomato and his cabal don't give a rip about safety.
> 
> They just want to see their fantasy come true so they can line their pockets.


I can explain quite simply why I want self-driving cars:


They don't cause accidents.
Getting to where you want to go will be less expensive, less hassle.
You're not tied to one type of vehicle. You order the right vehicle for that trip. Sedan, SUV, truck, van.
You don't have to worry about parking and can use your garage for other things.
Down the road traffic is all but eliminated.
Time wasted behind the wheel is now productive time.
Almost everything you buy is delivered for free or almost free.

What possible reason do you have to be against self-driving cars? Other than that you might have to get a real job?


----------



## iheartuber (Oct 31, 2015)

tomatopaste said:


> I can explain quite simply why I want self-driving cars:
> 
> 
> They don't cause accidents.
> ...


I've had many jobs in my life. I'm sure I'll have many more before I pass from this earth.

Take the job part out of the equation.

I'm not against robot cars, never said I was. You must be putting words in my mouth.

What I did say was

1) using robot cars as a fleet of taxis in an attempt to eclipse Uber's current taxi biz presents a set of challenges that put it decades away

2) the full scope of your fantasy, in which robo cars fully take over, means that car ownership becomes obsolete, and I just don't want to live in a world like that (I doubt anyone else would either)

3) I just don't like your arrogant vibe.

So I guess you could say it's not self-driving cars I'm against, it's you I'm against as a spokesman for self-driving cars.


----------



## tomatopaste (Apr 11, 2017)

iheartuber said:


> 3) I just don't like your arrogant vibe.


I have other vibes, ya know


----------



## iheartuber (Oct 31, 2015)

tomatopaste said:


> I have other vibes, ya know


I got nothing personal against you

I get it.
You're a company man

We may never know what you really think

You preach those talking points til your dying breath

That's admirable

But just remember: I can say "I can fly" all day long and that's still not gonna allow me to fly. Proceed with caution in your life, my friend.


----------



## Taxi2Uber (Jul 21, 2017)

jocker12 said:


> However, I am amazed the NTSB people didn't find it necessary to explain how the car decelerated from 43 mph to an impact speed of 39 mph.


My thought is the computer control disengaged to manual control once the operator handled the steering wheel less than a second before impact, causing the car to coast, reducing its speed slightly at impact.


jocker12 said:


> *The safety monitor had methamphetamine and marijuana in her system.*


This is incorrect. You should edit this. 
The report states "Toxicology test results for the pedestrian were positive for methamphetamine and marijuana."

------------------------------------------------------
So "the self-driving system determined that an emergency braking maneuver was needed" but is designed to not hard brake? And "The system is not designed to alert the operator." HUH?
Not even a beep, like a Lane Departure Warning would.
Once the SDC detected an object, a simple warning beep to alert the safety operator to at least, LOOK UP!



tomatopaste said:


> They [SDC] don't cause accidents.


The big problem I have is that, while SDCs may not be found at fault in an accident, there will be an increase in otherwise avoidable accidents.


----------



## iheartuber (Oct 31, 2015)

Taxi2Uber said:


> The big problem I have is that, while SDCs may not be found at fault in an accident, there will be an increase in otherwise avoidable accidents.


There seem to be a handful of "catch-22" scenarios where robots get into accidents that a human in the same situation would not.

On the one hand, not really that many, but on the other hand robots are "supposed" to have "zero" accidents.


----------



## jocker12 (May 11, 2017)

Taxi2Uber said:


> My thought is the computer control disengaged to manual control once the operator handled the steering wheel less than a second before impact, causing the car to coast, reducing its speed slightly at impact.


I hear you, but unfortunately neither the report, Uber, nor the human monitor makes it clear how the disengagement takes place - by moving the steering wheel, by touching the brake pedal, by hitting a button inside the vehicle, or by doing something else.

The deceleration is too significant (43 mph to 39 mph in less than a second) to convince me it was done without engaging the brakes. Unless the vehicle was going uphill (it was not) or against a strong headwind (it was not either), it is difficult to believe that a relatively new and well-maintained SUV lost inertia/momentum at 4 mph per second (at that rate it would have completely stopped in 11 seconds). I don't know.... The NTSB preliminary report completely ignores this detail.
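
A sketch of the deceleration implied by those figures, against a rough coasting value. The 0.5 m/s² coasting figure is an assumed order of magnitude for a large SUV at around 40 mph, not a measured number for this XC90:

```python
# Deceleration implied by 43 -> 39 mph in "less than a second",
# compared with an assumed plain-coasting figure. COAST_A is a rough
# order-of-magnitude guess, not a measured value for this vehicle.
MPH_TO_MS = 0.44704

dv = (43 - 39) * MPH_TO_MS      # speed lost, ~1.79 m/s
dt = 1.0                        # use 1 s as an upper bound on the interval
implied_a = dv / dt             # lower bound on the actual deceleration

COAST_A = 0.5                   # assumed coasting deceleration, m/s^2
print(round(implied_a, 2))      # 1.79
print(round(implied_a / COAST_A, 1))  # 3.6 -> several times plain coasting
```

At roughly 1.8 m/s², the slowdown is several times what simple coasting drag would plausibly produce, which is why the lack of an explanation in the preliminary report stands out.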



Taxi2Uber said:


> This is incorrect. You should edit this.
> The report states "Toxicology test results for the pedestrian were positive for methamphetamine and marijuana."


Correct, my bad. I'll ask the mods to do it. Thank you for the heads up.


----------



## Rakos (Sep 2, 2014)

tomatopaste said:


> Nope. The weak link in all the so-called "driverless" deaths has been the human. Anytime someone dies in a Tesla with Autopilot engaged, Tesla blames the human. The only "self-driving" death other than in a Tesla was the Uber crash. What do both of these companies have in common? Both rely on investor cash to keep the lights on and both are cutting corners. Uber reduced the number of lidar sensors from 7 to only 1 and the number of backup drivers from 2 to 1. They also had no way to warn the human backup driver when the system sees a potential false positive. This is unconscionable.
> 
> Waymo drives almost a million miles a month on everyday roads with their self-driving cars and not so much as a fender bender.


You are wrong about Waymo's record...

They are having collisions too...

https://www.theverge.com/2018/5/4/17320936/waymo-self-driving-car-crash-arizona

Isaac Asimov spelled out...

The three primary rules for robots...


The First Law of Robotics: A robot may not injure a human being *or, through inaction, allow a human being to come to harm.*

*Major FAIL!!!*

The Second Law of Robotics: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

The Third Law of Robotics: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I worked in robotics R & D...

This rule was ALWAYS followed...

They need to read their history books...

Rakos


----------



## jocker12 (May 11, 2017)

Rakos said:


> You are wrong about Waymo's record...
> 
> They are having collisions too...
> 
> ...


He knows about it and accepts Waymo is a failure
https://uberpeople.net/threads/inside-google-everyone-drinks-the-kool-aid.260870/#post-3960208


----------



## Taxi2Uber (Jul 21, 2017)

jocker12 said:


> I hear you, but unfortunately neither the report, Uber, nor the human monitor makes it clear how the disengagement takes place - by moving the steering wheel, by touching the brake pedal, by hitting a button inside the vehicle, or by doing something else.
> 
> The deceleration is too significant (43 mph to 39 mph in less than a second) to convince me it was done without engaging the brakes. Unless the vehicle was going uphill (it was not) or against a strong headwind (it was not either), it is difficult to believe that a relatively new and well-maintained SUV lost inertia/momentum at 4 mph per second (at that rate it would have completely stopped in 11 seconds). I don't know.... The NTSB preliminary report completely ignores this detail.


It would be interesting to find out the reasons for the speed decrease. Hopefully the final report will address it.

In the meanwhile, we can speculate.
So the speed may not be an exact number. Is 39 mph actually 39.9, making the deceleration closer to 3?

The Volvo was a hybrid, I believe. Deceleration is increased due to regenerative braking (if equipped) compared to "standard" car.
(I unscientifically tested this on a Prius cruising at 43 mph. I counted "one one-thousand" and the mph dropped 1, maybe 2. I tested again using the "B" selector and the mph dropped 2, maybe 3.)
Did the Volvo have/use a similar setting? Is the regenerative drag greater than a Prius's?

When do the investigators consider impact? At first touch? Or at the full resistance of 150 lbs of bike, bags, and person?
That has to account for 1 mph or a fraction of one.

The operator pulled the steering wheel slightly to the right. Does that account for a fraction of an mph?

Total all that, and 3-4 mph decrease is plausible, I suppose.


----------



## jocker12 (May 11, 2017)

Taxi2Uber said:


> It would be interesting to find out the reasons for the speed decrease. Hopefully the final report will address it.
> 
> In the meanwhile, we can speculate.
> So the speed is not an exacting description of the number. Is 39 mph actually 39.9 making deceleration closer to 3?
> ...


You are entirely correct and taking control over the car could be as easy as moving the steering wheel, as shown in many videos where monitors are holding their hands very close to the steering wheel on the lower side of it.

However, the preliminary report states "The self-driving system data showed that the vehicle operator intervened *less than a second before the impact* by engaging the steering wheel" (page 3, right under Figure 2). IMO that deceleration is way too abrupt to be caused by accumulated drag, tire-asphalt friction and regenerative braking (a very correct observation, but this picture shows no hybrid XC90 was used - the vehicle does not have the badge that clearly states "Twin Engine"). I admit the NTSB mentions that braking was done only by the human monitor after the impact, and to me, that is a big question mark.

When it comes to numeric values, I think they round to the closest whole number (39.9 reported as 40 and 39.4 reported as 39), but I don't know their exact methodology.


----------

