# All the things that still baffle self-driving cars, starting with seagulls



## jocker12 (May 11, 2017)

To watch a self-driving car park itself seems like magic. Pull back the curtain, though, and it's a lot messier. Cars mistake snowflakes for obstacles, lose track of lane markings, and miss cars stopped on the side of the road.

Engineers are racing to make cars perform better than humans, with the aim of saving millions of lives each year. Human error is to blame for 94% of annual US traffic fatalities, according to the US National Center for Statistics and Analysis. Autonomous vehicles promise to prevent most of them. Even today's off-the-shelf features, such as lane departure alerts (now widely available), could cut fatal crash rates by 86%, estimates the Insurance Institute for Highway Safety (IIHS).

Yet we're still a long way from "self-driving," despite marketing to the contrary. Driver-assist technologies capable of steering, braking and following traffic rules (with human oversight) are now entering the market, led by Tesla. Still, "it's important to note that none of these vehicles is capable of driving safely on its own," says David Zuby, IIHS chief research officer. "A production autonomous vehicle that can go anywhere, anytime isn't available at your local car dealer and won't be for quite some time." IIHS ran five leading brands' systems through track tests over hills and curves. None crashed, but almost all of them missed the mark multiple times by crossing or touching lane lines, or disengaging during driving.

That's still impressive, and machine learning continues to revolutionize what's possible. Uber and Alphabet's Waymo are ferrying passengers in self-driving vehicles (with safety drivers) in cities from Pittsburgh, Pennsylvania, to Phoenix, Arizona. Yet fully self-driving cars may come first to retirement homes, corporate campuses and private communities: controlled environments where computers can easily map their world. "I challenge any car company to drive through a complex urban environment without a driver under any weather conditions," says Ryan Chin, co-founder and CEO of Optimus Ride, which reportedly has a dozen or so campuses and communities ready to pilot its self-driving technology. "We're not there yet as an industry. Even the best systems aren't there yet."

What fools today's semi-autonomous cars? Raindrops, seagulls, and even masking tape all throw algorithms for a loop. Quartz assembled some of the most prominent challenges for self-driving cars below.

*Altered stop signs:*
Computer science researchers subtly altered stop signs to see whether minor changes, ones a human driver might not even notice, could confuse self-driving cameras. Fake graffiti caused algorithms to misidentify the stop sign as a speed limit sign two-thirds of the time, while applying strips of random tape, which the researchers called an "abstract art sticker attack," resulted in misclassification 100% of the time.
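The researchers attacked a trained vision model, but the underlying idea can be shown with a deliberately tiny linear classifier. Everything below (weights, feature values, step size) is invented for illustration and is not the paper's method: a small nudge in the direction that favors the wrong class flips the decision while barely changing the input.

```python
# Toy illustration of a targeted misclassification attack on a
# hand-made two-class linear scorer. All numbers are invented.

W_STOP  = [2.0, -1.0, 0.5]   # score weights for "stop sign"
W_SPEED = [1.0,  1.0, 0.5]   # score weights for "speed limit"

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def predict(x):
    return "stop" if score(W_STOP, x) > score(W_SPEED, x) else "speed limit"

x = [1.0, 0.2, 0.3]          # clean input: clearly scored as a stop sign
print(predict(x))            # stop

# Nudge each feature slightly in the direction that favors "speed limit",
# the same way graffiti or tape patches nudge a camera image:
diff = [s - t for s, t in zip(W_SPEED, W_STOP)]          # [-1.0, 2.0, 0.0]
x_adv = [xi + 0.4 * (1 if d > 0 else -1 if d < 0 else 0)
         for xi, d in zip(x, diff)]
print(predict(x_adv))        # speed limit
```

Real classifiers are nonlinear, but the same logic applies: the attacker follows the model's own gradient toward the target class.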

*Falling snowflakes:*
Snowflakes and raindrops are notorious for scattering sensors' signals, creating the illusion of obstacles all around the vehicle. Algorithms that use lasers to paint a high-resolution 3D map of the environment are getting better at differentiating between H2O and solid objects, but winter remains one of self-driving cars' biggest challenges. Snow obscures where computers perceive the road to start, and alters tire traction. "In a lot of [cold and temperate] regions, it's going to be a lot longer before we see autonomous vehicles than some people would like you to believe," says Sam Abuelsamid of Navigant Research. "You're not going to have autonomous vehicles running around Toronto in the wintertime in 2020."

*Seagulls:*
Birds, too, can confound computers. In Boston, NuTonomy had to reprogram its cars to disperse stubborn seagulls. "For the local breed of unflappable seagulls (which can stop autonomous cars by simply standing on the street, unbothered by NuTonomy's quiet electric cars), engineers programmed the machines to creep forward slightly to startle the birds," reports Bloomberg.

*Foam:*
Researchers at the University of South Carolina disoriented a Tesla Model S by covering obstacles in sound-dampening foam so that its ultrasonic sensors did not detect them. Similarly, $40 worth of Arduino computers and an ultrasonic transducer (for generating sound waves) could trick a Tesla into avoiding a parking spot, or jam its ultrasonic sensors so they missed actual obstacles at close range.

*Exiting vehicles:*
Cars orient themselves using other cars. That's fine at higher speeds on the highway, but may lead to an unexpected swerve as cars begin to follow another car onto off-ramps. "When a car is traveling too slow to track lane lines, active lane-keeping systems use the vehicle in front as a guide," IIHS states. "If the lead vehicle exits, the trailing car might, too."
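IIHS's description implies a simple fallback rule: below some speed, the lane-keeping controller steers toward the lead vehicle instead of the lane lines. A minimal sketch, with an invented speed threshold and function names (no vendor's actual controller), shows why an exiting lead car can drag a trailing car off the highway:

```python
# Hypothetical lane-keeping fallback: follow lane lines when possible,
# otherwise follow the vehicle ahead. Threshold is invented.

LANE_TRACKING_MIN_SPEED = 15.0  # m/s; assumed, not a real specification

def steering_target(speed, lane_center, lead_vehicle_offset):
    """Return the lateral offset (meters) the controller steers toward.

    lane_center: offset of the detected lane center, or None if lines are lost.
    lead_vehicle_offset: lateral offset of the vehicle ahead, or None.
    """
    if speed >= LANE_TRACKING_MIN_SPEED and lane_center is not None:
        return lane_center           # normal case: center on the lane lines
    if lead_vehicle_offset is not None:
        return lead_vehicle_offset   # fallback: follow the car ahead
    return 0.0                       # no reference at all: hold heading

# At low speed, a lead vehicle drifting toward an off-ramp pulls the
# steering target with it:
print(steering_target(8.0, lane_center=0.0, lead_vehicle_offset=1.5))  # 1.5
```
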

*Hills:*
IIHS test drivers in the hills of Central Virginia found even advanced driver assistance systems could miss lane markings as vehicles crested hills. Without visibility ahead, cars swerved left and right to find the center of the lane, alarming drivers who were not warned to assume control of the vehicle.

*Bridges:*
Bridges are a black box for autonomous cars, reports Electronic Component News. Because bridges lack many of the environmental cues present on roads, they can prevent sensors from keeping the vehicle on track. The magazine compared it to "walking a straight line from one end to the other in a massive room, and the lights go out when you're halfway across. While you don't see anything, you have a general idea of the direction to continue, but are very susceptible to getting thrown off-course."

*Tree shadows:*
Tesla's Model 3 made "unnecessary or overly cautious" braking maneuvers 12 times in 180 miles. Seven of those occurred where trees cast shadows on the road, while the rest involved oncoming vehicles in another lane or vehicles crossing the road far ahead. "The braking events we observed didn't create unsafe conditions because the decelerations were mild and short enough that the vehicle didn't slow too much," IIHS says. "However, unnecessary braking could pose crash risks in heavy traffic, especially if it's more forceful. … Plus, drivers who feel that their car brakes erratically may choose not to use adaptive cruise control and would miss out on any safety benefit from the system."

https://qz.com/1397504/all-the-things-that-still-baffle-self-driving-cars-starting-with-seagulls/


----------



## HotUberMess (Feb 25, 2018)

SDCs use other cars to orient themselves? What happens when the test road is full of SDCs? When a few malfunction then they all malfunction?


----------



## jocker12 (May 11, 2017)

HotUberMess said:


> SDCs use other cars to orient themselves? What happens when the test road is full of SDCs? When a few malfunction then they all malfunction?


The software identifies the surrounding cars in motion and navigates the best path. Based on the surrounding objects' motion, speed, and shape, the software can classify them as pedestrians, bicycles, motorcycles, scooters, horses, cows, and many more (provided they present themselves the way they appeared in the data sets used to train the image recognition software).

All stationary surrounding objects are obstacles, and image recognition should be able to identify what they are and, subsequently, whether they could suddenly move into the car's path and become active traffic participants.
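That kind of speed-and-shape sorting can be sketched as a toy rule set. The thresholds and categories below are invented for illustration; real systems rely on trained perception models operating on camera and lidar data, not hand-written rules:

```python
# Toy classifier for tracked objects, keyed on ground speed and size.
# All thresholds are invented; no vendor's pipeline works this simply.

def classify(speed_mps, length_m):
    """Label a tracked object from its speed (m/s) and length (m)."""
    if speed_mps < 0.2:
        # Stationary objects are treated as obstacles; the open question
        # is whether they might move into the car's path later.
        return "stationary obstacle"
    if length_m < 1.0:
        return "pedestrian" if speed_mps < 3.0 else "bicycle/scooter"
    if length_m < 3.0:
        return "motorcycle"
    return "vehicle"

print(classify(1.5, 0.5))    # pedestrian
print(classify(20.0, 4.5))   # vehicle
print(classify(0.0, 4.5))    # stationary obstacle
```

The hard part, as the thread points out, is exactly the first branch: deciding which "stationary obstacles" are about to become traffic participants.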

I assume this is a major problem inside cities and a lesser issue on the highway (even though on the highway such a drastic change could have a more substantial impact). The posted article is specifically about lidar sensors' accuracy regarding *other traffic participants' traveling speed*, but the title is misleading, making people believe it is about solving SDCs' limited low-speed problem.

From "Why Tesla's Autopilot Can't See a Stopped Firetruck":
"Raj Rajkumar, who researches autonomous driving at Carnegie Mellon University, thinks those assumptions concern one of Tesla's key sensors. "The radars they use are apparently meant for detecting moving objects (as typically used in adaptive cruise control systems), and seem to be not very good in detecting stationary objects," he says.

That's not nearly as crazy as it may seem. Radar knows the speed of any object it sees, and is also simple, cheap, robust, and easy to build into a front bumper. But it also detects lots of things a car rolling down the highway needn't worry about, like overhead highway signs, loose hubcaps, or speed limit signs. *So engineers make a choice, telling the car to ignore these things and keep its eyes on the other cars on the road: They program the system to focus on the stuff that's moving.*

This unsettling compromise may be better than nothing, given evidence that these systems prevent other kinds of crashes and save lives. And it wouldn't be much of a problem if every human in a semi-autonomous vehicle followed the automakers' explicit, insistent instructions to pay attention at all times, and *take back control if they see a stationary vehicle up ahead.*

The long-term solution is to combine several sensors with different abilities, plus more computing power. Key among them is lidar. These sensors use lasers to build a precise, detailed map of the world around the car, and can easily distinguish between a hubcap and a cop car. The problem is that compared to radar, lidar is a young technology. It's still very expensive, and isn't robust enough to survive a life of hitting potholes and getting pelted with rain and snow. Just about everybody working on a fully self-driving system, the kind that doesn't depend on lazy, inattentive humans for support, plans to use lidar, along with radar and cameras."
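The "focus on the stuff that's moving" compromise described in the quote can be sketched in a few lines. This is illustrative only, not Tesla's actual radar pipeline; the threshold and names are invented. Radar reports each return's closing speed relative to the car, so a stationary object closes at exactly the car's own speed and gets filtered out along with overhead signs and parked cars:

```python
# Hypothetical moving-object filter over raw radar returns.

def moving_targets(ego_speed, radar_returns, min_ground_speed=1.0):
    """Keep only returns from objects that are actually moving.

    radar_returns: list of (range_m, closing_speed_mps) tuples.
    A stationary object closes at ego_speed, so its inferred ground
    speed is ~0 and it is dropped -- including a stopped firetruck.
    """
    kept = []
    for rng, closing_speed in radar_returns:
        ground_speed = abs(ego_speed - closing_speed)
        if ground_speed >= min_ground_speed:
            kept.append((rng, closing_speed))
    return kept

# At 30 m/s, a stopped truck (closing at 30 m/s) is filtered out,
# while a car doing 25 m/s (closing at 5 m/s) is tracked:
print(moving_targets(30.0, [(80.0, 30.0), (60.0, 5.0)]))  # [(60.0, 5.0)]
```

The sketch makes the failure mode obvious: any object whose ground speed is near zero is invisible to the tracker by design, regardless of how large or dangerous it is.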

To answer your question: if the software works correctly and the cars are not communicating with each other, then when a few other cars malfunction, each vehicle will adjust to the traffic conditions (cars slowing down or moving erratically in its path) and, based on its position relative to the obstacles and the time needed to avoid a collision, perform as designed. In an ideal situation, with cars communicating with each other, the software would probably have time to avoid any collision or pull over to a full stop due to inoperable traffic conditions.

But this is only theory, never demonstrated in real practice. Not even close.


----------



## goneubering (Aug 17, 2017)

jocker12 said:


> To watch a self-driving car park itself seems like magic. Pull back the curtain, it's a lot messier. Cars mistake snowflakes for obstacles, lose lane markings, and miss cars on the side of the road.
> 
> Engineers are racing to make cars perform better than humans, with the aim of saving millions of lives each year. Human error is to blame for 94% (pdf) of annual US traffic fatalities, according to the US National Center for Statistics and Analysis. Autonomous vehicles promise to prevent most of them. Even today's off-the-shelf features, such as lane departure alerts (now widely available), could cut fatal crash rates by 86%, estimates the Insurance Institute for Highway Safety (IIHS).
> 
> ...


Couldn't we just ban seagulls for the safety of humanity??!!


----------



## uberdriverfornow (Jan 10, 2016)

HotUberMess said:


> What happens when the test road is full of SDCs?


"when" ???


----------



## jocker12 (May 11, 2017)

How stupid is AI, really?

Check it out

Burger King's AI-written ads are beautiful disasters - https://mashable.com/article/burger-king-ai-ads-beautiful-disaster/#rPyoA6IB0mqJ

"Burger King is releasing a series of ads that were apparently written by a deep-learning algorithm. The restaurant chain dubbed the project 'Agency of Robots.' One of the ads declares, 'Gender reveal bad. Tender reveal young. It is a boy bird with crispy chicken tenders from Burger Thing.'"

"A commercial about their signature Whoppers says, 'The Whopper lives in a bun mansion just like you. Order yourself today ... Have it Uruguay.'"

"In another, the AI decides that 'Burger King's new chicken fries are the new potato.' 'We are not sorry,' the robotic narration intones. 'The potato deserved this.'"

"According to a press release, Burger King used 'high-end computing resources and big data' to train an artificial neural network with 'advanced pattern recognition capabilities.' They fed thousands of fast food commercials to the AI, and it spit out phrases like 'The chicken crossed the road to become a sandwich.' Sure, it's not poetic, but the ads definitely get your attention."


----------



## heynow321 (Sep 3, 2015)

wow. AI and machine learning are this bubble's buzzwords. What a joke


----------



## jocker12 (May 11, 2017)

heynow321 said:


> wow. AI and machine learning are this bubble's buzzwords. What a joke


Yup, it's only computer-vision text recognition. There is no thinking, no grammar or logic, in it. Only words strung together in a sub-logical order, mimicking what the software learned from its image data sets.

And people believe software trained in similar ways can safely take over driving a car through a city. That is painfully naive.


----------



## goneubering (Aug 17, 2017)

jocker12 said:


> How stupid is AI, really?
> 
> Check it out
> 
> ...


That's hilarious!!


----------



## Fuzzyelvis (Dec 7, 2014)

jocker12 said:


> To watch a self-driving car park itself seems like magic. Pull back the curtain, it's a lot messier. Cars mistake snowflakes for obstacles, lose lane markings, and miss cars on the side of the road.
> 
> Engineers are racing to make cars perform better than humans, with the aim of saving millions of lives each year. Human error is to blame for 94% (pdf) of annual US traffic fatalities, according to the US National Center for Statistics and Analysis. Autonomous vehicles promise to prevent most of them. Even today's off-the-shelf features, such as lane departure alerts (now widely available), could cut fatal crash rates by 86%, estimates the Insurance Institute for Highway Safety (IIHS).
> 
> ...


So to sum it up:

If all stop signs (and presumably other signs) are perfect, there are no seagulls around, no snowflakes or snow, no hills, bridges or tunnels, no tree shadows, no cars to blindly follow, and of course no asshole with a $40 transducer, SDCs are mostly ok?

But don't stand still. Hard to see you if you're not moving...


----------



## tohunt4me (Nov 23, 2015)

Seagulls Outmaneuver Technologically Wondrous ROBO CARS !

The Transhumanists are DOOMED !

Long before Solar Flares drive them extinct . . .

One good E.M.P. and the Amish will Rule the Country !


----------



## jocker12 (May 11, 2017)

goneubering said:


> That's hilarious!!


They need a new name for it - Artificial Stupidity.


----------



## jocker12 (May 11, 2017)

And

Google admits chatbots were a bad idea - https://www.fastcompany.com/9024377...tm_content=rss&utm_medium=feed&utm_source=rss

"Simulating a back-and-forth conversation was supposed to be more efficient than poking around in traditional apps, and chatbot proponents hyped this model as the future of software design. Google Assistant itself debuted as a feature within Google's Allo messaging app, so you could exchange text messages with the search giant just like you would with a friend.

"When we built the Assistant, you can clearly see inspiration from Allo in what we did, in this chatty back-and-forth model where you're talking with an intelligent assistant," says Chris Perry, the Google product manager who leads Assistant on Android. "And we found that was somewhat restrictive of a model for us. It ended up constraining us in a number of different ways."

I remember seeing some enthusiasts here claiming this technology is linear and is the future. Yeah, only that real Google scientists call that "impotency".


----------

