Driverless cars will zip and hum on America’s streets sooner than you may realize. At least 18 automakers, including companies like Ford, GM, Toyota, and Tesla, have invested millions of dollars into autonomous vehicle (AV) research. Test vehicles currently operate in Pittsburgh and Phoenix, and it won’t be long before fleets of driverless rideshare cars hit the streets. Twenty-two states already have driverless car laws, and Congress will likely pass a federal law that both regulates and encourages self-driving vehicles.
What You Need to Know about Driverless Cars
To date, self-driving test cars have been in the spotlight due to vehicle accidents and at least one death on public roads. Other concerns lurk below the surface. As the number of driverless cars grows, computer glitches, cyber attacks, and issues related to integration with existing vehicles are a cause for concern.
Knowledge is power, so it’s important to understand the issues involved with autonomous vehicles – a catchall term used to describe vehicles where human occupants are simply passengers and need never be involved in driving.
Autonomous Vehicle Definitions
The federal National Highway Traffic Safety Administration (NHTSA) and driverless car developers have adopted the Society of Automotive Engineers (SAE) International definitions for levels of driving automation. This system describes a methodical increase in automation, from no automation at all to the ultimate driverless car.
Levels of Automation Further Defined:
Level 0 — The human driver does all the driving.
Level 1 — A vehicle’s advanced driver assistance system (ADAS) can assist the human driver with steering or braking/accelerating.
Level 2 — A vehicle’s ADAS can control both steering and braking/accelerating under some circumstances. The human driver must continue to monitor the driving environment at all times and perform other tasks associated with driving.
Level 3 — A vehicle’s automated driving system (ADS) can perform all driving tasks under some circumstances. At those times, the human driver must be ready to take back control at the request of the ADS. Most of the time, though, a human driver is literally and figuratively behind the wheel.
Level 4 — A vehicle’s ADS can perform all driving tasks and monitor the driving environment – essentially, do all the driving – in certain circumstances. The human driver doesn’t have to pay attention during those times.
Level 5 — A vehicle’s ADS can do all the driving in all circumstances. The human occupants are just passengers and never need to be involved in driving.
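To make the practical differences between these levels concrete, here is a minimal sketch, in Python, of how a developer might encode the one question that matters most to a human occupant: do I still have to watch the road? The enum follows the SAE levels above, but the rule itself is an illustrative simplification, not part of any standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE levels of driving automation summarized above."""
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # ADAS helps with steering OR braking/accelerating
    PARTIAL_AUTOMATION = 2      # ADAS handles steering AND braking/accelerating at times
    CONDITIONAL_AUTOMATION = 3  # ADS drives at times; human must take over on request
    HIGH_AUTOMATION = 4         # ADS does all the driving in certain circumstances
    FULL_AUTOMATION = 5         # ADS does all the driving, everywhere, all the time

def human_must_watch_road(level: SAELevel) -> bool:
    """Simplified rule of thumb: through Level 2 the human must monitor the road
    at all times; from Level 3 up, the ADS monitors it (though a Level 3 driver
    must still be ready to take back control when asked)."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(human_must_watch_road(SAELevel.PARTIAL_AUTOMATION))  # True  - the driver still watches
print(human_must_watch_road(SAELevel.HIGH_AUTOMATION))     # False - the ADS monitors the road
```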
The evolution of driverless cars continues at a steady pace. Recent models contain some autonomous components, such as lane assist and parking assist, that raise these vehicles to Level 2. Some developers plan to skip Level 3 entirely. An excellent 2017 Mashable list of manufacturer timelines, from Audi to Volvo, reveals target dates for Level 4 vehicles (but not for Level 5).
The Inner Workings of Driverless Cars
At Level 5, sensors, GPS, and computers replace the human brain. The sensors gather information about the driving environment and continually feed it to the vehicle’s computerized ADS. In turn, the ADS’s decisions direct the car. In other words, you could sleep in the back seat while traveling from New York to Miami, presumably with a gentle voice awakening you at programmed pit stops.
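As a rough illustration of that sense-decide-act cycle, the sketch below shows the shape of the loop an ADS repeats many times per second. Every name in it (read_sensors, plan_maneuver, apply_controls) is a hypothetical placeholder, not any manufacturer’s actual software.

```python
import time

def read_sensors() -> dict:
    """Placeholder: collect the latest camera, radar, LiDAR, ultrasonic, and GPS readings."""
    return {"obstacles": [], "lane_offset": 0.0, "gps_fix": (40.7128, -74.0060)}

def plan_maneuver(world_state: dict) -> dict:
    """Placeholder: the ADS decides what to do with its updated picture of the road."""
    if world_state["obstacles"]:
        return {"throttle": 0.0, "brake": 1.0, "steer": 0.0}  # something ahead: stop
    return {"throttle": 0.3, "brake": 0.0, "steer": -world_state["lane_offset"]}

def apply_controls(command: dict) -> None:
    """Placeholder: send the steering, braking, and throttle commands to the actuators."""
    print(command)

def drive(cycles: int = 3, hz: int = 20) -> None:
    """The loop itself: gather data, decide, act, repeated many times per second."""
    for _ in range(cycles):
        world_state = read_sensors()          # sensors update the picture of the environment
        command = plan_maneuver(world_state)  # the ADS makes a decision
        apply_controls(command)               # the decision directs the car
        time.sleep(1 / hz)

drive()
```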
According to a report in “The Drive,” an online automotive site, developers currently use four main types of sensors to obtain information:
Ultrasonic sensors use sound waves to detect obstacles in their immediate vicinity. Typically used for parking, these sensors are considered fully developed and need no further improvement.
Image sensors utilize several cameras, including 3-D models, to capture the environment. Image sensors can read signs and can serve as a backup if another sensor fails. Refinements are needed to increase the detection range to 250 meters, improve accuracy in certain weather conditions, and improve object recognition. For example, present image sensors recognize pedestrians only 95% of the time.
Radar sensors are located around the perimeter of a self-driving car and send out electromagnetic waves that bounce back from obstacles. These sensors can track the speed of other vehicles in real time, but refinements are needed to move from 2-D to 3-D sensors.
LiDAR sensors use harmless laser beams to map the car’s environment. The most expensive of the sensors, LiDAR could be improved by incorporating a flashing beam for greater accuracy.
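To make the division of labor among these sensors concrete, here is a minimal sketch of the kind of fallback logic described above, in which the cameras serve as a backup when another sensor fails. The sensor names, preference order, and readings are hypothetical; real sensor-fusion systems are far more sophisticated.

```python
from typing import Optional

def nearest_obstacle_distance(readings: dict) -> Optional[float]:
    """Pick a distance estimate, preferring LiDAR and radar but falling back to
    the cameras (and finally ultrasonics) if the preferred sensors return nothing."""
    for sensor in ("lidar", "radar", "camera", "ultrasonic"):
        value = readings.get(sensor)
        if value is not None:
            return value
    return None  # no sensor produced a usable reading

# Normal case: LiDAR reports an obstacle 42 meters ahead.
print(nearest_obstacle_distance({"lidar": 42.0, "radar": 43.1, "camera": 41.5}))

# LiDAR and radar both fail; the cameras serve as the backup.
print(nearest_obstacle_distance({"lidar": None, "radar": None, "camera": 40.8}))
```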
The driverless car’s computer can also be improved in order to decrease the amount of electricity it requires. According to CleanTechnica.com, BorgWarner engineers estimate that the computer’s energy requirements are equivalent to the energy required to simultaneously power 50-100 laptops. This electricity demand will seriously decrease the fuel efficiency and range of electric self-driving vehicles.
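A quick back-of-the-envelope calculation shows why that matters. The laptop wattage, battery size, driving efficiency, and average speed below are assumed figures chosen only to illustrate the scale of the problem; only the 50-100 laptop estimate comes from the report above.

```python
LAPTOP_WATTS = 75          # assumed average draw of one laptop, in watts
LAPTOP_COUNTS = (50, 100)  # the 50-100 laptop range attributed to BorgWarner engineers

BATTERY_KWH = 60           # assumed EV battery capacity
DRIVE_WH_PER_MILE = 250    # assumed propulsion energy per mile
AVG_SPEED_MPH = 40         # assumed average speed, to convert compute watts into per-mile energy

for n in LAPTOP_COUNTS:
    compute_kw = n * LAPTOP_WATTS / 1000                     # 3.75 kW to 7.5 kW of compute
    compute_wh_per_mile = compute_kw * 1000 / AVG_SPEED_MPH  # energy the computer burns per mile
    base_range = BATTERY_KWH * 1000 / DRIVE_WH_PER_MILE
    new_range = BATTERY_KWH * 1000 / (DRIVE_WH_PER_MILE + compute_wh_per_mile)
    print(f"{n} laptops: ~{compute_kw:.1f} kW of compute, "
          f"range falls from ~{base_range:.0f} to ~{new_range:.0f} miles")
```

Under these illustrative assumptions, the onboard computer alone would cut the car’s range by more than a quarter, which helps explain why Ford, as noted below, prefers to start with hybrids.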
Without the development of a more power-efficient computer, the first fleets of Level 4 or Level 5 rideshare driverless cars will not be clean electric cars. Ford agrees. In October 2017, Jim Farley, Ford’s president of global markets, told investors, “If you are trying to maximize your utilization [of a driverless car, a battery electric car] is really restrictive for your business.” Farley said that Ford believes hybrids are “the right tech to start with.”
Four Major Traffic Safety Concerns About Autonomous Cars
These are the top four safety concerns for driverless cars:
Computer error, causing the ADS to fail. Maybe bad weather caused one of the sensors to mislabel a pedestrian as a trash can. Perhaps the GPS sent outdated information. Or it could be a garden-variety, who-knows-the-reason computer error. According to the U.S. Department of Labor, almost 600,000 computer repair technicians make their living fixing computer errors. Unlike an error on your laptop, an ADS error could cause a fatal accident.
The blending of driver-controlled and driverless cars, as autonomous vehicles face increasingly unusual and challenging situations. An Uber test car, programmed to obey all laws, stopped rather than speed onto the freeway, bringing traffic in the merge lane to a sudden halt. One tech expert suggested retrofitting self-driving cars with connected car technology to provide new safety features. The recent Governors Highway Safety Association (GHSA) report, “Autonomous Vehicles Meet Human Drivers: Traffic Issues for States,” delivers practical advice to states as they grapple with issues associated with autonomous vehicles, urging them to embrace the technology, maintain flexibility, and refrain from rushing to pass laws or establish regulations.
Cyber-attacks. In 2017, cybercriminals stole the private information of 143 million Americans, and the WannaCry and Petya ransomware attacks hit offices, hospitals and healthcare facilities, and global shipping. Some regard autonomous cars as the future’s most likely cybersecurity headache. A thriller screenwriter could easily outline six scary scenarios over lunch.
Ambiguous language for Level 4 driverless cars. According to the Mashable timeline list, Level 4 represents the realistic target for manufacturers of autonomous cars. The ambiguous language in Level 4’s description will likely create liability issues that will ultimately be resolved by the courts.
Unanswered Questions
“A vehicle’s ADS can perform all driving tasks and monitor the driving environment – essentially, do all the driving – in certain circumstances. The human driver doesn’t have to pay attention during those times.”
What defines “essentially”? What are the “certain circumstances”? How does the human know what to do, and what not to do? And what are the consequences? Those questions remain unanswered, as does the question of how insurers will handle driverless car coverage and claims that arise from autonomous vehicles.
In September 2017, NPR’s “All Tech Considered” summarized the opinions of insurance and legal authorities: “The introduction of autonomous vehicles raises novel new issues regarding fault and liability which will be resolved by the courts on a case-by-case basis. This will create the data necessary for the insurance companies to set premiums which may, or may not, fall entirely on the manufacturers.”
Existing Driverless Car Regulations
Currently, there are not many driverless car regulations, but many interested parties, including manufacturers and developers, are prodding Congress to enact national standards. They fear that, in the absence of national public policy, states will pass their own laws and create an impossible patchwork of differing standards.
The House passed the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution Act, thankfully better known as the SELF DRIVE Act, in September 2017. In October, the Senate Commerce Committee approved its version, called the American Vision for Safer Transportation through Advancement of Revolutionary Technologies Act, or AV START. This bill is awaiting approval by the full Senate. If the Senate passes the bill, the differences between the House and Senate versions must be reconciled. The final bill must also be reconciled with the current Trump Administration policy, released in September by U.S. Transportation Secretary Elaine Chao, favoring voluntary safety guidelines rather than governmental regulations.
The two bills contain similar provisions, including:
States cannot pass their own laws regulating the performance of driverless cars
Privacy protection provided for users
Cybersecurity protection and notification requirements
Exemptions from current Federal Motor Vehicle Safety Standards, which apply to conventional cars
Notably, the Senate bill does not apply to autonomous trucks.
Pros, Cons, Cautions, and Controversies Surrounding Autonomous Cars
Generally, the technology and finance sectors love driverless cars. Consumer advocates, on the other hand, have concerns and reservations. For a complete and straightforward analysis of the future of autonomous vehicles, read “Autonomous Vehicle Implementation Predictions” released in December 2017 by the Victoria Transport Policy Institute. The Institute is an independent research organization dedicated to developing innovative and practical solutions to transportation problems.
The report notes, “The analysis indicates that some benefits, such as more independent mobility for affluent non-drivers, may begin in the 2020s or 2030s, but most impacts, including reduced traffic and parking congestion (and therefore infrastructure savings), independent mobility for low-income people (and therefore reduced need for public transit), increased safety, energy conservation and pollution reductions, will only be significant when autonomous vehicles become common and affordable, probably in the 2040s to 2060s.”
The public generally does not support driverless cars at this time, according to a compilation of five surveys attached as an appendix to the GHSA’s 2017 “Autonomous Vehicles Meet Human Drivers” report. The biggest concern expressed was the degree of vehicle automation. The Kelley Blue Book survey showed only 13% of respondents preferred a Level 5 vehicle, and 80% wanted the driver to be able to take control of the car.
This hesitance was confirmed by a survey conducted by the American Automobile Association (AAA) showing that 75% of Americans are afraid to ride in a self-driving car. But the AAA notes that 35,000 people die on American roads each year, primarily because of human error. The organization believes that safely tested and deployed autonomous cars can dramatically reduce this number.
In 2017, the global consulting firm Deloitte updated its 2014 report on autonomous cars. “What’s Ahead for Fully Autonomous Driving” identifies the marks a manufacturer must hit to draw customers to its Level 4 or Level 5 product. The report compared automotive consumers in six countries and found that, in the U.S., 74% of consumers felt that fully self-driving vehicles would not be safe. However, manufacturers should be encouraged by other findings related to U.S. consumers:
47% would trust existing car manufacturers, compared to 20% for tech companies
68% want an established track record for a safe fully self-driving car
54% are more likely to buy from a brand they trust
The road to success is paved with safety. The top four tech features desired by U.S. consumers relate to safety:
Recognizes objects on road and avoids a collision
Informs driver of dangerous driving situations
Blocks driver from dangerous driving situations
Takes steps in a medical emergency or accident.
Major Players on the U.S. Autonomous Car Superhighway
Major Players
The Car Manufacturers — They have the capacity, the technology, the experience, the brand names, and the sales forces. But can they deliver a truly safe, affordably-priced car?
The Consumer — The consumer is in the driver’s seat and has a show-me attitude. They have their doubts about driverless cars and want visible proof, not promises. They remember the Corvair and the Pinto. They will not get in a car that has not been proven safe.
The Federal Government — Congress wants the feds to be in control but wants to withhold regulations that might slow development. Will the feds understand that consumers will not gamble their lives by riding in a car not proven to be safe?
The Courts — The courts will eventually decide who is responsible when something goes awry, and a driverless car causes death, injury, and property damage. They may also decide issues such as what constitutes a safe driverless car and where these cars may operate.
Minor Players
States — Congress will likely strip states of the power to regulate driverless cars.
Insurance companies — Insurers’ importance will increase as data dictates what should be insured, who should bear losses, and how much insurance should cost.
Tech companies and rideshares — Both are necessary and important, but they are the chorus rather than the soloists.
Defining Success for the Autonomous Car
To move from the hypothetical to the practical, manufacturers must build cars that educated consumers believe are safe. There are many hurdles to jump before that scenario becomes a reality.
UPDATE:
In what is likely to be the first lawsuit regarding an accident with a fully autonomous vehicle, Lemberg Law sues GM on behalf of its client for negligence.
March 19, 2018. On February 24, 2018, just one month after Mr. Nilsson’s case was filed, an Uber self-driving car collided with another vehicle in Pittsburgh, Pennsylvania. No one was seriously injured, but the incident caused serious damage to both vehicles. The driver of the other vehicle involved in the accident is a mother of three children.
Now, only a month later, the first confirmed pedestrian fatality caused by a self-driving car has been reported. On March 18, 2018, another self-driving Uber vehicle struck and killed a pedestrian in Tempe, Arizona. The vehicle was in autonomous mode when it struck the woman, who was crossing the street. The pedestrian, identified by police as 49-year-old Elaine Herzberg, was rushed to a hospital, where she died of her injuries.
Hours after the crash, Uber announced the suspension of all tests of its autonomous vehicles in Pittsburgh, Phoenix, San Francisco and Toronto, but it is unlikely that these will be the last incidents caused by self-driving cars.
About the Author:
Sergei Lemberg is a lawyer whose practice focuses on consumer law, class actions and personal injury litigation. He has been repeatedly recognized as the “most active consumer attorney” in the country. In 2020, Mr. Lemberg represented Noah Duguid in the United States Supreme Court in the case entitled Duguid v. Facebook. He is the author of Defanging Debt Collectors, a book that teaches consumers how to battle debt collectors and win.