
Are Self-Driving Cars Ready for the Road? Case Study

Will cars really be able to drive themselves without human operators? Should they? And are they good business investments? Everyone is searching for answers.

Autonomous vehicle technology has reached a point where no automaker can ignore it. Every major automaker is racing to develop and perfect autonomous vehicles, believing that the market for them could one day reach trillions of dollars. Companies such as Ford, General Motors, Nissan, Mercedes, Tesla, and others have invested billions in autonomous technology research and development. GM bought a self-driving car startup called Cruise. Ride-hailing companies like Uber and Lyft believe driverless cars that eliminate labor costs are key to their long-term profitability. (A study conducted by UBS shows that the cost per mile of a self-driving “robo-taxi” will be about 80 percent less than that of a traditional taxi.) Cars that drive themselves have been on the road in select locations in California, Arizona, Michigan, Paris, London, Singapore, and Beijing. Marketing firm ABI predicts that roughly 8 million vehicles with some level of self-driving capabilities will be shipped in 2025. In December 2018, Waymo, a subsidiary of Google’s parent company Alphabet, launched a commercial self-driving taxi service called “Waymo One” in the Phoenix metropolitan area.

A car that is supposed to take over driving from a human requires a powerful computer system that must process and analyze large amounts of data generated by myriad sensors, cameras, and other devices to control and adjust steering, accelerating, and braking in response to real-time conditions. Key technologies include:

Sensors: Self-driving cars are loaded with sensors of many different types. Sensors on the car’s wheels measure its velocity as it drives and moves through traffic. Ultrasonic sensors measure and track the positions of curbs, sidewalks, and objects close to the car.
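As an illustration of the first kind of sensor, a wheel-speed reading can be derived from counting encoder pulses over a short interval. The sketch below is purely hypothetical; the function name, tooth count, and wheel size are made-up parameters, not any vendor’s interface.

```python
# Hypothetical sketch: estimating vehicle speed from wheel-speed sensor
# pulses, assuming a toothed-ring encoder on the wheel.

def wheel_speed_mps(pulses: int, interval_s: float,
                    pulses_per_rev: int = 48,
                    wheel_circumference_m: float = 2.0) -> float:
    """Convert encoder pulses counted over an interval into speed (m/s)."""
    revolutions = pulses / pulses_per_rev
    distance_m = revolutions * wheel_circumference_m
    return distance_m / interval_s

# 24 pulses in 0.1 s with the defaults -> 0.5 rev -> 1.0 m -> 10.0 m/s
```

In a real car, several of these readings per wheel per second feed the odometry that other systems (such as the positioning described below) rely on.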

Cameras: Cameras are needed for spotting things like lane lines on the highway, speed signs, and traffic lights. Windshield-mounted cameras create a 3-D image of the road ahead. Cameras behind the rearview mirror focus on lane markings. Infrared cameras pick up infrared beams emitted from headlamps to extend vision for night driving.

Lidars: Lidars are light detection and ranging devices that sit on top of most self-driving cars. A lidar fires out millions of laser beams every second, measuring how long they take to bounce back. The lidar takes in a 360-degree view of a car’s surroundings, identifying nearby objects with an accuracy up to 2 centimeters. Lidars are very expensive and not yet robust enough for a life of potholes, extreme temperatures, rain, or snow.
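The ranging principle is simple enough to sketch: a lidar times each laser pulse’s round trip and converts it to distance using the speed of light, halving the result because the beam travels out and back. The function name below is illustrative.

```python
# Illustrative sketch of lidar time-of-flight ranging.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_range_m(round_trip_s: float) -> float:
    """Distance to an object given a pulse's round-trip time in seconds."""
    # The pulse travels to the object and back, so halve the path length.
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A pulse returning after 200 nanoseconds indicates an object ~30 m away.
```

Millions of such measurements per second, each paired with the beam’s direction, build the 360-degree point cloud the case describes.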

GPS: A global positioning system (GPS) pinpoints the car’s macro location and is accurate to within 1.9 meters. Combined with readings from tachometers, gyroscopes, and altimeters, it provides initial positioning.
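How those readings might be combined can be sketched with a deliberately simplified blend of a GPS fix and a dead-reckoned estimate from the other sensors. Production systems use Kalman filters over many more inputs; every name and weight here is a hypothetical stand-in.

```python
# Hypothetical sketch of sensor fusion for positioning: blend a noisy GPS
# fix (accurate to a couple of meters) with a dead-reckoned position
# derived from wheel speed and heading. A fixed weight stands in for the
# adaptive gains a real Kalman filter would compute.

def fuse_position(gps_xy, dead_reckoned_xy, gps_weight=0.2):
    """Weighted blend of GPS and dead-reckoning estimates (x, y in meters)."""
    gx, gy = gps_xy
    dx, dy = dead_reckoned_xy
    return (gps_weight * gx + (1 - gps_weight) * dx,
            gps_weight * gy + (1 - gps_weight) * dy)
```

The low GPS weight reflects the idea that dead reckoning is smooth over short intervals while GPS corrects its slow drift.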

Radar: Radar bounces radio waves off objects to help see a car’s surroundings, including blind spots, and is especially helpful for spotting big metallic objects, such as other vehicles.

Computer: All the data generated by these technologies must be combined, analyzed, and turned into a robot-friendly picture of the world, with instructions on how to move through it, a task requiring almost supercomputer-like processing power. The computer’s software features obstacle-avoidance algorithms, predictive modeling, and “smart” object discrimination (for example, knowing the difference between a bicycle and a motorcycle) to help the vehicle follow traffic rules and navigate obstacles.
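A toy version of one such obstacle-avoidance rule, braking when any tracked object would be reached too soon, might look like the following. All names, fields, and thresholds are illustrative, not any automaker’s code.

```python
# Toy sketch of an obstacle-avoidance decision over the fused "picture of
# the world": brake if the time to reach any object falls below a safety
# margin. Real planners weigh many more factors.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str                 # e.g., "bicycle", "motorcycle", "car"
    distance_m: float         # gap to the object
    closing_speed_mps: float  # positive when the gap is shrinking

def brake_command(detections, min_gap_s=2.0):
    """Return True if any object would be reached in under min_gap_s seconds."""
    for d in detections:
        if d.closing_speed_mps > 0 and d.distance_m / d.closing_speed_mps < min_gap_s:
            return True
    return False
```

For example, an object 10 m ahead closing at 10 m/s (one second away) triggers braking, while one 50 m ahead at the same closing speed does not.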

Machine Learning, Deep Learning, and Computer Vision Technology: The car’s computer system has to be “trained” using machine learning and deep learning to do things like detect lane lines and identify cyclists, by showing it millions of examples of the subject at hand. Because the world is too complex to write a rule for every possible scenario, cars must be able to “learn” from experience and figure out how to navigate on their own.
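The “training by example” idea can be illustrated with a tiny perceptron that learns to separate two object classes from labeled feature vectors. Real systems use deep neural networks trained on millions of images; the features and data below are invented solely for the sketch.

```python
# Minimal illustration of learning from labeled examples: a perceptron
# adjusts its weights whenever it misclassifies a training sample.
# Features are made up (no real detector works on two numbers).

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """samples: list of feature tuples; labels: 0 or 1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred          # nonzero only on a mistake
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented features (width_m, engine_noise): motorcycles (label 1) are
# wider and louder than bicycles (label 0) in this toy data.
data = [(0.5, 0.1), (0.6, 0.2), (0.9, 0.9), (1.0, 0.8)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(data, labels)
```

The same principle, scaled up to millions of labeled images and millions of parameters, is what lets a car distinguish a bicycle from a motorcycle without anyone writing an explicit rule.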

Maps: Before an autonomous car takes to the streets, its developers use cameras and lidars to map its territory in extreme detail. That information helps the car verify its sensor readings, and it is key for any vehicle to know its own location.

Self-driving car companies are notorious for overhyping their progress. Should we believe them? At this point, the outlook for them is clouded.

In March 2018, a self-driving Uber Volvo XC90 operating in autonomous mode struck and killed a woman in Tempe, Arizona. Uber temporarily suspended its autonomous vehicle testing. Even before the accident, Uber’s self-driving cars were having trouble driving through construction zones and next to tall vehicles like big truck rigs. Uber’s drivers had to intervene far more frequently than drivers in other autonomous car projects.

The Uber accident raised questions about whether autonomous vehicles were even ready to be tested on public roads and how regulators should deal with this. Autonomous vehicle technology’s defenders pointed out that nearly 40,000 people die on U.S. roads every year, and human error causes more than 90 percent of crashes. But no matter how quickly self-driving proliferates, it will be a long time before the robots can put a serious dent in those numbers and convince everyday folks that they’re better off letting the cars do the driving. Uber has revised its approach to autonomous driving and plans to launch its self-driving cars in pockets of cities where weather, demand, and other conditions are most favorable. While proponents of self-driving cars like Tesla’s Elon Musk envision a self-driving world where almost all traffic accidents would be eliminated, and older adults and those with disabilities could travel freely, most Americans think otherwise. A Pew Research Center survey found that most people did not want to ride in self-driving cars and were unsure if they would make roads more dangerous or safer. Eighty-seven percent wanted a person always behind the wheel, ready to take over if something went wrong.

There’s still plenty that needs to be improved before self-driving vehicles can safely take to the road. Autonomous vehicles are not yet able to operate safely in all weather conditions: heavy rain or snow can confuse current radar and lidar systems, so autonomous vehicles cannot operate on their own in such conditions. These vehicles also have trouble when tree branches hang too low or bridges and roads have faint lane markings. On some roads, self-driving vehicles will have to make guidance decisions without the benefit of white lines or clear demarcations at the edge of the road. Even Botts’ Dots (the small raised plastic markers that define lanes on many highways) are not believed to be effective lane markings for autonomous vehicles.

Computer vision systems are able to reliably recognize objects. What remains challenging is “scene understanding”—for example, the ability to determine whether a bag on the road is empty or is hiding bricks or heavy objects inside. Although autonomous vehicle vision systems are now capable of picking out traffic lights reliably, they are not always able to make correct decisions if traffic lights are not working. This requires experience, intuition, and knowledge of how multiple vehicles can cooperate. Autonomous vehicles must also be able to recognize a person moving alongside a road, determine whether that person is riding a bicycle, and predict how that person is likely to respond and behave. All of that is still difficult for an autonomous vehicle to do right now. Chaotic environments such as congested streets teeming with cars, pedestrians, and cyclists are especially difficult for self-driving cars to navigate.
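The prediction step described above can be sketched in its simplest possible form: a constant-velocity extrapolation of a cyclist’s position. Real behavior models are far richer (they account for intent, road context, and interaction with other road users), and the function below is purely illustrative.

```python
# Toy sketch of trajectory prediction: extrapolate a road user's future
# position assuming constant velocity. Anything beyond this baseline
# requires the learned behavior models the case study alludes to.

def predict_position(x, y, vx, vy, horizon_s):
    """Predict (x, y) in meters after horizon_s seconds at constant velocity."""
    return (x + vx * horizon_s, y + vy * horizon_s)

# A cyclist at the origin moving 2 m/s east and 1 m/s north is predicted
# to be at (6.0, 3.0) three seconds from now.
```

The hard part is everything this baseline ignores: a cyclist may swerve around a pothole or signal a turn, which is exactly the “experience and intuition” current systems lack.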

Merging into rapidly flowing lanes of traffic is an intricate task that often requires eye contact with oncoming drivers. How can autonomous vehicles communicate with humans and other machines to let them know what they want to do? Researchers are investigating whether electronic signs and car-to-car communication systems would solve this problem. There’s also what’s called the “trolley problem”: In a situation where a crash is unavoidable, how does a robot car decide whom or what to hit? Should it hit the car coming up on its left or a tree on the side of the road?

Less advanced versions of autonomous vehicle technology are already on the market. No current production car in the United States can drive while you sleep, read, or tweet, but many systems can maintain following distance with the vehicle ahead or keep your car centered in its lane, even down to a stop in bumper-to-bumper traffic. In some cases these systems allow the “driver” behind the wheel to take hands off the wheel, provided that person keeps paying attention and is ready to take control if needed.

These less-advanced systems can’t see things like stopped fire trucks or traffic lights. But humans haven’t proved to be good driving backups because their attention tends to wander. At least two Tesla drivers in the United States have died while using such a system. (One hit a truck in 2016; another hit a highway barrier in 2018.) There is also what is called the “handoff problem”: a semiautonomous car needs to be able to determine what its human “driver” is doing and how to get that person to take the wheel when needed.

And let’s not forget security. A self-driving car is essentially a collection of networked computers and sensors linked wirelessly to the outside world, and it is no more secure than other networked systems. Keeping systems safe from intruders who want to crash or weaponize cars may prove to be the greatest challenge confronting autonomous vehicles in the future.

A computer-driven car that can handle any situation as well as a human under all conditions is decades away at best. Researchers at Cleveland State University estimate that only 10 to 30 percent of all vehicles will be fully self-driving by 2030. PwC analysts estimate that 12 percent of all vehicles will be fully autonomous by then, but they will only work in geographically constrained areas under good weather conditions, as does Waymo’s fleet of self-driving vans in Phoenix. Truly autonomous cars are still science fiction.

What is more likely is that self-driving technology will be incorporated into human-driven cars. Current auto models are being equipped with technologies such as advanced object recognition, radar-and-laser detection, some capability to take control of driving if the driver has made a mistake, and ultradetailed highway maps that were originally developed for self-driving vehicles. By 2022, nearly all new vehicles in the United States will have automatic emergency braking, which reduces rear-end crashes by 50 percent and crashes with injuries by 56 percent. Once emergency braking technology has been fully deployed, it could reduce fatalities and injuries from rear-end crashes by 80 percent. Human-driven vehicles with some level of self-driving technology will become safer at a rate that completely autonomous vehicles may have trouble matching. This makes the need for fully self-driving cars less compelling.

Many analysts expect the first deployment of self-driving technology will be robot taxi services operating in limited conditions and areas, so their operators can avoid particularly tricky intersections and make sure everything is mapped in fine detail. The Boston Consulting Group predicts that 25 percent of all miles driven in the United States by 2030 may be by shared self-driving vehicles. To take a ride, you’d probably have to use predetermined pickup and drop-off points, so your car can always pull over safely and legally. The makers of self-driving cars will be figuring out how much to charge so they can recoup their research and development costs, but not so much as to dissuade potential riders. They’ll struggle with regulators and insurance companies over what to do in the inevitable event of a crash.