Are self-driving cars really not as capable as advertised?

Skeptics say fully autonomous driving may be further off than imagined, but the industry does not want to admit it.

If you believe the CEOs, fully autonomous cars may be only months away. In 2015, Elon Musk predicted that a fully self-driving Tesla would be on the market by 2018; Google made a similar prediction. Delphi and MobileEye's Level 4 system is currently scheduled to launch in 2019, the same year Nutonomy plans to deploy thousands of driverless taxis on the streets of Singapore. General Motors (GM) plans to mass-produce a fully self-driving car in 2019, one with no steering wheel and no way for the driver to intervene. These predictions are backed by real money, bets that the software will deliver on its mission and prove as capable as advertised.

At first glance, full autonomy seems closer than ever. Waymo is already testing cars on limited but public roads in Arizona. Tesla and a host of imitators already sell a limited form of Autopilot, counting on drivers to intervene if anything unexpected happens. There have been several crashes, some fatal, but the industry broadly believes that as long as the systems keep improving, we cannot be far from not having to intervene at all.

But the dream of a fully self-driving car may be more distant than we think. AI experts increasingly worry that it will be years, perhaps decades, before autonomous systems can reliably avoid accidents. Because self-trained systems struggle to cope with the chaos of the real world, experts like Gary Marcus of New York University are bracing for a painful recalibration of expectations, a correction sometimes referred to as an "AI winter." That delay could have disastrous consequences for companies banking on autonomous technology, and it could keep an entire generation from ever experiencing fully autonomous driving.

It is not hard to see why car companies are optimistic about autonomous driving. Over the past decade, deep learning, which uses layered machine-learning algorithms to extract structured information from massive data sets, has driven almost unimaginable progress in AI and the technology industry. Deep learning powers Google Search, Facebook's News Feed, conversational speech-to-text algorithms, and systems that can play Go. Outside the internet, it is used to detect earthquakes, predict heart disease, and flag suspicious behavior in camera footage, along with countless other innovations that would otherwise have been impossible.

But deep learning requires huge amounts of training data to work properly, data covering nearly every scenario the algorithm will encounter. Systems like Google Images, for instance, are good at recognizing animals as long as they have training data showing what each animal looks like. Marcus describes this kind of task as "interpolation": surveying all the images labeled "ocelot" and deciding whether a new image belongs in that group.

Engineers can get creative about where the data comes from and how it is structured, but that places a hard limit on how far a given algorithm can reach. The same algorithm cannot recognize an ocelot unless it has seen thousands of pictures of ocelots, even if it has seen pictures of house cats and jaguars and knows that an ocelot falls somewhere between the two. That process, called "generalization," requires a different set of skills.
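To make the distinction concrete, here is a deliberately simplified sketch in Python. The feature vectors and labels are invented for illustration, and real systems use learned embeddings rather than hand-typed numbers; the point is only that an interpolating classifier can answer with nothing but the labels it has already seen, which is the limit Marcus is describing.

```python
# A toy sketch of "interpolation" vs. "generalization" (all data here is made up).
import numpy as np

# Hypothetical image embeddings; in practice these would come from a deep network.
train_features = {
    "house cat": np.array([0.9, 0.1, 0.1]),
    "jaguar":    np.array([0.1, 0.9, 0.8]),
}

def classify(new_feature: np.ndarray) -> str:
    """Nearest-neighbor 'interpolation': pick the closest label we already know."""
    return min(train_features,
               key=lambda label: np.linalg.norm(train_features[label] - new_feature))

# An ocelot sits roughly "between" a house cat and a jaguar in this toy space,
# but the classifier can only ever answer with a label it was trained on.
ocelot = np.array([0.5, 0.5, 0.5])
print(classify(ocelot))  # prints one of the known labels, never "ocelot"
```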

For a long time, researchers assumed they could improve generalization with the right algorithms, but recent research suggests that conventional deep learning generalizes even worse than we thought. One study found that conventional deep learning systems struggle to generalize even across different frames of a video: with only a small change in the background, the same polar bear gets labeled as a baboon, a mongoose, or a weasel. Because each classification aggregates hundreds of factors, even tiny changes to an image can completely flip the system's judgment. Other researchers have exploited this weakness in adversarial data sets.
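The "small change, big flip" failure mode is easiest to see in code. The sketch below uses the well-known fast gradient sign method (FGSM) against a placeholder, untrained torchvision model; it is not the setup from the studies above, and with an untrained network the specific predictions are meaningless. It is only meant to show how a barely visible perturbation is constructed, the same kind of nudge that can flip the label of a real, trained classifier.

```python
# Minimal FGSM sketch: one gradient step that nudges an image in the direction
# that most increases the loss. Model and input are stand-ins, not a real system.
import torch
import torch.nn.functional as F
from torchvision import models  # assumes a recent torchvision with the `weights` API

model = models.resnet18(weights=None)  # untrained stand-in for a real classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in "photo"
label = torch.tensor([207])                              # arbitrary class index

loss = F.cross_entropy(model(image), label)
loss.backward()

# A perturbation the eye can barely see; against a trained model, steps like
# this are what adversarial data sets use to flip the predicted class.
adversarial = (image + 0.01 * image.grad.sign()).clamp(0, 1)

print(model(image).argmax(dim=1), model(adversarial).argmax(dim=1))
```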

Marcus points to the chatbot boom as the most recent example of hype running into the generalization problem. "We were promised chatbots in 2015," he says, "but they weren't any good, because it's not just a matter of collecting data." When you talk to a person online, you don't just want them to rehash earlier conversations; you want them to respond to what you are saying, drawing on broader conversational skills to give a reply that is specific to you. Deep learning simply could not build that kind of chatbot. Once the initial hype faded, companies lost faith in their chatbot projects, and only a few are still actively developing them.

That leaves Tesla and other autonomy companies facing an ugly question: will self-driving cars keep getting better, like image search, voice recognition, and the other AI success stories? Or will they run into the generalization problem, like chatbots did? Is autonomous driving an interpolation problem or a generalization problem? And how unpredictable is driving, really?

It is too early to know. "Self-driving cars are like a scientific experiment where we don't know the answer," Marcus says. We have never achieved this level of autonomy before, so we don't know what kind of task it is. To the extent that it is about recognizing familiar objects and following rules, existing technology should be up to the task. But Marcus worries that driving safely in accident-prone scenarios may be more complicated than expected, and that the industry does not want to admit it. "To the extent that surprising new things happen, it's not a good thing for deep learning."

The experimental data we do have comes from public accident reports, each of which offers an unusual wrinkle. In a fatal 2016 crash, a Tesla Model S drove full speed into the rear of a white tractor-trailer, confused by the trailer's high ride height and the bright glare of the sun. In March, a self-driving Uber car struck and killed a woman pushing a bicycle as she crossed outside a crosswalk. According to the National Transportation Safety Board (NTSB) report, Uber's software misidentified the woman first as an unknown object, then as a vehicle, and finally as a bicycle, updating its predictions each time. In a crash in California, a Model X accelerated toward a barrier just before impact, for reasons that remain unclear.

Every accident looks like an edge case, the kind of thing engineers could not reasonably be expected to predict in advance. But nearly every car accident involves some sort of unforeseen circumstance, and without the ability to generalize, self-driving cars will have to confront each of these scenarios as if for the first time. The result would be a string of fluke accidents that do not become less common or less dangerous as time goes on. For skeptics, the manual disengagement reports already suggest that progress is stalling.

Andrew Ng, a former Baidu executive, a Drive.AI board member, and one of the industry's most prominent boosters, argues that the problem is less about building a perfect driving system than about training bystanders to anticipate self-driving behavior. In other words, we can make the roads safe for the cars instead of the other way around. As an example of an unpredictable case, I asked him whether he thought modern systems could handle a pedestrian on a pogo stick, even if they had never seen one before. "I think many AI teams could handle a pogo-stick user in a pedestrian crosswalk," Ng told me. "Having said that, bouncing on a pogo stick in the middle of a highway would be really dangerous."

"Rather than designing AI to solve the pogo-stick problem, we should work with the government to ask people to be lawful and considerate," Ng said. "Safety isn't only a function of the quality of the AI technology."

Deep learning is not the only AI technique, and companies are already exploring alternatives. Although techniques are closely guarded within the industry (just look at Waymo's recent lawsuit against Uber), many companies have shifted toward rule-based AI, an older technique that lets engineers hard-code specific behaviors or logic into an otherwise self-directed system. It does not have the same power to write its own behavior just by studying data, which is what makes deep learning so exciting, but it lets companies avoid some of deep learning's limitations. Yet because the basic task of perception is still deeply shaped by deep learning, it is hard to say how successfully engineers can quarantine potential errors.
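As a rough illustration of that hybrid idea, the hypothetical sketch below puts a hand-written rule on top of the output of a learned perception module. Every name and threshold here is invented; this is a sketch of the general pattern, not any company's actual stack, and it deliberately shows why the rule layer still depends on perception being right about distance and existence.

```python
# Hypothetical rule layer over learned perception output. All names, fields,
# and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "vehicle", "unknown"
    distance_m: float   # estimated distance ahead, in meters
    confidence: float   # perception confidence in [0, 1]

def rule_based_brake(detections: list[Detection], speed_mps: float) -> bool:
    """Hard-coded rule: brake for anything close and plausible,
    even if the learned classifier is unsure what it is."""
    stopping_distance = speed_mps ** 2 / (2 * 6.0)  # crude estimate, ~6 m/s^2 deceleration
    return any(
        d.distance_m < stopping_distance + 5.0 and d.confidence > 0.2
        for d in detections
    )

# Example: an "unknown object" that the classifier keeps re-labeling still triggers braking,
# but only if perception reported it in the first place.
print(rule_based_brake([Detection("unknown", 12.0, 0.3)], speed_mps=15.0))
```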

Ann Miura-Ko, a venture capitalist who sits on Lyft's board of directors, says part of the problem is the high expectations placed on self-driving cars themselves, a framing that treats anything short of full autonomy as a failure. "The expectation that self-driving cars will go from zero to Level 5 is a mismatch in expectations, not a failure of the technology," Miura-Ko said. "I see all of these small improvements as extraordinary achievements on the road to full autonomy."

Still, it is unclear how long self-driving cars can stay in this limbo. Semi-autonomous products like Tesla's Autopilot are smart enough to handle most situations, but they still require human intervention whenever anything unpredictable happens, and when something goes wrong it is hard to know whether the car or the driver is to blame. For some critics, this human-machine hybrid may be less safe than a human driver alone, even if the errors cannot be pinned entirely on the machine. One RAND study estimated that self-driving cars would have to log 275 million miles without a fatality to prove they are as safe as human drivers. The first death linked to Tesla's Autopilot came roughly 130 million miles into the project, well short of that mark.
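For a sense of where a threshold like 275 million miles can come from, here is a back-of-the-envelope version of the statistics. It assumes (these figures are not in the article itself) a human fatality rate of roughly 1.09 deaths per 100 million miles, that fatal crashes follow a Poisson process, and that we want 95 percent confidence that the autonomous rate is lower; the RAND study's full methodology is more involved.

```python
# Back-of-the-envelope: miles of fatality-free driving needed to claim, with 95%
# confidence, a fatality rate below the assumed human rate of ~1.09 per 100M miles.
import math

human_rate = 1.09 / 100_000_000   # assumed fatalities per mile (not from the article)
confidence = 0.95

# With zero fatalities over N miles, P(0 events at the human rate) = exp(-rate * N).
# Require that probability to fall below 1 - confidence.
miles_needed = math.log(1 / (1 - confidence)) / human_rate
print(f"{miles_needed / 1e6:.0f} million failure-free miles")  # ~275 million
```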

But with deep learning so central to how a car recognizes objects and decides how to respond, reducing the accident rate may be harder than it looks. "This is not an easily isolated problem," Duke University professor Mary Cummings said of the Uber crash that killed a pedestrian earlier this year. "The perception-decision cycle is often linked, as in the pedestrian death. A decision was made to take no action because of ambiguity in perception, and the emergency braking was turned off because of too many false alarms from the sensors."

That crash ended with Uber pausing its self-driving tests for the summer, an ominous sign for other companies planning rollouts. Across the industry, companies are racing to collect more data to solve the problem, assuming that whoever logs the most miles will build the strongest system. But where companies see a data problem, Marcus sees something much harder to solve. "They're just using the techniques they have and hoping it will work," Marcus says. "They're leaning on big data because that's the weapon they have on hand, but there's no proof it ever gets you to the level of precision we need."
