Shenyi Translation Bureau is a compilation team under 36Kr. It focuses on technology, business, the workplace, and lifestyle, with an emphasis on introducing new technologies, new ideas, and new trends from abroad.
Editor's note: The COVID-19 pandemic has forced most businesses to suspend operations. Has autonomous vehicle development been affected too? Road tests that require human participation certainly cannot proceed, but many autonomous vehicle companies have shifted their work to simulation. KYLE WIGGERS surveys how. The original article was published on VentureBeat under the title "The challenges of developing autonomous vehicles during a pandemic." Because of its length, we are publishing it in two parts; this is the first half.
Autonomous vehicle development during the pandemic
In the months since the novel coronavirus triggered stay-at-home orders around the world, industries, companies, and whole economies have largely ground to a halt. One market might have seemed likely to escape the impact: self-driving cars, especially driverless vehicles that could transport supplies for healthcare workers. But the companies behind them have not been immune.
In March, citing safety concerns about contact between drivers and passengers, Uber, Cruise, Aurora, Argo AI, Lyft, and many other well-known autonomous vehicle startups and spinoffs suspended their vehicle testing. Waymo also announced that it would suspend Waymo One's commercial operations in Phoenix, Arizona (including fully autonomous vehicles that do not require human operators) until further notice.
The interruption poses a huge engineering challenge: how can the data collected by real-world cars be replicated when fleets stop driving for months or longer? The problem has never been solved, and some experts believe it cannot be. Even Waymo CEO John Krafcik has said that real-world experience in driverless car development "cannot be avoided."
But some of the industry's largest players, including Waymo, are trying anyway.
Almost every self-driving car development program relies heavily on logs from sensors mounted on the car: lidar, cameras, radar, inertial measurement units (IMUs), odometry sensors, and GPS. These data are used to train a series of machine learning models that power an autonomous vehicle's perception, prediction, and motion planning: the systems that must understand the world and the objects in it, and decide the route the vehicle will ultimately take.
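The sensor logs described above can be pictured as timestamped, multi-sensor frames that later become training material. A minimal sketch in Python; the field names and the windowing helper are illustrative assumptions, not any company's actual schema:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorFrame:
    """One timestamped log entry combining the sensors listed above."""
    timestamp_s: float
    lidar_points: List[Tuple[float, float, float]]  # (x, y, z) returns, meters
    camera_jpeg: bytes                              # encoded camera image
    radar_tracks: List[Tuple[float, float]]         # (range_m, radial_velocity_mps)
    imu_accel: Tuple[float, float, float]           # (ax, ay, az), m/s^2
    odometer_m: float                               # distance traveled
    gps: Tuple[float, float]                        # (latitude, longitude)

def frames_in_window(frames: List[SensorFrame], horizon_s: float) -> List[SensorFrame]:
    """Select the frames inside a time window starting at the first frame;
    such windows are the raw material for perception/prediction training examples."""
    start = frames[0].timestamp_s
    return [f for f in frames if f.timestamp_s - start <= horizon_s]
```

A model-training pipeline would then label objects inside each window and fit the perception, prediction, and planning models against those labels.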
For example, Tesla compiled tens of thousands of obstructed stop signs to teach its models to recognize similar signs in the wild. Cruise uses a combination of synthetic and real audio and visual data to train a system that detects police cars, fire engines, ambulances, and other emergency vehicles.
Real-world data collection also serves mapping. In the context of self-driving cars, mapping means creating 3D, high-definition, centimeter-accurate maps of roads, buildings, vegetation, and other static objects. Before testing in a new location, companies such as Waymo and Cruise deploy sensor-laden, manually driven cars to map the routes their autonomous vehicles may travel. These maps help the vehicles get their bearings and also supply valuable environmental information, such as speed limits and the locations of driving lanes and crosswalks.
When those steps are impossible, autonomous vehicle companies must rely on the data they have already collected (and on perturbations or modifications of that data) for system development and evaluation. Fortunately, many of these companies have long invested in simulation to expand the scale of their testing, including tests that are not possible in the real world.
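"Perturbation or modification" of logged data can be as simple as jittering a recorded trajectory to spin new variants out of one real drive. A hypothetical sketch; the Gaussian noise model and its scale are assumptions for illustration:

```python
import random
from typing import List, Tuple

def perturb_trajectory(
    trajectory: List[Tuple[float, float]],
    sigma_m: float = 0.2,
    seed: int = 0,
) -> List[Tuple[float, float]]:
    """Return a copy of a logged (x, y) trajectory with Gaussian position
    noise added, yielding a plausible variant of the original drive."""
    rng = random.Random(seed)
    return [(x + rng.gauss(0.0, sigma_m), y + rng.gauss(0.0, sigma_m))
            for x, y in trajectory]

# One logged drive can thus be expanded into many perturbed variants:
logged = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.3)]
variants = [perturb_trajectory(logged, seed=s) for s in range(10)]
```

Real pipelines perturb much richer state (agent speeds, positions, timing), but the principle of multiplying one log into many test cases is the same.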
Waymo claims its virtual cars drive 20 million miles per day on its Carcraft simulation platform, the equivalent of more than 100 years of driving on real-world roads. The company also says its self-driving software suite, Waymo Driver, has accumulated more than 15 billion simulated miles to date, up from 10 billion as of July 2019.
Jonathan Karmel, Waymo's product lead for simulation and automation, said: "Carcraft stores a lot of information, so internally we use a series of tools to extract the most important signals: the most interesting trips and the most useful information."
Interacting with Carcraft's simulations through a web-based interface, Waymo engineers use real-world data to prepare for edge cases and explore ideas; the data come from events Waymo has encountered over more than 20 million miles of autonomous driving in 25 cities. As the software and scenarios evolve, the environment around Waymo Driver can be kept up to date. This requires modeling the behavior of other road agents (vehicles, cyclists, pedestrians, and so on) to simulate how they would respond to the virtual car's new position.
Waymo says it can also synthesize realistic sensor data for the cars and model scenes in updated environments. When the virtual car drives through a scene a Waymo vehicle encountered in the real world, engineers modify the scene and evaluate the possible outcomes. They can also add new virtual agents (a cyclist, say) to a scene, or adjust the speed of oncoming agents, to evaluate how Waymo Driver responds.
Gradually, each simulated scenario is expanded through a large number of derivatives to evaluate Waymo Driver's expected behavior, and that information is used to improve safety and performance. Karmel said: "I see sensor simulation as an enhancement of real-world driving experiments. As conditions change, we have the ability to gradually deepen our understanding of the real world, and as we continue to make changes to improve [our system's] performance, we continue to [create] new simulation challenges."
Above: Real Waymo car data used in Waymo Carcraft simulation.
Beyond constructing scenes inspired by actual driving data, Waymo also deploys synthetic scenes, captured on its own private test track, that it has never tested before; the company says this lets it keep expanding the number of miles it can simulate. According to Karmel, most learning and development work is done in simulation, completed before an updated version of Waymo Driver ships.
Comfort is easily overlooked in this learning and development process. Waymo says it evaluates multiple "comfort" metrics, such as how people react to various driving behaviors. Drive-test feedback of this kind can be used to train AI models, which are then run in simulation to verify how different conditions (determining the ideal braking rate, ensuring smooth driving) affect passenger comfort.
Karmel explained: "We are starting to better understand the factors that make a ride comfortable. Things like acceleration and deceleration, for example, are among the key elements, and we want to feed that information back into the simulation to predict how passengers or drivers would respond in the real world. We have a machine learning model that can predict the response in [Carcraft]."
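Acceleration and deceleration can be turned into a concrete comfort signal; a common proxy is jerk, the rate of change of acceleration. A hedged sketch using finite differences over a speed trace; the comfort threshold here is an illustrative assumption, not a figure from Waymo:

```python
from typing import List

def jerk_profile(speeds_mps: List[float], dt_s: float) -> List[float]:
    """Finite-difference jerk (m/s^3) from a uniformly sampled speed trace."""
    accel = [(v1 - v0) / dt_s for v0, v1 in zip(speeds_mps, speeds_mps[1:])]
    return [(a1 - a0) / dt_s for a0, a1 in zip(accel, accel[1:])]

def is_comfortable(speeds_mps: List[float], dt_s: float,
                   max_jerk: float = 2.0) -> bool:
    """Flag a trace as uncomfortable if any jerk sample exceeds the
    (illustrative) threshold; smooth braking stays under it, abrupt
    braking does not."""
    return all(abs(j) <= max_jerk for j in jerk_profile(speeds_mps, dt_s))
```

A learned comfort model would replace the fixed threshold with predictions trained on rider feedback, but the inputs (acceleration profiles) are the same kind of signal.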
Beyond Carcraft, Waymo engineers also lean on tools like Content Search, Progressive Population-Based Augmentation (PPBA), and Population-Based Training (PBT) to support development, testing, and validation. Content Search uses technology similar to that behind Google Photos and Google Image Search, letting data scientists locate objects across Waymo's driving history and logs. PBT was developed with DeepMind, a fellow subsidiary of parent company Alphabet; it starts with multiple machine learning models and continually replaces underperforming members with "offspring," and it ultimately cut the false-positive rate on pedestrian, bicycle, and motorcycle recognition tasks by 24%. As for PPBA, it reduces cost and speeds up training while improving object-classifier quality, chiefly because it needs less labeled lidar data.
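The population-based idea behind PBT can be sketched as a loop that periodically replaces the worst performer with a mutated copy ("offspring") of the best one. A toy illustration on a one-dimensional objective; everything here is a deliberate simplification, since the real system evolves the hyperparameters of neural networks in training:

```python
import random

def pbt(objective, population_size=8, steps=30, seed=0):
    """Toy population-based training: each member holds one hyperparameter.
    Each step, the worst-scoring member is replaced by a mutated copy of
    the best-scoring member (exploit + explore)."""
    rng = random.Random(seed)
    population = [rng.uniform(-10.0, 10.0) for _ in range(population_size)]
    for _ in range(steps):
        scores = [objective(h) for h in population]
        best = population[scores.index(max(scores))]
        worst_i = scores.index(min(scores))
        # exploit: copy the best member; explore: perturb (mutate) it slightly
        population[worst_i] = best + rng.gauss(0.0, 0.5)
    return max(population, key=objective)

# Maximize a simple objective that peaks at h = 3.
best_h = pbt(lambda h: -(h - 3.0) ** 2)
```

Because the best member is never the one replaced, the population's top score can only improve over time, which is the property that makes the scheme attractive for expensive training runs.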
Cruise also runs a great deal of simulation, about 200,000 hours of compute jobs per day on Google Cloud Platform, and one of its environments is an end-to-end 3D Unreal Engine world that Cruise employees call The Matrix. It lets engineers construct any situation they can imagine while synthesizing sensor inputs such as camera footage, lidar, and radar to feed a virtual autonomous vehicle.
Hussein Mehanna, Cruise's head of AI, said: "Handling the long tail is why autonomous vehicles are one of the hardest and most exciting AI problems in the world, and it is why we expect autonomous vehicles and their underlying models to perform at a very high level. Just look at the training data: thousands of lidar scan points, high-resolution images, radar data, and information from various other sensors. All of it requires a lot of infrastructure."
Above: GM Cruise's end-to-end simulation environment, The Matrix.
Each day Cruise runs 30,000 simulation instances across 300,000 processor cores and 5,000 GPUs, with each instance cycling through the scenarios encountered in a single drive and generating 300 TB of results. (That is roughly like driving 30,000 virtual cars at once.) The company then studies the results through replay, which means extracting real-world sensor data, playing it back into the car's software, and comparing performance against manually labeled ground-truth data. It also uses planning simulation, creating as many as hundreds of thousands of scenario derivatives by adjusting variables such as the speed of an oncoming vehicle and the gap between two vehicles.
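Planning-simulation derivatives of this kind can be pictured as a grid sweep over a few scenario parameters. A hypothetical sketch; the parameter names and ranges are illustrative, not Cruise's actual schema:

```python
from itertools import product
from typing import Dict, List

def derive_scenarios(
    base: Dict[str, float],
    oncoming_speeds_mps: List[float],
    gaps_m: List[float],
) -> List[Dict[str, float]]:
    """Expand one logged scenario into derivatives by sweeping the oncoming
    vehicle's speed and the gap between the two vehicles."""
    derivatives = []
    for speed, gap in product(oncoming_speeds_mps, gaps_m):
        scenario = dict(base)  # copy so the base scenario is untouched
        scenario["oncoming_speed_mps"] = speed
        scenario["gap_m"] = gap
        derivatives.append(scenario)
    return derivatives

base = {"ego_speed_mps": 12.0, "oncoming_speed_mps": 10.0, "gap_m": 30.0}
variants = derive_scenarios(base,
                            oncoming_speeds_mps=[8.0, 10.0, 12.0],
                            gaps_m=[20.0, 30.0, 40.0])
# 3 speeds x 3 gaps -> 9 derivative scenarios from one base scenario
```

Sweeping a handful of parameters at finer granularity is how a single logged drive multiplies into the hundreds of thousands of derivatives the article describes.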
According to Tom Boyd, Cruise's vice president of simulation, engineers have to make choices, such as which scene elements to model and how fine-grained the modeling should be. For example, they have to weigh whether simulating tire slip (which depends on the car's mileage, road conditions, even the metal used in the axles) matters more than modeling lidar reflections or radar multipath echoes off the car's windshield and rearview mirrors.
Cruise manages the simulation's trade-offs in another way as well: a framework that matches each test to the fidelity it requires. Tests that do not need 3D graphics can run at more than 100 times real time on commodity hardware. Boyd said: "No software model of vehicle dynamics can be completely accurate. The software gradually becomes very complicated. It would take months."
The tools in Cruise's engineering suite include the web-based Webviz, which grew out of a hackathon project and is now used by roughly 1,000 employees a month. Its latest production version lets engineers save configurations, share parameters, and watch vehicle simulations as they run on a remote server. Another is Worldview, a lightweight, extensible 2D/3D scene renderer that lets engineers quickly build custom visualizations.