
The United States developed new methods to acquire data and train autonomous vehicle tracking system

Author: QINSUN | Released: 2024-01

For safety reasons, autonomous vehicles must be able to accurately track the movement of pedestrians, cyclists, and other vehicles around them. According to foreign media reports, researchers at Carnegie Mellon University in the United States have now developed a new method to train such tracking systems more efficiently.

Generally speaking, the more road and traffic data a tracking system is trained on, the better it performs. To that end, the Carnegie Mellon researchers devised a way to unlock large amounts of autonomous-driving data for training.

Most autonomous vehicles rely primarily on a sensor called lidar for navigation. Lidar is a laser device that generates 3D information about the vehicle's surroundings, not as an image but as a point cloud. Vehicles interpret this data with a technique called scene flow, which involves computing the speed and trajectory of each 3D point. Groups of points moving together are identified through scene flow as vehicles, pedestrians, or other moving objects.
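As a minimal illustration, a lidar sweep and its scene flow can both be represented as N-by-3 arrays, one motion vector per point. The array names and values in this sketch are hypothetical, chosen only to make the idea concrete:

```python
import numpy as np

# A lidar sweep: N points with xyz coordinates (meters).
points_t = np.random.rand(100, 3) * 50.0

# Scene flow: one 3D motion vector per point, covering one time step.
flow = np.zeros_like(points_t)
flow[:20] += np.array([1.2, 0.0, 0.0])  # e.g. 20 points on a car moving along x

# Advancing each point by its flow vector predicts the next frame.
predicted_t1 = points_t + flow
```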

In the past, training such systems required labeled datasets, in which the sensor data had been annotated so that each 3D point could be tracked over time. Manually labeling such datasets is laborious and expensive, however, so very little labeled data exists. Instead, scene-flow training is usually carried out on simulated data, which is less effective, and a small amount of annotated real-world data is then used for fine-tuning.

The Carnegie Mellon researchers took a different approach, training scene flow on unlabeled data. Unlabeled data is relatively easy to generate: mount a lidar on a car and drive it around, and there is no shortage of data.

The key to this method is enabling the system to detect its own errors in the scene flow. At each instant, the system tries to predict where each 3D point is moving and how fast. In the next instant, it measures the distance between each point's predicted position and the actual point nearest that prediction. This distance is the first error, which the system tries to minimize.
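A minimal sketch of this forward, nearest-neighbor error, assuming both frames are given as NumPy arrays (the function name is ours, not from the paper):

```python
import numpy as np

def forward_error(predicted_t1, actual_t1):
    """Mean distance from each predicted point to the nearest point
    actually observed in the next lidar frame."""
    # Pairwise distances between predicted and actual points: (N_pred, N_act)
    dists = np.linalg.norm(predicted_t1[:, None, :] - actual_t1[None, :, :], axis=-1)
    return dists.min(axis=1).mean()
```

For example, `forward_error(points_t + flow, points_t1)` scores how well the predicted motion lines up with the next sweep.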

The system then reverses the process: starting from the predicted point positions, it maps them back toward their starting positions. The distance between each mapped-back point and its actual starting position is measured, yielding a second error.
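Sketched in the same style; here `reverse_flow` stands in for a flow estimated in the backward time direction, which is our assumption about how the reverse mapping is supplied:

```python
import numpy as np

def cycle_error(points_t, forward_flow, reverse_flow):
    """Advance points with the forward flow, map them back with the
    reverse flow, and measure how far they land from where they started."""
    predicted_t1 = points_t + forward_flow
    recovered_t0 = predicted_t1 + reverse_flow
    return np.linalg.norm(recovered_t0 - points_t, axis=-1).mean()
```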

The system then trains itself to minimize both errors, correcting its scene-flow predictions without any hand-labeled data.
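Putting the two errors together, a toy self-supervised training step might look like the following PyTorch sketch. The tiny per-point network, the equal loss weighting, and the way the reverse flow is obtained are all simplifications of ours, not the CMU implementation:

```python
import torch
import torch.nn as nn

# Toy stand-in for a scene-flow network: maps each 3D point to a flow vector.
flow_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(flow_net.parameters(), lr=1e-3)

def nearest_dist(pred, actual):
    # Mean distance from each predicted point to its nearest actual point.
    return torch.cdist(pred, actual).min(dim=1).values.mean()

def train_step(points_t, points_t1):
    """One self-supervised update from an unlabeled pair of lidar frames."""
    optimizer.zero_grad()
    forward_flow = flow_net(points_t)
    predicted_t1 = points_t + forward_flow

    # First error: predicted points vs. the nearest actual next-frame points.
    loss_forward = nearest_dist(predicted_t1, points_t1)

    # Second error: flow the predictions back and compare to the start.
    # (A real system would estimate the reverse flow from the frame pair in
    # reversed time order; negating a forward pass here is a simplification.)
    recovered_t0 = predicted_t1 - flow_net(predicted_t1)
    loss_cycle = (recovered_t0 - points_t).norm(dim=-1).mean()

    loss = loss_forward + loss_cycle
    loss.backward()
    optimizer.step()
    return loss.item()
```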

Although it may sound complicated, the researchers found the method to be very effective. They calculated that scene-flow training on synthetic data alone reached only 25% accuracy. Fine-tuning the synthetically trained system on a small amount of labeled real-world data raised accuracy to 31%. Adding a large amount of unlabeled data and training the system with their method pushed scene-flow accuracy to 46%.
