
New system helps self-driving cars predict pedestrian movement

Source: Xinhua | 2019-02-13 08:25:31 | Editor: WX

CHICAGO, Feb. 12 (Xinhua) -- Researchers at the University of Michigan (UM) are teaching self-driving cars to recognize and predict pedestrian movements with greater precision by zeroing in on humans' gait, body symmetry and foot placement.

According to a news release posted on UM's website Tuesday, the researchers captured video snippets of humans in motion from data collected by vehicles through cameras, LiDAR and GPS, and recreated them in a 3D computer simulation.

Based on this data, they created a "biomechanically inspired recurrent neural network" that catalogs human movements. With it, they can predict poses and future locations for one or several pedestrians up to about 50 yards from the vehicle, roughly the scale of a city intersection.
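The article does not describe the network's architecture in detail, but the general idea of a recurrent model that consumes a short history of pedestrian poses and rolls out future poses can be sketched as follows. This is a minimal illustrative example, not the UM team's actual model; all layer sizes, dimensions, and names are assumptions.

```python
import torch
import torch.nn as nn

class PoseForecaster(nn.Module):
    """Illustrative recurrent pose forecaster (hypothetical, not the UM model)."""

    def __init__(self, pose_dim=51, hidden_dim=128, horizon=10):
        super().__init__()
        self.horizon = horizon                      # number of future steps to predict
        self.encoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, pose_dim)

    def forward(self, past_poses):
        # past_poses: (batch, T_past, pose_dim) observed joint positions
        _, h = self.encoder(past_poses)             # summarize the observed motion
        step = past_poses[:, -1:, :]                # start decoding from the last observed pose
        preds = []
        for _ in range(self.horizon):
            out, h = self.decoder(step, h)
            step = self.readout(out)                # predicted pose at the next time step
            preds.append(step)
        return torch.cat(preds, dim=1)              # (batch, horizon, pose_dim)
```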

The results have shown that this new system improves upon a driverless vehicle's capacity to recognize what's most likely to happen next.

"The median translation error of our prediction was approximately 10 cm after one second and less than 80 cm after six seconds. All other comparison methods were up to 7 meters off," said Matthew Johnson-Roberson, associate professor in UM's Department of Naval Architecture and Marine Engineering. "We're better at figuring out where a person is going to be."

To rein in the number of options for predicting the next movement, the researchers applied the physical constraints of the human body, such as the fact that a person cannot fly and can only move so fast on foot.
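One simple way to picture such a constraint is to cap the predicted per-step displacement at a plausible maximum pedestrian speed, as in the sketch below. The speed limit and function names here are assumptions for illustration, not figures from the study.

```python
import numpy as np

MAX_SPEED_MPS = 4.0   # assumed upper bound for pedestrian speed (m/s)

def constrain_step(prev_xy, pred_xy, dt):
    """Clamp a predicted planar position so it stays physically reachable in time dt."""
    delta = pred_xy - prev_xy
    dist = np.linalg.norm(delta)
    max_dist = MAX_SPEED_MPS * dt
    if dist > max_dist:
        pred_xy = prev_xy + delta * (max_dist / dist)   # scale the step back to the limit
    return pred_xy
```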

"Now, we're training the system to recognize motion and making predictions of not just one single thing, whether it's a stop sign or not, but where that pedestrian's body will be at the next step and the next and the next," said Johnson-Roberson.

Prior work in the area typically looked only at still images. It wasn't really concerned with how people move in three dimensions, said Ram Vasudevan, UM assistant professor of mechanical engineering.

By utilizing video clips that run for several seconds, the UM system can study the first half of the snippet to make its predictions, and then verify the accuracy with the second half.
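A rough sketch of this evaluation idea: feed the first half of a clip to the model, then compare its forecast against the ground-truth second half using a translation-error metric like the one quoted earlier. The function names and data layout are hypothetical.

```python
import numpy as np

def evaluate_clip(model_predict, clip_xy, dt):
    """clip_xy: (T, 2) ground-truth planar positions for one pedestrian clip."""
    half = len(clip_xy) // 2
    observed, future = clip_xy[:half], clip_xy[half:]
    predicted = model_predict(observed, steps=len(future))   # forecast (len(future), 2) positions
    errors = np.linalg.norm(predicted - future, axis=1)      # per-step translation error (m)
    return np.median(errors)
```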

"We are open to diverse applications and exciting interdisciplinary collaboration opportunities, and we hope to create and contribute to a safer, healthier, and more efficient living environment," said UM research engineer Xiaoxiao Du.

The study has been published online in IEEE Robotics and Automation Letters, and will appear in a forthcoming print edition.
