In an article in IEEE Spectrum, security researcher Jonathan Petit published details on how easy it is to hack self-driving cars by mimicking LIDAR signals. Using an off-the-shelf Raspberry Pi or Arduino along with a low-powered laser, he was able to demonstrate a proof-of-concept hack of a self-driving car's alerting system.
“I can take echoes of a fake car and put them at any location I want,” says Jonathan Petit, Principal Scientist at Security Innovation, a software security company. “And I can do the same with a pedestrian or a wall.”
Using such a system, attackers could trick a self-driving car into thinking something is directly ahead of it, thus forcing it to slow down. Or they could overwhelm it with so many spurious signals that the car would not move at all for fear of hitting phantom obstacles.
Unlike the short-range radars on self-driving cars, which operate in frequency bands that require a license, LIDAR systems, which use pulses of light to generate 3-D pictures of the environment, are easily hacked. An attacker who knows what they are doing can generate phantom obstacles or pedestrians around the vehicle, leading to erratic behavior by the affected car.
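The attack works because a LIDAR unit infers distance from the round-trip time of its light pulse; a counterfeit return delayed by the right amount reads as an object at whatever range the attacker chooses. A minimal sketch of that time-of-flight arithmetic (illustrative only, not Petit's actual tooling):

```python
# Speed of light in a vacuum, meters per second.
C = 299_792_458.0

def spoof_delay_ns(fake_distance_m: float) -> float:
    """Delay (in nanoseconds) a counterfeit return pulse needs so the
    LIDAR perceives an object at fake_distance_m: the round-trip
    time of flight, 2 * d / c."""
    return 2 * fake_distance_m / C * 1e9

# A spoofed echo delayed by roughly 667 ns appears as an obstacle
# about 100 meters away.
delay = spoof_delay_ns(100.0)
```

The sub-microsecond timing involved is well within reach of cheap microcontrollers, which is consistent with Petit's use of commodity hardware.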
Petit was able to create the illusion of a fake car, wall, or pedestrian anywhere from 20 to 350 meters from the LIDAR unit, make multiple copies of the simulated obstacles, and even set them in motion. "I can spoof thousands of objects and basically carry out a denial of service attack on the tracking system so it's not able to track real objects," he says. Petit's attack worked at distances of up to 100 meters, in front of, to the side of, or even behind the LIDAR under attack, and did not require him to target the unit precisely with a narrow beam.
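The denial-of-service effect can be sketched with a toy model (hypothetical code; the tracker capacity and ranges are assumptions, not details from Petit's work): if a tracker can only follow a bounded number of returns, a flood of phantoms can crowd out the genuine ones.

```python
import random

MAX_TRACKS = 64  # assumed capacity of a simple object tracker

def flood_returns(n_fake: int, min_m: float = 20.0, max_m: float = 350.0):
    """Generate n_fake phantom echoes at random ranges inside the
    band of distances Petit showed could be spoofed."""
    return [random.uniform(min_m, max_m) for _ in range(n_fake)]

real_objects = [42.0, 87.5]           # genuine obstacles (meters)
observed = flood_returns(1000) + real_objects

# A naive tracker that keeps only its MAX_TRACKS nearest returns is
# likely to lose the real objects among the spoofed ones.
tracked = sorted(observed)[:MAX_TRACKS]
```

Real perception stacks are far more sophisticated, but the sketch shows why "thousands of objects" translates into a tracking failure rather than just visual clutter.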
It is some comfort that these hacks are being explored early, while the self-driving car industry is still maturing. Understanding these types of attacks should help harden the guidance systems. Robust resistance to hacking will likely be an adoption requirement in the coming years.