Artificial intelligence (AI) has become a crucial technology for the development of self-driving vehicles, where it handles tasks such as sensing, predictive modeling, and decision-making. However, recent research conducted at the University at Buffalo has raised concerns about how vulnerable these AI systems are to attack.
The study suggests that malicious actors could cause these AI systems to fail, with serious consequences. For example, 3D-printed objects placed strategically on a vehicle can render it invisible to AI-powered radar detection. This vulnerability could have significant implications for the automotive, tech, and insurance industries, among others, as well as for government regulators and policymakers.
Led by Chunming Qiao, SUNY Distinguished Professor in the Department of Computer Science and Engineering, the research team at the University at Buffalo has been investigating the vulnerability of AI systems in self-driving vehicles. In a series of papers dating back to 2021, the team has detailed the threats that adversarial attacks pose to autonomous vehicles.
In one test, the researchers used 3D printers and metal foils to fabricate objects known as “tile masks.” Placing these masks on a vehicle misled the AI models that interpret radar returns, effectively making the vehicle disappear from radar. The experiment demonstrated that AI-powered radar detection is susceptible to physical, external attacks.
The researchers highlighted the concept of adversarial examples: carefully crafted inputs that trick an AI system into producing incorrect output because it was never trained to handle them. For instance, subtle modifications to an image, often imperceptible to humans, can cause an AI system to misclassify it. In an autonomous vehicle, attackers could exploit the same weakness in radar detection systems, manipulating sensor data to cause harm.
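To make the concept concrete, the sketch below implements the Fast Gradient Sign Method (FGSM), a standard attack from the adversarial machine learning literature in which each pixel is nudged a small step in the direction that increases the model's loss. This is an illustrative toy, not the attack used in the UB study; the pretrained ResNet-18 classifier, the random stand-in image, and the perturbation budget are all placeholder assumptions.

```python
# Minimal FGSM sketch (illustrative only; model and values are assumptions).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder input: a random tensor standing in for a real photo.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
label = torch.tensor([0])  # assumed true class index

# Compute the loss with respect to the true label, then its gradient.
loss = F.cross_entropy(model(x), label)
loss.backward()

# FGSM: move every pixel a small step in the direction that raises the loss.
epsilon = 0.03  # perturbation budget; small enough to be visually subtle
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# The two inputs look nearly identical, yet the prediction can flip.
with torch.no_grad():
    print("clean prediction:", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The same principle carries over to radar: rather than perturbing pixels in software, the tile masks perturb the physical radar reflections the model receives.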
Potential attackers could discreetly place adversarial objects on a vehicle before it begins a trip or while it is parked, creating safety risks for passengers and other road users. Such attacks could be motivated by insurance fraud, competition between autonomous driving companies, or personal vendettas. While the likelihood of these attacks may vary, the researchers emphasized the importance of addressing external threats to autonomous vehicles.
While researchers are actively exploring ways to mitigate these vulnerabilities, robust defenses against adversarial attacks on the AI systems in self-driving vehicles remain a long way off. The security of radar, cameras, and other AI-powered components such as motion-planning systems must be thoroughly investigated to ensure the safety and reliability of autonomous vehicles in the future.
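One widely studied mitigation in the broader literature is adversarial training, in which attacked inputs are generated during training so the model learns to classify them correctly. The sketch below illustrates only that general technique; it is not the UB team's method, and the tiny classifier, FGSM attack, and synthetic data are placeholder assumptions.

```python
# Adversarial training sketch (general technique only; all details assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64),
                      nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1  # assumed perturbation budget

def fgsm(x, y):
    """Craft an FGSM adversarial example against the current model."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

for step in range(100):
    # Synthetic stand-in data; in practice this would be a real dataset.
    x = torch.rand(32, 1, 28, 28)
    y = torch.randint(0, 10, (32,))
    x_adv = fgsm(x, y)  # attack the model as it currently stands
    # Train on clean and adversarial inputs so accuracy holds up under attack.
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Defenses like this raise the cost of an attack rather than eliminate it, which is one reason truly foolproof protection remains out of reach.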