
San Francisco, April 7 (IANS) Hackers have demonstrated how they could trick a Tesla Model S into entering the wrong lane using a method called an "adversarial attack," a way of manipulating a machine learning (ML) model.
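For readers unfamiliar with the technique, below is a minimal sketch of one classic digital adversarial attack, the fast gradient sign method (FGSM), which perturbs an input just enough to change a model's prediction. The model, image, label and epsilon here are illustrative placeholders, and the Keen Lab demonstration used physical stickers on the road rather than pixel-level perturbations.

```python
# Illustrative FGSM sketch (PyTorch); not Tesla's Autopilot or Keen Lab's method.
import torch
import torch.nn as nn

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed to raise the model's loss on `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()
    # Step each pixel in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```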
Tesla's Autopilot recognises lanes and assists steering by identifying road markings.
Researchers from the Keen Security Lab of Chinese tech giant Tencent showed that by placing interference stickers on the road, they could feed the autopilot system information that forced it into an abnormal judgement and steered the vehicle into the wrong lane.
"In this demonstration, the researchers adjusted the physical environment (e.g. placing tape on the road or altering lane lines) around the vehicle to make the car behave differently when autopilot is in use," a Tesla spokesperson was quoted as saying in a Keen Security Lab blog.
"This is not a real world concern given that a driver can easily override autopilot at any time by using the steering wheel or brakes and should be prepared to do so at all times," the spokesperson said.
According to a report this month in The Download, MIT Technology Review's newsletter, adversarial attacks could become more common as machine learning is deployed more widely, especially in areas like network security.