A team of Indian American researchers has developed a novel model that uses human inputs to uncover Artificial Intelligence (AI) "blind spots" in self-driving cars, so that the vehicles can avoid dangerous errors in the real world.
The model developed by MIT and Microsoft researchers identifies instances in which autonomous systems have "learned" from training examples that don't match what's actually happening in the real world.
Engineers could use this model to improve the safety of AI systems, such as driverless vehicles and autonomous robots.
"Many times, when these systems are deployed, their trained simulations don't match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors," explained Ramakrishnan.
The AI systems powering driverless cars are trained extensively in virtual simulations to prepare the vehicle for nearly every event on the road. But sometimes the car makes an unexpected error in the real world because an event occurs that should alter the car's behaviour but doesn't. A car that was never trained to recognise an ambulance, for instance, may fail to slow down and pull over when one approaches with its sirens on.
The researchers validated their method using video games, with a simulated human correcting the learned path of an on-screen character.
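The papers' actual algorithms are more involved, but the core idea can be conveyed in a minimal Python sketch: states where a (simulated) human overrides the policy's chosen action become positive labels, and a simple classifier is trained to flag similar states as likely blind spots. Everything here, from the toy two-feature states to the choice of logistic regression, is illustrative and not the researchers' implementation.

```python
# Minimal sketch of blind-spot discovery from human corrections.
# Toy setup: a policy "trained in simulation" always drives straight,
# while a simulated human knows that novel states require braking.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def policy(state):
    # Toy learned policy: always drive straight (action 0).
    return 0

def human_correction(state, action):
    # Simulated human overseer: in "novel" states (first feature > 0.7,
    # unseen in training) the correct action is to brake (action 1).
    if state[0] > 0.7 and action != 1:
        return 1       # human overrides the policy's action
    return None        # no correction needed

# Roll out the policy and record where the human intervened.
states, labels = [], []
for _ in range(1000):
    s = rng.random(2)                       # toy 2-feature state
    a = policy(s)
    corrected = human_correction(s, a)
    states.append(s)
    labels.append(1 if corrected is not None else 0)

# The blind-spot model predicts how likely a new state is to be one
# the policy mishandles.
blind_spot_model = LogisticRegression().fit(np.array(states), np.array(labels))
print(blind_spot_model.predict_proba([[0.9, 0.5]])[0, 1])  # high: likely blind spot
print(blind_spot_model.predict_proba([[0.2, 0.5]])[0, 1])  # low: well-trained region
```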
Co-authors on the papers are Julie Shah, an associate professor in the Department of Aeronautics and Astronautics and head of CSAIL's Interactive Robotics Group; and Ece Kamar, Debadeepta Dey, and Eric Horvitz – all from Microsoft Research.
"When the system is deployed into the real world, it can use learned model to act more cautiously and intelligently," said Ramakrishnan.