Deep learning has come a long way since the days when it could only recognize handwritten characters on checks and envelopes. Today, deep neural networks have become a key component of many computer vision applications, from photo and video editors to medical software and self-driving cars.
Roughly fashioned after the structure of the brain, neural networks have come closer to seeing the world as humans do. But they still have a long way to go, and they make mistakes in situations where humans would never err.
These situations, generally known as adversarial examples, change the behavior of an AI model in befuddling ways. Adversarial machine learning is one of the greatest challenges facing current artificial intelligence systems: adversarial examples can cause machine learning models to fail in unpredictable ways or leave them vulnerable to cyberattacks.
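To make the idea concrete, here is a minimal sketch of one common way adversarial examples are crafted, the Fast Gradient Sign Method (FGSM), which nudges each pixel of an image slightly in the direction that increases the model's loss. The model, input, and epsilon value below are hypothetical placeholders for illustration, not something described in this article.

```python
# Hedged sketch: FGSM-style adversarial perturbation of an image.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_example(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of `x` that pushes the model toward a wrong prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that increases the loss,
    # then clamp back to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage with a made-up classifier and a random "image"; a real attack
# would target a trained vision model and a genuine input.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([7])
adversarial_image = fgsm_example(model, image, label)
print((adversarial_image - image).abs().max())  # perturbation stays within epsilon
```

The perturbation is typically too small for a person to notice, yet it can flip the model's prediction, which is why such examples are so unsettling for computer vision systems.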
Creating AI systems that are resilient against adversarial attacks has become an active area of research and a hot topic of discussion at AI conferences. In computer vision, one interesting method to protect deep learning systems against adversarial attacks is to apply findings in neuroscience to close the gap between neural networks and the mammalian vision system.