Adversarial inputs, sometimes described as machine learning’s optical illusions, are inputs an attacker has intentionally designed to trick the model into making a mistake. Such inputs can be particularly dangerous for systems with a very low margin for error. For instance, in self-driving cars, an attacker could target an autonomous vehicle with…
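To make the idea concrete, the sketch below crafts an adversarial input with the Fast Gradient Sign Method (FGSM), a standard attack that nudges an input in the direction that increases the model’s loss. This is a generic illustration of adversarial perturbations, not the MIT reinforcement learning approach the post covers; the toy logistic-regression model, its weights, and the epsilon value are hypothetical.

```python
# Minimal FGSM sketch on a toy logistic-regression classifier (illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """Return an adversarial copy of x for a logistic-regression model.

    The input is pushed in the direction that increases the cross-entropy
    loss: x_adv = x + epsilon * sign(dL/dx).
    """
    p = sigmoid(np.dot(w, x) + b)   # predicted probability of class 1
    grad_x = (p - y) * w            # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad_x)

# Toy usage: a small, targeted perturbation lowers the score the model
# assigns to the correct class, illustrating how little change is needed.
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.0
x, y = rng.normal(size=4), 1.0
x_adv = fgsm_perturb(x, y, w, b, epsilon=0.25)
print("clean score:", sigmoid(np.dot(w, x) + b))
print("adversarial score:", sigmoid(np.dot(w, x_adv) + b))
```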
The post Explained: MIT Scientists’ New Reinforcement Learning Approach To Tackle Adversarial Attacks appeared first on Analytics India Magazine.