Archives for adversarial attacks

01 Aug

How To Confuse a Neural Network Using Fast Gradient Sign Method?


Many machine learning models, including neural networks, consistently misclassify adversarial examples. Adversarial examples are specialised inputs created to confuse neural networks, ultimately resulting in misclassification. To human eyes these notorious inputs look almost identical to the original image, yet they cause a neural network to fail to identify the image's content.
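The Fast Gradient Sign Method (FGSM) named in the title builds such an input by nudging each pixel in the direction of the sign of the loss gradient. A minimal PyTorch sketch, assuming a pretrained classifier model and a batched image tensor scaled to [0, 1] (the function name and epsilon value are illustrative, not taken from the article):

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    # image: [0, 1]-scaled tensor with a batch dimension; label: true class index.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # FGSM: x_adv = x + epsilon * sign(gradient of the loss w.r.t. x)
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

A small epsilon keeps the perturbation imperceptible to a human while still flipping the model's prediction.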

The post How To Confuse a Neural Network Using Fast Gradient Sign Method? appeared first on Analytics India Magazine.

11 Mar

Explained: MIT Scientists’ New Reinforcement Learning Approach To Tackle Adversarial Attacks

Adversarial inputs, also known as machine learning’s optical illusions, are inputs that an attacker has intentionally designed to confuse the model into making a mistake. Such inputs are particularly dangerous for systems with a very low margin for risk. For instance, in self-driving cars, an attacker could target an autonomous vehicle with…

The post Explained: MIT Scientists’ New Reinforcement Learning Approach To Tackle Adversarial Attacks appeared first on Analytics India Magazine.

13 Oct

What Is Poisoning Attack & Why It Deserves Immediate Attention


In a study by IDC, it was found that the global cybersecurity market was worth $107 billion in 2019 and is poised to grow to $151 billion by 2023. While most of this expenditure goes towards designing software and hardware that protect systems from hacking or network compromise, an area which is often overlooked…

The post What Is Poisoning Attack & Why It Deserves Immediate Attention appeared first on Analytics India Magazine.

15 Aug

How To Deter Adversarial Attacks In Computer Vision Models

While computer vision has become one of the most widely used technologies across the globe, computer vision models are not immune to threats. One of the reasons for this is the underlying lack of robustness of the models. Indrajit Kar, Principal Solution Architect at Accenture, delivered a talk at CVDC 2020…

The post How To Deter Adversarial Attacks In Computer Vision Models appeared first on Analytics India Magazine.

13 Jul

How To Secure Deep Learning Models From Adversarial Attacks


With recent advancements in deep learning, it has become critical to improve the robustness of deployed algorithms. Vulnerability to adversarial samples has always been a critical concern when deploying DL models for safety-critical tasks like autonomous driving, fraud detection, and facial recognition. Such adversarial inputs are usually undetectable to the human eye. However,…
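A widely used baseline defence in this setting is adversarial training: augmenting each training batch with perturbed copies of itself so the model learns to resist them. A hedged sketch in PyTorch, with FGSM as the example attack (model, optimizer, and the epsilon value are placeholders, not the article's specific recipe):

import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # Craft FGSM adversaries from the current clean batch (x, y).
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Optimise on clean and adversarial examples together.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()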

The post How To Secure Deep Learning Models From Adversarial Attacks appeared first on Analytics India Magazine.

13 Apr

Researchers Dug Deeper Into Deepfake To Uncover Can Of Worms

Google, along with the University of California at Berkeley, has recently published a paper claiming that even the best forensic classifiers (AI trained to distinguish between real and synthetic media) are prone to adversarial attacks. This follows earlier work by researchers from the University of California at San Diego, who proved that it is…

The post Researchers Dug Deeper Into Deepfake To Uncover Can Of Worms appeared first on Analytics India Magazine.

10 Feb

How To Fool AI With Adversarial Attacks

Research into adversarial attacks has been the latest trend in technology, with developers, experts, and scientists trying to trick AI systems by making subtle changes to their inputs. Undoubtedly, ML models perform miserably if they are evaluated in a completely different environment, as we are yet to develop an AI that can generalise and deliver superior results…

The post How To Fool AI With Adversarial Attacks appeared first on Analytics India Magazine.