Adversarial Examples: Attacking Deep Learning Models
Explore the concept of adversarial examples and practice adversarial attacking with PyTorch.
Deep learning models often have enormous numbers of parameters, sometimes tens of millions or more, which makes it difficult for humans to understand exactly what they have learned beyond the fact that they perform remarkably well on CV and NLP tasks. If you feel entirely comfortable applying deep learning to every practical problem without a second thought, this chapter will help you recognize the risks your models may be exposed to.
What are adversarial examples, and how are they created?
Adversarial examples are samples, often crafted by modifying real data, that a machine learning system easily misclassifies even though they may look perfectly normal to the human eye. For image data, the modification can be a small amount of carefully crafted noise: a perturbation too subtle for a person to notice, yet enough to flip the model's prediction.
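One classic way to craft such a perturbation is the Fast Gradient Sign Method (FGSM): take the gradient of the loss with respect to the input and step the input in the direction that increases the loss. Below is a minimal sketch in PyTorch; the tiny linear "classifier" and the 4-dimensional "image" are stand-ins for illustration, not part of the original lesson.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in classifier: maps a 4-dim input to 2 class logits.
model = nn.Linear(4, 2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # clean input (a stand-in "image")
y = torch.tensor([0])                      # its true label

# Forward and backward pass to get the gradient of the loss w.r.t. the input.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM step: move each input dimension by epsilon in the direction
# that increases the loss, so every change is bounded by epsilon.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print((x_adv - x).abs().max().item())  # perturbation magnitude, at most 0.1
```

Because only the sign of the gradient is used, the perturbation is bounded in the infinity norm: no single pixel changes by more than `epsilon`, which is why adversarial images can remain visually indistinguishable from the originals.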