
Publication Number

2412095

 

Page Numbers

1-6

 

Paper Details

Adversarial Attacks and Defenses: Investigating How AI Systems Can Be Manipulated Through Adversarial Inputs and Methods to Defend Against Them

Authors

Gaurav Kashyap

Abstract

Artificial intelligence (AI) systems have advanced significantly and are now deployed in critical fields such as finance, healthcare, and autonomous driving. Their widespread use has, however, exposed a serious weakness: their vulnerability to adversarial attacks. These attacks apply small, carefully crafted perturbations to input data that cause AI models to misbehave or produce incorrect predictions, often without being detectable by humans. This paper examines the nature of adversarial attacks on AI systems, how they are constructed, their ramifications, and the defense strategies that have been proposed to protect AI models. By summarizing the main adversarial attack methods and defenses, we aim to improve understanding of, and resilience against, adversarial threats in practical applications.
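As an illustration of the kind of perturbation the abstract describes (this sketch is not taken from the paper), the fast gradient sign method (FGSM), one of the most widely studied attack methods, nudges each input feature by a small amount in the direction that increases the model's loss. A minimal PyTorch sketch, assuming a hypothetical differentiable classifier model, an input batch x with true labels y_true, and a perturbation budget epsilon:

    # Hypothetical FGSM sketch; model, x, y_true, and epsilon are placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y_true, epsilon=0.03):
        # Clone the input and track gradients with respect to it.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_true)
        loss.backward()
        # Take one signed-gradient step that increases the loss,
        # bounded in L-infinity norm by epsilon, then clamp to valid pixel range.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

The resulting x_adv differs from x by at most epsilon per feature, which is why such inputs frequently go undetected by human observers while still changing the model's prediction.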

Keywords

Artificial Intelligence (AI), Adversarial Attack, Adversarial Defense

 

. . .

Citation

Gaurav Kashyap. Adversarial Attacks and Defenses: Investigating How AI Systems Can Be Manipulated Through Adversarial Inputs and Methods to Defend Against Them. IJIRCT, Volume 7, Issue 1, 2021, Pages 1-6. https://www.ijirct.org/viewPaper.php?paperId=2412095
