Date of Award

12-2023

Document Type

Thesis

Department

Computer Science

First Advisor

Manar Mohaisen

Abstract

As cybersecurity rapidly advances, machine learning methods, particularly Deep Convolutional Neural Networks (DCNNs), have become pivotal in understanding malware. While DCNNs are potent tools for malware classification, they are susceptible to adversarial attacks, in which input data is subtly altered to deceive the model. This study evaluates the effects of such attacks on the efficiency and robustness of a generalized DCNN-based malware detection system. We hypothesize that adversarial attacks will degrade the model's confidence and classification accuracy. Using a dataset of diverse malware samples drawn from established cybersecurity databases, we reproduced a specific adversarial attack method against a trained detection system. Preliminary findings show a marked reduction in the classification accuracy of DCNNs when exposed to this method, supporting our hypothesis. In conclusion, DCNNs hold promise for malware classification but display significant vulnerabilities, highlighting an urgent need for stronger countermeasures against adversarial attacks in cybersecurity.
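
The abstract does not name the attack method that was reproduced, so the sketch below should not be read as the thesis's actual procedure. It is a minimal PyTorch illustration of one widely studied attack of this kind, the Fast Gradient Sign Method (FGSM), which perturbs an input in the gradient direction that maximizes the classifier's loss; the function name fgsm_perturb, the epsilon value, and the classifier variable are hypothetical.

import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    # One-step FGSM: move each input feature by +/- epsilon in the
    # direction that increases the cross-entropy loss on the true label y.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    model.zero_grad()
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Clamp so the perturbed sample remains a valid normalized image
    # (malware binaries are often rendered as grayscale byte-plot images).
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: compare clean vs. adversarial accuracy of a trained DCNN.
# x_adv = fgsm_perturb(classifier, images, labels)
# clean_acc = (classifier(images).argmax(1) == labels).float().mean()
# adv_acc = (classifier(x_adv).argmax(1) == labels).float().mean()

A drop from clean_acc to adv_acc under such a bounded perturbation is the kind of accuracy reduction the abstract reports.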
