Songshen Han, Kaiyong Xu, Songhui Guo, Miao Yu and Bo Yang
Automatic Speech Recognition (ASR) provides a new mode of human-computer interaction. However, it is vulnerable to adversarial examples, which are obtained by deliberately adding perturbations to the original audio. Thorough studies on the universal feat...
Dapeng Lang, Deyun Chen, Sizhao Li and Yongjun He
Deep models are widely used and have been shown to carry hidden security risks. An adversarial attack can bypass traditional means of defense: by modifying the input data, the attack on the deep model is realized, and it is imperceptible ...
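The input-modification attack described above can be illustrated with a minimal sketch of the classic Fast Gradient Sign Method (FGSM) on a toy logistic model. The model, weights, and numbers here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """FGSM on a logistic model p = sigmoid(w @ x).

    For binary cross-entropy, the gradient of the loss w.r.t. the
    input x is (p - y) * w; the attack adds eps * sign(gradient),
    so the perturbation is bounded by eps in every coordinate.
    """
    p = sigmoid(w @ x)
    grad = (p - y) * w          # dL/dx for binary cross-entropy
    return x + eps * np.sign(grad)

# Toy point the model classifies correctly as class 1 (p > 0.5)...
w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])        # w @ x = 1.5
x_adv = fgsm(x, y=1.0, w=w, eps=0.9)
# ...is pushed across the decision boundary by a small, bounded change.
p_adv = sigmoid(w @ x_adv)
```

Here the clean input scores above 0.5 while the perturbed one scores below it, even though no coordinate changed by more than `eps`, which is what makes such attacks hard to notice.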
Li Fan, Wei Li and Xiaohui Cui
Many deepfake-image forensic detectors have been proposed and improved as synthesis techniques have developed. However, recent studies show that most of these detectors are not immune to adversarial example attacks. Therefore, understanding the i...
Sharoug Alzaidy and Hamad Binsalleeh
In the field of behavioral detection, deep learning has been used extensively; for example, deep learning models have been applied to detect and classify malware. Deep learning, however, has vulnerabilities that can be exploited with crafted inputs,...
João Vitorino, Nuno Oliveira and Isabel Praça
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a...