Yun Sha, Zhaoyu Chen, Xuejun Liu, Yong Yan, Chenchen Du, Jiayi Liu and Ranran Han
The scarcity of attack samples is the main bottleneck for anomaly detection of underlying business data in industrial control systems. Prior work has studied temporal data generation extensively, but most of it is not suitable for indust...
Weimin Zhao, Sanaa Alwidian and Qusay H. Mahmoud
Deep neural networks are exposed to adversarial attacks such as the fast gradient sign method (FGSM), projected gradient descent (PGD), and other attack algorithms. Adversarial training is one of the methods used to defend against the thr...
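The FGSM attack named above can be sketched in a few lines: it perturbs the input one step in the signed-gradient direction that increases the loss. The snippet below uses a toy logistic-regression scorer; the weights, inputs, and the `fgsm_attack`/`xent` names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM against a toy logistic-regression model.

    Returns x_adv = x + eps * sign(dL/dx), where L is the
    cross-entropy loss of the model's prediction against label y.
    """
    p = sigmoid(np.dot(w, x) + b)   # model's predicted probability
    grad_x = (p - y) * w            # dL/dx for the cross-entropy loss
    return x + eps * np.sign(grad_x)

def xent(x, y, w, b):
    """Cross-entropy loss of the toy model on (x, y)."""
    p = sigmoid(np.dot(w, x) + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))
```

For small `eps` the perturbed input stays within an L-infinity ball of radius `eps` around the original while the loss strictly increases, which is exactly the weakness adversarial training tries to close.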
Dapeng Lang, Deyun Chen, Sizhao Li and Yongjun He
Deep models are widely used and have been shown to carry hidden security risks. Adversarial attacks can bypass traditional means of defense: by modifying the input data, an attack on the deep model is realized that remains imperceptible ...
Yuting Guan, Junjiang He, Tao Li, Hui Zhao and Baoqiang Ma
SQL injection is a highly detrimental web attack technique that can result in significant data leakage and compromise system integrity. To counteract the harm caused by such attacks, researchers have devoted much attention to the examination of SQL injec...
Dapeng Lang, Deyun Chen, Jinjie Huang and Sizhao Li
Small perturbations can make deep models fail. Since deep models are widely used in face recognition systems (FRS) such as surveillance and access control, adversarial examples may pose subtle threats to such systems. In this paper,...
Li Fan, Wei Li and Xiaohui Cui
As image-synthesis techniques have advanced, many deepfake-image forensic detectors have been proposed and refined. However, recent studies show that most of these detectors are not immune to adversarial example attacks. Therefore, understanding the i...
William Villegas-Ch, Angel Jaramillo-Alcázar and Sergio Luján-Mora
This study evaluated the generation of adversarial examples and the subsequent robustness of an image classification model. The attacks were performed using the Fast Gradient Sign method, the Projected Gradient Descent method, and the Carlini and Wagner ...
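Of the attacks evaluated above, PGD is essentially an iterated signed-gradient step with a projection back onto an L-infinity ball around the original input. A minimal sketch on a toy logistic model follows; the parameter names and the model itself are assumptions for illustration, not the study's setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps, alpha, steps):
    """L-infinity PGD against a toy logistic-regression scorer.

    Repeats small signed-gradient ascent steps of size alpha and,
    after each step, projects back into the eps-ball around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(np.dot(w, x_adv) + b)
        grad = (p - y) * w                        # gradient of cross-entropy w.r.t. input
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
    return x_adv
```

Because the step is repeated and re-projected, PGD typically finds stronger adversarial examples than single-step FGSM at the same perturbation budget, which is why it is a standard robustness benchmark.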
Meng Bi, Xianyun Yu, Zhida Jin and Jian Xu
In this paper, we propose an Iterative Greedy Universal Adversarial Perturbations (IG-UAP) approach based on an iterative greedy algorithm to create universal adversarial perturbations for acoustic prints. A thorough, objective account of the IG-UAP metho...
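A universal adversarial perturbation is a single vector added to every input. A simplified greedy loop in that spirit, on a toy linear classifier, is sketched below; this is a generic illustration of the idea, not the paper's IG-UAP algorithm, and all names and parameters are assumptions.

```python
import numpy as np

def build_universal_perturbation(X, y, w, b, eps, step, passes=5):
    """Greedy universal-perturbation loop (simplified sketch).

    Iterates over the dataset; whenever the current universal
    perturbation v fails to flip a sample's prediction, nudges v with a
    signed-gradient step for that sample and re-clips v to the
    L-infinity budget eps so a single v serves all inputs.
    """
    v = np.zeros_like(X[0])
    for _ in range(passes):
        for x, label in zip(X, y):
            logit = np.dot(w, x + v) + b
            pred = int(logit > 0)
            if pred == label:                 # v does not yet fool this sample
                p = 1.0 / (1.0 + np.exp(-logit))
                grad = (p - label) * w        # cross-entropy gradient w.r.t. input
                v = v + step * np.sign(grad)  # per-sample greedy update
                v = np.clip(v, -eps, eps)     # keep v within the universal budget
    return v
```

The key property, mirrored in the abstract's setting, is that the returned `v` is input-agnostic: the same bounded vector degrades predictions across the whole dataset rather than for one sample.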
Woonghee Lee, Mingeon Ju, Yura Sim, Young Kul Jung, Tae Hyung Kim and Younghoon Kim
Deep learning-based segmentation models have made a profound impact on medical procedures, with U-Net based computed tomography (CT) segmentation models exhibiting remarkable performance. Yet, even with these advances, these models are found to be vulner...
Jiazhu Dai and Siwei Xiong
Capsule networks are a type of neural network that use the spatial relationship between features to classify images. By capturing the poses and relative positions between features, this network is better able to recognize affine transformations and surpas...
João Vitorino, Nuno Oliveira and Isabel Praça
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a...
Everton Jose Santana, Ricardo Petri Silva, Bruno Bogaz Zarpelão and Sylvio Barbon Junior
With data collected by Internet of Things sensors, deep learning (DL) models can forecast the generation capacity of photovoltaic (PV) power plants. This functionality is especially relevant for PV power operators and users as PV plants exhibit irregular...