32 Articles

Page 1 of 2

 
Online
João Vitorino, Nuno Oliveira and Isabel Praça
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a... see more
Journal: Future Internet    Format: Electronic

 
Online
Dapeng Lang, Deyun Chen, Sizhao Li and Yongjun He
Deep models are widely used and have been shown to carry hidden security risks. An adversarial attack can bypass traditional means of defense: by modifying the input data, the attack on the deep model is realized, and it is imperceptible ... see more
Journal: Information    Format: Electronic

 
Online
Jiaping Wu, Zhaoqiang Xia and Xiaoyi Feng
In recent years, adversarial examples have aroused widespread research interest and raised concerns about the safety of CNNs. We study adversarial machine learning inspired by a support vector machine (SVM), where the decision boundary with maximum margi... see more
Journal: Applied Sciences    Format: Electronic

 
Online
Shayan Taheri, Aminollah Khormali, Milad Salem and Jiann-Shiun Yuan
In this work, we propose a novel defense system against adversarial examples leveraging the unique power of Generative Adversarial Networks (GANs) to generate new adversarial examples for model retraining. To do so, we develop an automated pipeline using... see more
Journal: Big Data and Cognitive Computing    Format: Electronic
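
The retraining defense this abstract describes boils down to folding adversarially crafted examples back into the training set. A minimal, hypothetical PyTorch sketch of that step (make_adversarial stands in for the paper's GAN-based generation step, which is not shown here):

    from torch.utils.data import TensorDataset, ConcatDataset

    def augment_with_adversarial(clean_x, clean_y, make_adversarial):
        # Craft adversarial variants of the clean inputs; this callable is a
        # placeholder for the GAN-based generation the abstract mentions.
        adv_x = make_adversarial(clean_x)
        # Keep the original labels so retraining teaches the model to
        # classify the adversarial variants correctly.
        return ConcatDataset([TensorDataset(clean_x, clean_y),
                              TensorDataset(adv_x, clean_y)])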

 
Online
James Msughter Adeke, Guangjie Liu, Junjie Zhao, Nannan Wu and Hafsat Muhammad Bashir
Machine learning (ML) models are essential to securing communication networks. However, these models are vulnerable to adversarial examples (AEs), in which malicious inputs are modified by adversaries to produce the desired output. Adversarial training i... see more
Journal: Future Internet    Format: Electronic
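
For context, adversarial training of the kind this abstract refers to typically crafts perturbed inputs on the fly and updates the model on them. A minimal sketch in PyTorch, assuming a generic model, loss_fn, and optimizer (placeholders, not the article's actual setup):

    import torch

    def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon=0.03):
        # Craft adversarial examples on the fly (one FGSM step, for brevity).
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

        # Update the model on the perturbed batch so it learns to resist it.
        optimizer.zero_grad()
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()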

 
Online
Suliman A. Alsuhibany
The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) technique has been a topic of interest for several years. The ability of computers to recognize CAPTCHA has significantly increased due to the development of deep le... see more
Journal: Applied Sciences    Format: Electronic

 
Online
Saqib Ali, Sana Ashraf, Muhammad Sohaib Yousaf, Shazia Riaz and Guojun Wang
The successful outcomes of deep learning (DL) algorithms in diverse fields have prompted researchers to study backdoor attacks on DL models so that they can be defended in practical applications. Adversarial examples could deceive a safety-critical system, which co... see more
Journal: Applied Sciences    Format: Electronic

 
Online
Valeria Mercuri, Martina Saletta and Claudio Ferretti
As the prevalence and sophistication of cyber threats continue to increase, the development of robust vulnerability detection techniques becomes paramount in ensuring the security of computer systems. Neural models have demonstrated significant potential... see more
Journal: Algorithms    Format: Electronic

 
Online
Dapeng Lang, Deyun Chen, Jinjie Huang and Sizhao Li
Small perturbations can make deep models fail. Since deep models are widely used in face recognition systems (FRS) such as surveillance and access control, adversarial examples may pose subtle threats to these systems. In this paper,... see more
Journal: Algorithms    Format: Electronic

 
Online
Songshen Han, Kaiyong Xu, Songhui Guo, Miao Yu and Bo Yang
Automatic Speech Recognition (ASR) provides a new way of human-computer interaction. However, it is vulnerable to adversarial examples, which are obtained by deliberately adding perturbations to the original audio. Thorough studies on the universal feat... see more
Journal: Applied Sciences    Format: Electronic
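
To make the mechanism concrete: an audio adversarial example is the original waveform plus a small bounded perturbation. A toy numpy sketch (the perturbation here is random noise for illustration; a real attack would optimize it against the ASR model):

    import numpy as np

    def perturb_waveform(waveform, epsilon=0.002):
        # Small bounded perturbation; real attacks optimize this delta.
        delta = np.random.uniform(-epsilon, epsilon, size=waveform.shape)
        # Keep samples within the valid [-1, 1] range.
        return np.clip(waveform + delta, -1.0, 1.0)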

 
Online
Li Fan, Wei Li and Xiaohui Cui
Many deepfake-image forensic detectors have been proposed and improved due to the development of synthetic techniques. However, recent studies show that most of these detectors are not immune to adversarial example attacks. Therefore, understanding the i... see more
Journal: Future Internet    Format: Electronic

 
Online
Everton Jose Santana, Ricardo Petri Silva, Bruno Bogaz Zarpelão and Sylvio Barbon Junior
With data collected by Internet of Things sensors, deep learning (DL) models can forecast the generation capacity of photovoltaic (PV) power plants. This functionality is especially relevant for PV power operators and users as PV plants exhibit irregular... see more
Journal: Information    Format: Electronic

 
Online
Mingyong Yin, Yixiao Xu, Teng Hu and Xiaolei Liu
Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video classification systems. Video adversarial attacks add subtle noise to the original example, resulti... see more
Journal: Applied Sciences    Format: Electronic

 
Online
Hassan Khazane, Mohammed Ridouani, Fatima Salahdine and Naima Kaabouch
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are dep... see more
Journal: Future Internet    Format: Electronic

 
Online
Haoxuan Qiu, Yanhui Du and Tianliang Lu
To protect images from deepfake tampering, adversarial examples can be made to replace the original images, distorting the output of the deepfake model and disrupting its work. Current studies lack generalizability in that they simply focus on t... see more
Journal: Future Internet    Format: Electronic

 
Online
Yuting Guan, Junjiang He, Tao Li, Hui Zhao and Baoqiang Ma
SQL injection is a highly detrimental web attack technique that can result in significant data leakage and compromise system integrity. To counteract the harm caused by such attacks, researchers have devoted much attention to the examination of SQL injec... see more
Journal: Future Internet    Format: Electronic

 
Online
Yibin Ruan and Jiazhu Dai
Deep neural networks have achieved great progress on tasks involving complex abstract concepts. However, there exist adversarial perturbations, imperceptible to humans, that can tremendously undermine the performance of deep neural network class... see more
Journal: Future Internet    Format: Electronic

 
Online
William Villegas-Ch, Angel Jaramillo-Alcázar and Sergio Luján-Mora
This study evaluated the generation of adversarial examples and the subsequent robustness of an image classification model. The attacks were performed using the Fast Gradient Sign method, the Projected Gradient Descent method, and the Carlini and Wagner ... see more
Journal: Big Data and Cognitive Computing    Format: Electronic
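
Of the three attacks named in this abstract, the Fast Gradient Sign method is the simplest to state. A minimal PyTorch sketch, assuming a differentiable model and loss_fn and inputs scaled to [0, 1] (all assumptions, not the study's actual setup):

    import torch

    def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
        # Step by epsilon in the direction of the loss gradient's sign,
        # then clamp back to the valid input range.
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()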

 
Online
Dejian Guan, Wentao Zhao and Xiao Liu
Recent studies show that deep neural network (DNN)-based object recognition algorithms overly rely on object textures rather than global object shapes, and that DNNs are also vulnerable to human-imperceptible adversarial perturbations. Based on these two... see more
Journal: Applied Sciences    Format: Electronic
