Applied Sciences, Vol. 9, No. 11 (2019)

Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation

Xianfeng Gao, Yu-an Tan, Hongwei Jiang, Quanxin Zhang and Xiaohui Kuang

Abstract

In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, recent studies have revealed their vulnerability to small perturbations added to source inputs. The methods that generate these perturbations are called adversarial attacks, which fall into two types, black-box and white-box attacks, according to the adversary's access to the target model. To overcome black-box attackers' lack of access to the internals of the target DNN, researchers have put forward a series of strategies. Previous work includes training a local substitute model for the target black-box model via Jacobian-based augmentation and then using the substitute model to craft adversarial examples with white-box methods. In this work, we improve the dataset augmentation so that the substitute models better fit the decision boundary of the target model. Unlike previous work, which performed only non-targeted attacks, we are the first to generate targeted adversarial examples via training substitute models. Moreover, to boost the targeted attacks, we apply the idea of ensemble attacks to substitute training. Experiments on MNIST and GTSRB, two common image-classification datasets, demonstrate the effectiveness and efficiency of our boosted targeted black-box attack; we finally attack the MNIST and GTSRB classifiers with success rates of 97.7% and 92.8%, respectively.
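The Jacobian-based augmentation that this paper builds on can be sketched as follows. This is a minimal toy illustration of the baseline idea (from Papernot et al.'s substitute-training attack), not the paper's improved linear augmentation: the substitute here is a simple softmax classifier, and `labels` stands in for labels obtained by querying the black-box oracle. All variable and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def substitute_jacobian(W, x):
    """Jacobian dF/dx of the toy substitute F(x) = softmax(W @ x),
    returned as a (num_classes, num_features) matrix."""
    p = softmax(W @ x)
    # chain rule for softmax over a linear layer
    return (np.diag(p) - np.outer(p, p)) @ W

def jacobian_augment(X, labels, W, lam=0.1):
    """One augmentation round: step each sample a distance `lam` along the
    sign of the Jacobian row for its oracle-assigned label, and append the
    new points to the dataset (so it doubles each round)."""
    new = []
    for x, y in zip(X, labels):
        J = substitute_jacobian(W, x)
        new.append(x + lam * np.sign(J[y]))
    return np.vstack([X, np.array(new)])

# Toy data: 5 samples with 3 features, 2 classes.
X = rng.normal(size=(5, 3))
W = rng.normal(size=(2, 3))
labels = np.argmax(X @ W.T, axis=1)  # stand-in for oracle labels
X_aug = jacobian_augment(X, labels, W)
print(X_aug.shape)  # (10, 3): the dataset doubled
```

The augmented points probe the direction in which the substitute's output for the oracle's label changes fastest, so labeling them via the oracle and retraining pulls the substitute's decision boundary toward the target model's.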
