Neofytos Dimitriou and Ognjen Arandjelovic
Normalization as a layer within neural networks has over the years demonstrated its effectiveness in neural network optimization across a wide range of different tasks, with one of the most successful approaches being that of batch normalization. The con...
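For readers unfamiliar with the technique, the core computation of batch normalization can be sketched in a few lines. This is a simplified, illustrative version only: training-time running statistics and the learning of gamma and beta are omitted, and all names are ours, not the paper's.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a batch of activations to zero mean and unit variance
    per feature, then apply a learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)                  # per-feature mean over the batch
    var = x.var(axis=0)                    # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# A batch of 4 samples with 3 features each.
x = np.array([[1.0, 2.0,  3.0],
              [2.0, 4.0,  6.0],
              [3.0, 6.0,  9.0],
              [4.0, 8.0, 12.0]])
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.mean(axis=0))  # ≈ 0 per feature
print(y.std(axis=0))   # ≈ 1 per feature
```

With gamma = 1 and beta = 0 the output is simply the standardized batch; in a network both are trained so the layer can undo the normalization where that helps optimization.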
Xiaoyu Han, Chenyu Li, Zifan Wang and Guohua Liu
Neural architecture search (NAS) has shown great potential in discovering powerful and flexible network models, becoming an important branch of automatic machine learning (AutoML). Although search methods based on reinforcement learning and evolutionary ...
Leila Malihi and Gunther Heidemann
Efficient model deployment is a key focus in deep learning. This has led to the exploration of methods such as knowledge distillation and network pruning to compress models and increase their performance. In this study, we investigate the potential syner...
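Knowledge distillation, one of the two compression methods this study combines, is commonly implemented as a blend of the usual cross-entropy with a temperature-softened teacher/student divergence. A minimal NumPy sketch of that standard formulation (not the study's exact setup; all names are illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with the KL divergence between
    temperature-softened teacher and student distributions."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean()
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    # T**2 rescales the softened-gradient term to match the hard-label term.
    return alpha * (T ** 2) * kl + (1 - alpha) * ce

student = np.array([[2.0, 0.5, -1.0]])
teacher = np.array([[1.5, 1.0, -0.5]])
labels = np.array([0])
print(distillation_loss(student, teacher, labels))
```

When the student matches the teacher exactly, the KL term vanishes and only the hard-label loss remains.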
Ryota Higashimoto, Soh Yoshida and Mitsuji Muneyasu
This paper addresses the performance degradation of deep neural networks caused by learning with noisy labels. Recent research on this topic has exploited the memorization effect: networks fit data with clean labels during the early stages of learning an...
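A common way to exploit the memorization effect is small-loss sample selection: early in training, the samples the network fits with the lowest loss are treated as probably clean and kept for the next update. A generic sketch of that selection step (illustrative; this paper's specific algorithm may differ):

```python
import numpy as np

def select_small_loss(losses, noise_rate):
    """Keep the fraction (1 - noise_rate) of samples with the smallest
    per-sample loss, treating them as likely clean."""
    n_keep = int(len(losses) * (1.0 - noise_rate))
    return np.argsort(losses)[:n_keep]

# Per-sample losses for a mini-batch; high loss suggests a noisy label.
losses = np.array([0.1, 2.3, 0.4, 1.9, 0.2, 0.3])
idx = select_small_loss(losses, noise_rate=0.33)
print(sorted(idx.tolist()))  # → [0, 2, 4, 5]
```

In practice the assumed noise rate is either known, estimated, or annealed over training so that selection tightens as the network starts memorizing noise.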
Xue Xing, Chengzhong Liu, Junying Han, Quan Feng, Qinglin Lu and Yongqiang Feng
Wheat is a significant cereal for humans, with diverse varieties. The growth of the wheat industry and the protection of breeding rights can be promoted through the accurate identification of wheat varieties. To recognize wheat seeds quickly and accurate...
Viacheslav Moskalenko, Vyacheslav Kharchenko, Alona Moskalenko and Sergey Petrov
Modern trainable image recognition models are vulnerable to different types of perturbations; hence, the development of resilient intelligent algorithms for safety-critical applications remains a relevant concern to reduce the impact of perturbation on m...
Guillaume Coiffier, Ghouthi Boukli Hacene and Vincent Gripon
Deep Neural Networks are state-of-the-art in a large number of challenges in machine learning. However, to reach the best performance they require a huge pool of parameters. Indeed, typical deep convolutional architectures present an increasing number of...
Jiyue Wang, Pei Zhang, Qianhua He, Yanxiong Li and Yongjian Hu
Label Smoothing Regularization (LSR) is a widely used tool to generalize classification models by replacing the one-hot ground truth with smoothed labels. Recent research on LSR has increasingly focused on the correlation between the LSR and Knowledge Di...
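The label smoothing operation itself is simple: the one-hot target is mixed with a uniform distribution over the classes. A minimal sketch of the standard formulation (illustrative names; not this paper's variant):

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Replace one-hot targets with (1 - eps) on the true class and
    eps / num_classes spread uniformly over all classes."""
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - eps) * one_hot + eps / num_classes

y = smooth_labels(np.array([2]), num_classes=4, eps=0.1)
print(y)  # [[0.025 0.025 0.925 0.025]]
```

The smoothed targets still sum to 1 per sample, so they remain valid distributions for a cross-entropy loss.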
Wenbo Zhang, Yuchen Zhao, Fangjing Li and Hongbo Zhu
Federated learning is currently a popular distributed machine learning solution that often experiences cumbersome communication processes and challenging model convergence in practical edge deployments due to the training nature of its model information ...
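The communication round described here typically ends with an aggregation rule such as federated averaging, where the server combines client models weighted by local dataset size. A minimal sketch of that rule (illustrative; not this paper's proposal):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate client model weights, weighting each client by its
    local dataset size (the FedAvg rule)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients holding 10 and 30 samples respectively.
w1 = np.array([1.0, 1.0])
w2 = np.array([3.0, 5.0])
global_w = fed_avg([w1, w2], client_sizes=[10, 30])
print(global_w)  # weighted toward the larger client: [2.5 4.]
```

Each round, the server broadcasts `global_w` back to the clients, which is exactly the repeated model-information exchange that makes communication cost a central concern in edge deployments.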
Huoxiang Yang, Yongsheng Liang, Wei Liu and Fanyang Meng
Due to the effective guidance of prior information, feature map-based pruning methods have emerged as promising techniques for model compression. In the previous works, the undifferentiated treatment of all information on feature maps amplifies the negat...
Jihua Cui, Zhenbang Wang, Ziheng Yang and Xin Guan
As the number of layers of deep learning models increases, the number of parameters and computation increases, making it difficult to deploy on edge devices. Pruning has the potential to significantly reduce the number of parameters and computations in a...
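A simple baseline for the pruning this abstract describes is magnitude pruning, which zeroes the smallest-magnitude weights at a chosen sparsity level. A minimal sketch (illustrative; the paper's method may differ):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value; return the pruned copy and the binary mask."""
    flat = np.abs(weights).flatten()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    threshold = np.sort(flat)[k - 1]       # k-th smallest magnitude
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

w = np.array([[0.05, -0.8],
              [0.3, -0.02]])
pruned, mask = magnitude_prune(w, sparsity=0.5)
print(pruned)  # the two smallest-magnitude entries are zeroed
```

The mask is usually kept so the zeroed positions stay zero during any fine-tuning that follows pruning.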
Ilias Theodorakopoulos, Foteini Fotopoulou and George Economou
In this work, we propose a mechanism for knowledge transfer between Convolutional Neural Networks via the geometric regularization of local features produced by the activations of convolutional layers. We formulate appropriate loss functions, driving a...
Wenjing Yang, Liejun Wang, Shuli Cheng, Yongming Li and Anyu Du
Recently, deep learning to hash has extensively been applied to image retrieval, due to its low storage cost and fast query speed. However, there is a defect of insufficiency and imbalance when existing hashing methods utilize the convolutional neural ne...
Ivana Marin, Ana Kuzmanic Skelin and Tamara Grujic
The main goal of any classification or regression task is to obtain a model that will generalize well on new, previously unseen data. Due to the recent rise of deep learning and many state-of-the-art results obtained with deep models, deep learning archi...
Pei-Yin Chen and Jih-Jeng Huang
Image clustering involves the process of mapping an archive image into a cluster such that images within the same cluster share similar information. It is an important field of machine learning and computer vision. While traditional clustering methods, such as k-mea...
Adil Redaoui, Amina Belalia and Kamel Belloulata
Deep network-based hashing has gained significant popularity in recent years, particularly in the field of image retrieval. However, most existing methods only focus on extracting semantic information from the final layer, disregarding valuable structura...
Danilo Pau, Andrea Pisani and Antonio Candelieri
In the context of TinyML, many research efforts have been devoted to designing forward topologies to support On-Device Learning. Reaching this target would bring numerous advantages, including reductions in latency and computational complexity, stronger ...
Joseph Pedersen, Rafael Muñoz-Gómez, Jiangnan Huang, Haozhe Sun, Wei-Wei Tu and Isabelle Guyon
We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks, in both the black-box and white-box setting, when the trainer and the trained model are publicly released...
Juan Manuel Fortuna-Cervantes, Marco Tulio Ramírez-Torres, Marcela Mejía-Carlos, José Salomé Murguía, José Martinez-Carranza, Carlos Soubervielle-Montalvo and César Arturo Guerra-García
Convolutional Neural Networks (CNNs) have recently been proposed as a solution in texture and material classification in computer vision. However, inside CNNs, the internal pooling layers often cause a loss of information and are therefore detrimenta...
Huynh Cong Viet Ngu and Keon Myung Lee
Due to energy efficiency, spiking neural networks (SNNs) have gradually been considered as an alternative to convolutional neural networks (CNNs) in various machine learning tasks. In image recognition tasks, leveraging the superior capability of CNNs, t...
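CNN-to-SNN conversion approaches typically map ReLU activations to spike rates. A toy integrate-and-fire neuron shows why a firing rate can approximate an activation value (illustrative only; not this paper's conversion procedure):

```python
import numpy as np

def rate_code(activation, threshold=1.0, timesteps=100):
    """Integrate-and-fire neuron: a constant input accumulates on the
    membrane; each threshold crossing emits a spike and subtracts the
    threshold. The resulting spike rate approximates the input value."""
    membrane, spikes = 0.0, 0
    for _ in range(timesteps):
        membrane += activation
        if membrane >= threshold:
            spikes += 1
            membrane -= threshold
    return spikes / timesteps

print(rate_code(0.3))   # ≈ 0.3 spikes per timestep
print(rate_code(0.75))  # ≈ 0.75
```

Subtracting the threshold (rather than resetting the membrane to zero) preserves the residual charge, which is what keeps the rate close to the input over long windows.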