Abstract
With the growing use of machine learning systems, which today are, from a practical point of view, regarded as artificial intelligence systems, attention to the reliability (robustness) of such systems and solutions is growing as well. For critical applications, such as systems that make decisions in real time, robustness is naturally the most important property from the standpoint of practical use; in fact, the robustness assessment determines the very possibility of deploying machine learning in such systems. This is reflected in the large number of works devoted to assessing the robustness of machine learning systems, to the architecture of such systems, and to protecting machine learning systems from malicious actions that can affect their operation. At the same time, it must be understood that robustness problems can arise both naturally, due to a difference between the data distributions at the training and deployment stages (during training, the model sees only a sample from the general population), and as a result of targeted actions (attacks on machine learning systems). Such attacks can be directed both at the data and at the models themselves.