Robustness analysis of deep unsupervised methods for visual anomaly detection
Unsupervised generative methods have recently attracted significant attention in the field of industrial visual anomaly detection, mainly owing to their ability to learn from non-anomalous data without requiring anomalous samples or pixel-level labels, which are costly to obtain. All of these generative methods rest on the assumption that anomalous data are always correctly identified and removed from the training set. In practice, however, correctly identifying every anomalous image can be very costly or, due to the nature of the problem, impossible. In this paper, we analyze the robustness of several recently proposed generative methods for anomaly detection by introducing anomalous data into the training process. Our analysis covers three methods and four datasets with eight categories in total. We conclude that while some methods are more robust than others, introducing a small percentage of anomalous data into the training set does not significantly deteriorate performance.