Brain programming is immune to adversarial attacks: Towards accurate and robust image classification using symbolic learning

abstract

  • In recent years, security concerns have grown over the vulnerability of deep convolutional neural networks to adversarial attacks: slight modifications to the input image, almost invisible to human vision, that make their predictions untrustworthy. Therefore, any new classifier should provide robustness to adversarial examples while maintaining an accurate score. In this work, we perform a comparative study of the effects of these attacks on the complex problem of art media categorization, which involves a sophisticated analysis of features to classify a fine collection of artworks. We tested a prevailing bag-of-visual-words approach from computer vision, four deep convolutional neural networks (AlexNet, VGG, ResNet, ResNet101), and brain programming. The results showed that the change in brain programming's prediction accuracy was below 2% on adversarial examples from the fast gradient sign method. With a multiple-pixel attack, brain programming kept four out of seven classes unchanged and classified the rest with a maximum error of 4%. Finally, brain programming kept four categories unchanged under adversarial patches, with an accuracy variation of 1% for the remaining three classes. The statistical analysis confirmed that the confidence of brain programming's predictions was not significantly different for each pair of clean and adversarial examples in every experiment. These results demonstrate brain programming's robustness against adversarial examples, compared to deep convolutional neural networks and the computer vision method, for the art media categorization problem. © 2022 Elsevier B.V.
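
  For reference, the fast gradient sign method cited above perturbs a clean input along the sign of the loss gradient. A standard formulation (using the usual FGSM notation, with perturbation budget \epsilon, loss J, model parameters \theta, input x, and label y, none of which are defined in this record) is:

  x_{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\!\left(\nabla_{x} J(\theta, x, y)\right)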

publication date

  • 2022-01-01