dc.contributor.author: Pontes-Filho, Sidney
dc.contributor.author: Liwicki, Marcus
dc.date.accessioned: 2020-02-05T10:33:58Z
dc.date.accessioned: 2020-03-06T10:41:32Z
dc.date.available: 2020-02-05T10:33:58Z
dc.date.available: 2020-03-06T10:41:32Z
dc.date.issued: 2019
dc.identifier.citation: Pontes-Filho, S., Liwicki, M.: Bidirectional Learning for Robust Neural Networks. In: 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, pp. 1-8
dc.identifier.isbn: 978-1-7281-1985-4
dc.identifier.issn: 2161-4393
dc.identifier.issn: 2161-4407
dc.identifier.uri: https://hdl.handle.net/10642/8231
dc.description.abstract: A multilayer perceptron can behave as a generative classifier by applying bidirectional learning (BL). BL consists of training an undirected neural network to map input to output and vice versa; it therefore yields a classifier in one direction and a generator in the opposite direction for the same data. The learning process of BL tries to reproduce the neuroplasticity described by Hebbian theory using only backward propagation of errors. In this paper, we independently introduce two learning techniques that use BL to improve robustness to white noise static and adversarial examples. The first method is bidirectional propagation of errors, in which error propagation occurs in both the backward and forward directions. Motivated by the fact that its generative model receives a constant vector per class as input, we introduce as a second method the novel hybrid adversarial networks (HAN). Its generative model receives a random vector as input, and its training is based on generative adversarial networks (GAN). To assess the performance of BL, we perform experiments using several architectures with fully connected and convolutional layers, with and without bias. Experimental results show that both methods improve robustness to white noise static and adversarial examples and can even increase accuracy, but they behave differently depending on the architecture and task, so one or the other may be the better choice for a given setting. Nevertheless, HAN with a convolutional architecture and batch normalization presents outstanding robustness, reaching state-of-the-art accuracy on adversarial examples of handwritten digits.
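Note on the abstract: the following is a minimal sketch of one BL training step in PyTorch, in which a single weight-tied layer is trained as a classifier in the forward direction and as a generator in the backward direction. The layer sizes, sigmoid output, equal loss weighting, and optimizer are illustrative assumptions, not the authors' published configuration.

    # Minimal sketch of one bidirectional learning (BL) step as the abstract
    # describes it: the same weights are trained to map input -> label
    # (classifier) and label -> input (generator). All hyperparameters are
    # illustrative assumptions, not the paper's configuration.
    import torch
    import torch.nn.functional as F

    in_dim, n_classes = 784, 10                 # e.g. flattened 28x28 digits
    W = torch.nn.Parameter(0.01 * torch.randn(n_classes, in_dim))
    b = torch.nn.Parameter(torch.zeros(n_classes))
    opt = torch.optim.Adam([W, b], lr=1e-3)

    def classify(x):
        # Forward direction: input -> class logits.
        return x @ W.t() + b

    def generate(y_onehot):
        # Backward direction: class code -> reconstructed input,
        # reusing the transposed classifier weights (weight tying).
        return torch.sigmoid(y_onehot @ W)

    def bl_step(x, y):
        # Propagate errors for both directions through the shared weights.
        y_onehot = F.one_hot(y, n_classes).float()
        loss = F.cross_entropy(classify(x), y) + F.mse_loss(generate(y_onehot), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Usage with a random batch standing in for real data:
    x, y = torch.rand(32, in_dim), torch.randint(0, n_classes, (32,))
    print(bl_step(x, y))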
dc.description.sponsorship: Partially funded by the Norwegian Research Council under the SOCRATES project (grant number 270961).
dc.language.iso: en
dc.relation.ispartof: 2019 International Joint Conference on Neural Networks (IJCNN)
dc.relation.ispartofseries: Neural Networks (IJCNN), International Joint Conference on; 2019 International Joint Conference on Neural Networks (IJCNN)
dc.relation.uri: https://ieeexplore.ieee.org/document/8852120
dc.rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. S. Pontes-Filho and M. Liwicki, "Bidirectional Learning for Robust Neural Networks," 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 2019, pp. 1-8. DOI: https://dx.doi.org/10.1109/IJCNN.2019.8852120
dc.subject: Adversarial example defense
dc.subject: Hebbian theories
dc.subject: Noise defense
dc.subject: Bidirectional learning
dc.subject: Hybrid neural networks
dc.title: Bidirectional Learning for Robust Neural Networks
dc.type: Chapter
dc.type: Peer reviewed
dc.date.updated: 2020-02-05T10:33:58Z
dc.description.version: acceptedVersion
dc.identifier.doi: https://dx.doi.org/10.1109/IJCNN.2019.8852120
dc.identifier.cristin: 1735344
dc.source.journal: Neural Networks (IJCNN), International Joint Conference on
dc.source.isbn: 978-1-7281-1985-4

