Abstract
Deep learning models belong to the family of artificial neural networks and, as such, suffer from catastrophic interference when trained sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that alleviates the catastrophic forgetting problem when learning new classes. In this method, we introduce a novel detection model that automatically detects samples of new classes, and an adversarial autoencoder is used to produce samples of previously learned classes. The system consists of three main modules: a classifier implemented as a deep convolutional neural network, a generator based on an adversarial autoencoder, and a novelty-detection module implemented with an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN retains previous knowledge while forgetting each learning sequence only gradually, at a rate of about 7% per training step. Moreover, SIGANN detects new classes hidden in the data with a median accuracy of 43% and can therefore proceed with incremental class learning.
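The core pseudo-rehearsal idea described above — mixing generated samples of previously learned classes with real samples of a newly detected class, so that one training step sees both distributions — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generator_sample` is a hypothetical stand-in for the adversarial autoencoder's decoder, and all names, shapes, and the replay ratio are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator_sample(n, n_features=4):
    """Stand-in for the adversarial autoencoder's decoder: draws
    pseudo-samples meant to represent previously learned classes.
    (Here just Gaussian noise, purely for illustration.)"""
    return rng.normal(0.0, 1.0, size=(n, n_features))

def pseudorehearsal_batch(new_x, new_y, old_labels, replay_ratio=1.0):
    """Mix real samples of the new class with generated replays of the
    old classes, so a single gradient step covers both and the
    classifier is not overwritten by the new data alone."""
    n_replay = int(len(new_x) * replay_ratio)
    replay_x = generator_sample(n_replay, new_x.shape[1])
    replay_y = rng.choice(old_labels, size=n_replay)
    batch_x = np.concatenate([new_x, replay_x])
    batch_y = np.concatenate([new_y, replay_y])
    perm = rng.permutation(len(batch_x))  # shuffle real and replayed
    return batch_x[perm], batch_y[perm]

# New class "2" arrives; classes 0 and 1 were learned earlier.
x_new = rng.normal(2.0, 1.0, size=(8, 4))
y_new = np.full(8, 2)
bx, by = pseudorehearsal_batch(x_new, y_new, old_labels=[0, 1])
print(bx.shape)  # (16, 4): 8 real new-class samples + 8 replays
```

In the full system the replayed batch would be fed to the CNN classifier's usual training step, and the novelty-detection module (OpenMax) decides when a batch contains an unseen class and triggers this rehearsal cycle.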
| Original language | English |
| --- | --- |
| Article number | 206 |
| Journal | Algorithms |
| Volume | 12 |
| No. | 10 |
| DOI | 10.3390/a12100206 |
| Status | Published - 1 Jan 2019 |
ASJC Scopus subject areas
- Theoretical Computer Science
- Numerical Analysis
- Computational Theory and Mathematics
- Computational Mathematics
Cite this
Self-improving generative artificial neural network for pseudorehearsal incremental class learning. / Mellado, Diego; Saavedra, Carolina; Chabert, Steren; Torres, Romina; Salas, Rodrigo.
In: Algorithms, Vol. 12, No. 10, 206, 01.01.2019.
Research output: Article
TY - JOUR
T1 - Self-improving generative artificial neural network for pseudorehearsal incremental class learning
AU - Mellado, Diego
AU - Saavedra, Carolina
AU - Chabert, Steren
AU - Torres, Romina
AU - Salas, Rodrigo
PY - 2019/1/1
Y1 - 2019/1/1
N2 - Deep learning models belong to the family of artificial neural networks and, as such, suffer from catastrophic interference when trained sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that alleviates the catastrophic forgetting problem when learning new classes. In this method, we introduce a novel detection model that automatically detects samples of new classes, and an adversarial autoencoder is used to produce samples of previously learned classes. The system consists of three main modules: a classifier implemented as a deep convolutional neural network, a generator based on an adversarial autoencoder, and a novelty-detection module implemented with an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN retains previous knowledge while forgetting each learning sequence only gradually, at a rate of about 7% per training step. Moreover, SIGANN detects new classes hidden in the data with a median accuracy of 43% and can therefore proceed with incremental class learning.
AB - Deep learning models belong to the family of artificial neural networks and, as such, suffer from catastrophic interference when trained sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that alleviates the catastrophic forgetting problem when learning new classes. In this method, we introduce a novel detection model that automatically detects samples of new classes, and an adversarial autoencoder is used to produce samples of previously learned classes. The system consists of three main modules: a classifier implemented as a deep convolutional neural network, a generator based on an adversarial autoencoder, and a novelty-detection module implemented with an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN retains previous knowledge while forgetting each learning sequence only gradually, at a rate of about 7% per training step. Moreover, SIGANN detects new classes hidden in the data with a median accuracy of 43% and can therefore proceed with incremental class learning.
KW - Artificial neural networks
KW - Catastrophic interference
KW - Deep learning
KW - Generative neural networks
KW - Incremental learning
KW - Novelty detection
UR - http://www.scopus.com/inward/record.url?scp=85074374657&partnerID=8YFLogxK
U2 - 10.3390/a12100206
DO - 10.3390/a12100206
M3 - Article
AN - SCOPUS:85074374657
VL - 12
JO - Algorithms
JF - Algorithms
SN - 1999-4893
IS - 10
M1 - 206
ER -