Self-improving generative artificial neural network for pseudorehearsal incremental class learning

Diego Mellado, Carolina Saavedra, Steren Chabert, Romina Torres, Rodrigo Salas

Research output: Article

Abstract

Deep learning models are part of the family of artificial neural networks and, as such, they suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novelty detection model that automatically detects samples of new classes, and an adversarial autoencoder is used to produce samples of previous classes. The system consists of three main modules: a classifier module implemented using a Deep Convolutional Neural Network, a generator module based on an adversarial autoencoder, and a novelty-detection module implemented using an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN can retain previous knowledge while exhibiting gradual forgetting of each learning sequence at a rate of about 7% per training step. Moreover, SIGANN can detect new classes hidden in the data with a median accuracy of 43% and, therefore, proceed with incremental class learning.
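The abstract outlines a three-stage loop: flag novel samples, generate replays of previous classes with the adversarial autoencoder, and retrain the classifier on the mixed batch. The following PyTorch sketch illustrates the pseudorehearsal idea only; it is not the authors' implementation. All module sizes, the untrained decoder stub, the random replay labels, and the confidence-threshold novelty gate are illustrative assumptions (OpenMax proper fits Weibull models to penultimate-layer activation distances rather than thresholding softmax confidence, and SIGANN's decoder is trained so that generated samples carry their class).

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    # Stand-in for the Deep Convolutional Neural Network classifier module.
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.fc = nn.Linear(32 * 7 * 7, n_classes)  # 28x28 EMNIST inputs

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def expand_head(model, n_new):
    # Grow the output layer in place so newly detected classes fit,
    # copying the weights already learned for the old classes.
    old = model.fc
    grown = nn.Linear(old.in_features, old.out_features + n_new)
    with torch.no_grad():
        grown.weight[:old.out_features] = old.weight
        grown.bias[:old.out_features] = old.bias
    model.fc = grown

def looks_novel(logits, threshold=0.5):
    # Crude novelty gate: flag samples whose top softmax probability is
    # low. Placeholder for the OpenMax layer, which instead rescales
    # activations using Weibull fits over distances to class means.
    return F.softmax(logits, dim=1).max(dim=1).values < threshold

def pseudorehearsal_step(model, decoder, opt, new_x, new_y, old_labels):
    # Mix real new-class samples with generated "memories" of previous
    # classes, then take one gradient step on the combined batch.
    z = torch.randn(new_x.size(0), 32)            # latent codes
    with torch.no_grad():
        replay_x = decoder(z)                     # pseudo-samples of old classes
    # Random old labels are purely illustrative; the real generator
    # produces class-consistent samples.
    replay_y = old_labels[torch.randint(len(old_labels), (new_x.size(0),))]
    x, y = torch.cat([new_x, replay_x]), torch.cat([new_y, replay_y])
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: ten known classes, two novel ones detected in the stream.
model = SmallCNN(n_classes=10)
decoder = nn.Sequential(nn.Linear(32, 784), nn.Tanh(),
                        nn.Unflatten(1, (1, 28, 28)))  # untrained stand-in
expand_head(model, n_new=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
new_x, new_y = torch.randn(8, 1, 28, 28), torch.randint(10, 12, (8,))
pseudorehearsal_step(model, decoder, opt, new_x, new_y, torch.arange(10))

Note the head expansion happens before the optimizer is created, so the new output weights are also updated; this ordering matters in any incremental-class setup of this kind.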

Original language: English
Article number: 206
Journal: Algorithms
Volume: 12
Issue: 10
DOI: 10.3390/a12100206
Status: Published - 1 Jan 2019

Fingerprint

Artificial Neural Network
Neural Networks
Module
Novelty Detection
Incremental Learning
Activation Function
Classifier
Class
Learning
Model
Interference
Generator
Simulation

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Numerical Analysis
  • Computational Theory and Mathematics
  • Computational Mathematics

Cite this

Mellado, Diego; Saavedra, Carolina; Chabert, Steren; Torres, Romina; Salas, Rodrigo. / Self-improving generative artificial neural network for pseudorehearsal incremental class learning. In: Algorithms. 2019; Vol. 12, No. 10.
@article{7e5b65448d164c35a53ba925a47a7737,
title = "Self-improving generative artificial neural network for pseudorehearsal incremental class learning",
abstract = "Deep learning models are part of the family of artificial neural networks and, as such, they suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novelty detection model that automatically detects samples of new classes, and an adversarial autoencoder is used to produce samples of previous classes. The system consists of three main modules: a classifier module implemented using a Deep Convolutional Neural Network, a generator module based on an adversarial autoencoder, and a novelty-detection module implemented using an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN can retain previous knowledge while exhibiting gradual forgetting of each learning sequence at a rate of about 7{\%} per training step. Moreover, SIGANN can detect new classes hidden in the data with a median accuracy of 43{\%} and, therefore, proceed with incremental class learning.",
keywords = "Artificial neural networks, Catastrophic interference, Deep learning, Generative neural networks, Incremental learning, Novelty detection",
author = "Diego Mellado and Carolina Saavedra and Steren Chabert and Romina Torres and Rodrigo Salas",
year = "2019",
month = "1",
day = "1",
doi = "10.3390/a12100206",
language = "English",
volume = "12",
journal = "Algorithms",
issn = "1999-4893",
publisher = "MDPI AG",
number = "10",
}

Self-improving generative artificial neural network for pseudorehearsal incremental class learning. / Mellado, Diego; Saavedra, Carolina; Chabert, Steren; Torres, Romina; Salas, Rodrigo.

In: Algorithms, Vol. 12, No. 10, 206, 01.01.2019.

Research output: Article

TY - JOUR
T1 - Self-improving generative artificial neural network for pseudorehearsal incremental class learning
AU - Mellado, Diego
AU - Saavedra, Carolina
AU - Chabert, Steren
AU - Torres, Romina
AU - Salas, Rodrigo
PY - 2019/1/1
Y1 - 2019/1/1
N2 - Deep learning models are part of the family of artificial neural networks and, as such, they suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novelty detection model that automatically detects samples of new classes, and an adversarial autoencoder is used to produce samples of previous classes. The system consists of three main modules: a classifier module implemented using a Deep Convolutional Neural Network, a generator module based on an adversarial autoencoder, and a novelty-detection module implemented using an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN can retain previous knowledge while exhibiting gradual forgetting of each learning sequence at a rate of about 7% per training step. Moreover, SIGANN can detect new classes hidden in the data with a median accuracy of 43% and, therefore, proceed with incremental class learning.
AB - Deep learning models are part of the family of artificial neural networks and, as such, they suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novelty detection model that automatically detects samples of new classes, and an adversarial autoencoder is used to produce samples of previous classes. The system consists of three main modules: a classifier module implemented using a Deep Convolutional Neural Network, a generator module based on an adversarial autoencoder, and a novelty-detection module implemented using an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN can retain previous knowledge while exhibiting gradual forgetting of each learning sequence at a rate of about 7% per training step. Moreover, SIGANN can detect new classes hidden in the data with a median accuracy of 43% and, therefore, proceed with incremental class learning.
KW - Artificial neural networks
KW - Catastrophic interference
KW - Deep learning
KW - Generative neural networks
KW - Incremental learning
KW - Novelty detection
UR - http://www.scopus.com/inward/record.url?scp=85074374657&partnerID=8YFLogxK
U2 - 10.3390/a12100206
DO - 10.3390/a12100206
M3 - Article
AN - SCOPUS:85074374657
VL - 12
JO - Algorithms
JF - Algorithms
SN - 1999-4893
IS - 10
M1 - 206
ER -