TY - GEN
T1 - A Proposal of Neural Networks with Intermediate Outputs
AU - Peralta, Billy
AU - Reyes, Juan
AU - Caro, Luis
AU - Pieringer, Christian
PY - 2019/1/1
Y1 - 2019/1/1
N2 - Automatic data classification is an essential problem in machine learning, and it applies to contexts such as people detection, healthcare, or astronomy. In recent years, deep neural networks have gained extensive attention due to their excellent performance on large and complex datasets. A neural network is a supervised classification method and therefore typically requires a set of inputs and targets for the training process. However, it is possible to include auxiliary outputs that characterize aspects of the object of interest, which can accelerate the learning process. For example, in an image, a person may have extra outputs such as attributes indicating the presence of a hat or a beard. Classical neural networks, however, do not consider explicit auxiliary outputs; furthermore, these outputs might be at a lower semantic level. We propose a framework that uses auxiliary outputs connected to hidden layers to complement the main output connected to the output layer of the network. The key idea is to improve the training process of a neural network through a variant of the standard backpropagation algorithm that takes these auxiliary outputs into account. The article presents experimental evidence of the advantages of the proposed idea on various real datasets. The results also point to new research avenues and practical applications in image recognition within a deep learning setting.
AB - Automatic data classification is an essential problem in machine learning, and it applies to contexts such as people detection, healthcare, or astronomy. In recent years, deep neural networks have gained extensive attention due to their excellent performance on large and complex datasets. A neural network is a supervised classification method and therefore typically requires a set of inputs and targets for the training process. However, it is possible to include auxiliary outputs that characterize aspects of the object of interest, which can accelerate the learning process. For example, in an image, a person may have extra outputs such as attributes indicating the presence of a hat or a beard. Classical neural networks, however, do not consider explicit auxiliary outputs; furthermore, these outputs might be at a lower semantic level. We propose a framework that uses auxiliary outputs connected to hidden layers to complement the main output connected to the output layer of the network. The key idea is to improve the training process of a neural network through a variant of the standard backpropagation algorithm that takes these auxiliary outputs into account. The article presents experimental evidence of the advantages of the proposed idea on various real datasets. The results also point to new research avenues and practical applications in image recognition within a deep learning setting.
KW - Data classification
KW - Neural network
KW - Supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85076090692&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-31332-6_18
DO - 10.1007/978-3-030-31332-6_18
M3 - Conference contribution
AN - SCOPUS:85076090692
SN - 9783030313319
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 206
EP - 215
BT - Pattern Recognition and Image Analysis - 9th Iberian Conference, IbPRIA 2019, Proceedings
A2 - Morales, Aythami
A2 - Fierrez, Julian
A2 - Sánchez, José Salvador
A2 - Ribeiro, Bernardete
PB - Springer
T2 - 9th Iberian Conference on Pattern Recognition and Image Analysis, IbPRIA 2019
Y2 - 1 July 2019 through 4 July 2019
ER -
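
The abstract's central idea, auxiliary outputs attached to hidden layers and trained jointly with the main output, can be illustrated with a minimal sketch. This is a generic PyTorch auxiliary-head setup assuming hypothetical names (AuxOutputNet, train_step, aux_weight); it shows the general technique only and is not the authors' specific backpropagation variant from the paper.

# Minimal sketch (assumption): a feed-forward network whose first hidden
# layer feeds an auxiliary attribute head, while the last hidden layer
# feeds the main classification head. Both losses are combined, so
# gradients from the auxiliary output also update the shared layers.
import torch
import torch.nn as nn

class AuxOutputNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes, n_aux_attrs):
        super().__init__()
        self.hidden1 = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.hidden2 = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        # Main output connected to the output layer of the network.
        self.main_head = nn.Linear(hidden_dim, n_classes)
        # Auxiliary output connected to an intermediate (hidden) layer,
        # e.g. lower-level attributes such as "wears a hat" or "has a beard".
        self.aux_head = nn.Linear(hidden_dim, n_aux_attrs)

    def forward(self, x):
        h1 = self.hidden1(x)
        h2 = self.hidden2(h1)
        return self.main_head(h2), self.aux_head(h1)

def train_step(model, optimizer, x, y_main, y_aux, aux_weight=0.3):
    """One gradient step on the combined main + auxiliary objective."""
    optimizer.zero_grad()
    main_logits, aux_logits = model(x)
    loss = nn.functional.cross_entropy(main_logits, y_main)
    # Binary attribute targets for the auxiliary head (assumption).
    loss = loss + aux_weight * nn.functional.binary_cross_entropy_with_logits(
        aux_logits, y_aux.float())
    loss.backward()  # gradients from both heads flow into the shared layers
    optimizer.step()
    return loss.item()

In this sketch the auxiliary loss acts as an extra, lower-level training signal on the first hidden layer; the aux_weight factor controlling its contribution is an illustrative choice, not a value taken from the paper.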