Automatic data classification is an essential problem in machine learning, with applications in contexts as diverse as person detection, healthcare, and astronomy. In recent years, deep neural networks have attracted extensive attention due to their excellent performance on large and complex datasets. A neural network is a supervised classification method and therefore typically requires a set of inputs and targets for training. However, it is possible to include auxiliary outputs that characterize aspects of the object of interest, which can accelerate the learning process. For example, in an image, a person may have extra outputs corresponding to attributes such as the presence of a hat or a beard. Classical neural networks, however, do not consider explicit auxiliary outputs, which may also lie at a lower semantic level than the main target. We propose a framework that uses auxiliary outputs connected to hidden layers to complement the main output connected to the output layer of the network. The key idea is to improve the training process of a neural network through a variant of the standard backpropagation algorithm that accounts for these auxiliary outputs. The article presents experimental evidence of the advantages of the proposed idea on various real datasets. The results also point to new research avenues and practical applications in image recognition within a deep learning setting.
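To make the key idea concrete, the following is a minimal sketch (not the paper's exact formulation) of training with an auxiliary output attached to a hidden layer: the hidden layer receives error signals both from the main output and from the auxiliary head, and a weighting factor `aux_weight` (an illustrative name, not taken from the paper) balances the two losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y_main, y_aux, params, lr=0.1, aux_weight=0.5):
    """One step of the variant backpropagation: the auxiliary target is
    read directly from the hidden layer, so its gradient is added to the
    hidden-layer error signal alongside the main output's gradient."""
    W1, W2, W_aux = params["W1"], params["W2"], params["W_aux"]
    # Forward pass
    h = sigmoid(x @ W1)          # hidden layer
    y_hat = sigmoid(h @ W2)      # main output (output layer)
    a_hat = sigmoid(h @ W_aux)   # auxiliary output connected to the hidden layer
    # Squared-error deltas at each output
    d_main = (y_hat - y_main) * y_hat * (1 - y_hat)
    d_aux = aux_weight * (a_hat - y_aux) * a_hat * (1 - a_hat)
    # Hidden-layer error accumulates BOTH signals (main + auxiliary)
    d_h = (d_main @ W2.T + d_aux @ W_aux.T) * h * (1 - h)
    # Gradient-descent updates
    params["W2"] -= lr * h.T @ d_main
    params["W_aux"] -= lr * h.T @ d_aux
    params["W1"] -= lr * x.T @ d_h
    # Combined loss, for monitoring
    return (0.5 * np.sum((y_hat - y_main) ** 2)
            + aux_weight * 0.5 * np.sum((a_hat - y_aux) ** 2))

# Toy usage: XOR as the main task, OR as a lower-level auxiliary attribute
x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y_main = np.array([[0.], [1.], [1.], [0.]])
y_aux = np.array([[0.], [1.], [1.], [1.]])
params = {"W1": rng.normal(size=(2, 8)),
          "W2": rng.normal(size=(8, 1)),
          "W_aux": rng.normal(size=(8, 1))}
losses = [train_step(x, y_main, y_aux, params) for _ in range(2000)]
```

The single line `d_h = (d_main @ W2.T + d_aux @ W_aux.T) * h * (1 - h)` is where this sketch departs from standard backpropagation: the auxiliary error is injected mid-network rather than only at the output layer.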