TY - CHAP
T1 - Robust learning algorithm for the mixture of experts
AU - Allende, Hector
AU - Torres, Romina
AU - Salas, Rodrigo
AU - Moraga, Claudio
PY - 2003
Y1 - 2003
N2 - The Mixture of Experts (ME) model is a type of modular artificial neural network (MANN) whose architecture is composed of different kinds of networks that compete to learn different aspects of the problem. This model is used when the search space is stratified. The learning algorithm of the ME model consists of estimating the network parameters to achieve a desired performance. To estimate the parameters, distributional assumptions are made, so the learning algorithm and, consequently, the estimated parameters depend on the assumed distribution. When the data contain outliers, this assumption is no longer valid and the model becomes very sensitive to the data, as shown in this work. We propose a robust learning estimator based on the M-estimator, a generalization of the maximum likelihood estimator. Finally, a simulation study is presented in which the robust estimator shows better performance than the maximum likelihood estimator (MLE).
AB - The Mixture of Experts (ME) model is a type of modular artificial neural network (MANN) whose architecture is composed of different kinds of networks that compete to learn different aspects of the problem. This model is used when the search space is stratified. The learning algorithm of the ME model consists of estimating the network parameters to achieve a desired performance. To estimate the parameters, distributional assumptions are made, so the learning algorithm and, consequently, the estimated parameters depend on the assumed distribution. When the data contain outliers, this assumption is no longer valid and the model becomes very sensitive to the data, as shown in this work. We propose a robust learning estimator based on the M-estimator, a generalization of the maximum likelihood estimator. Finally, a simulation study is presented in which the robust estimator shows better performance than the maximum likelihood estimator (MLE).
KW - Artificial Neural Networks
KW - Mixtures of Experts
KW - Robust Learning
UR - http://www.scopus.com/inward/record.url?scp=35248841469&partnerID=8YFLogxK
U2 - 10.1007/978-3-540-44871-6_3
DO - 10.1007/978-3-540-44871-6_3
M3 - Chapter
AN - SCOPUS:35248841469
SN - 3540402179
SN - 9783540402176
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 19
EP - 27
BT - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
A2 - Perales, Francisco Jose
A2 - Campilho, Aurelio J. C.
A2 - Perez de la Blanca, Nicolas
PB - Springer Verlag
ER -