Embedded local feature selection within mixture of experts

Billy Peralta, Alvaro Soto

Research output: Article

27 Citations (Scopus)

Abstract

A useful strategy to deal with complex classification scenarios is the "divide and conquer" approach. The mixture of experts (MoE) technique makes use of this strategy by jointly training a set of classifiers, or experts, that are specialized in different regions of the input space. A global model, or gate function, complements the experts by learning a function that weighs their relevance in different parts of the input space. Local feature selection appears as an attractive alternative to improve the specialization of experts and gate function, particularly in the case of high-dimensional data. In general, different subsets of dimensions, or subspaces, are more appropriate to classify instances located in different regions of the input space. Accordingly, this work contributes a regularized variant of MoE that incorporates an embedded process for local feature selection using L1 regularization. Experiments using artificial and real-world datasets provide evidence that the proposed method improves on the classical MoE technique in terms of accuracy and sparseness of the solution. Furthermore, our results indicate that the advantages of the proposed technique increase with the dimensionality of the data.
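
As a rough illustration of the approach described in the abstract, a mixture of K experts combines local classifiers through a softmax gate, and embedded local feature selection can be obtained by placing L1 penalties on both the expert and the gate parameters. The sketch below is only a generic form under these assumptions (the symbols K, g_i, \theta_i, \nu_i and a single penalty weight \lambda are illustrative); the exact objective used in the article may differ.

    P(y \mid x) \;=\; \sum_{i=1}^{K} g_i(x)\, P(y \mid x, \theta_i),
    \qquad
    g_i(x) \;=\; \frac{\exp(\nu_i^{\top} x)}{\sum_{j=1}^{K} \exp(\nu_j^{\top} x)}

    \max_{\{\theta_i\},\{\nu_i\}} \;\;
    \sum_{n=1}^{N} \log \sum_{i=1}^{K} g_i(x_n)\, P(y_n \mid x_n, \theta_i)
    \;-\; \lambda \sum_{i=1}^{K} \big( \lVert \theta_i \rVert_1 + \lVert \nu_i \rVert_1 \big)

The L1 terms drive individual coordinates of each \theta_i and \nu_i to zero, so each expert (and the gate) effectively operates on its own subset of the input features.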

Original language: English
Pages (from-to): 176-187
Number of pages: 12
Journal: Information Sciences
Volume: 269
DOI: 10.1016/j.ins.2014.01.008
Status: Published - 10 Jun 2014

Fingerprint

Mixture of Experts
Local Features
Feature Selection
Feature extraction
Divide and conquer
High-dimensional Data
Specialization
Dimensionality
Regularization
Classifiers
Complement
Classify
Classifier
Subspace
Scenarios
Subset
Alternatives
Experiment
Feature selection
Experiments

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Theoretical Computer Science
  • Computer Science Applications
  • Information Systems and Management
  • Artificial Intelligence

Cite this

@article{3c73a25225f14b05a6af42b316139d70,
title = "Embedded local feature selection within mixture of experts",
abstract = "A useful strategy to deal with complex classification scenarios is the {"}divide and conquer{"} approach. The mixture of experts (MoE) technique makes use of this strategy by jointly training a set of classifiers, or experts, that are specialized in different regions of the input space. A global model, or gate function, complements the experts by learning a function that weighs their relevance in different parts of the input space. Local feature selection appears as an attractive alternative to improve the specialization of experts and gate function, particularly, in the case of high dimensional data. In general, subsets of dimensions, or subspaces, are usually more appropriate to classify instances located in different regions of the input space. Accordingly, this work contributes with a regularized variant of MoE that incorporates an embedded process for local feature selection using L1 regularization. Experiments using artificial and real-world datasets provide evidence that the proposed method improves the classical MoE technique, in terms of accuracy and sparseness of the solution. Furthermore, our results indicate that the advantages of the proposed technique increase with the dimensionality of the data.",
keywords = "Embedded feature selection, Local feature selection, Mixture of experts, Regularization",
author = "Billy Peralta and Alvaro Soto",
year = "2014",
month = "6",
day = "10",
doi = "10.1016/j.ins.2014.01.008",
language = "English",
volume = "269",
pages = "176--187",
journal = "Information Sciences",
issn = "0020-0255",
publisher = "Elsevier Inc.",

}

Embedded local feature selection within mixture of experts. / Peralta, Billy; Soto, Alvaro.

In: Information Sciences, Vol. 269, 10.06.2014, p. 176-187.

Research output: Article

TY - JOUR

T1 - Embedded local feature selection within mixture of experts

AU - Peralta, Billy

AU - Soto, Alvaro

PY - 2014/6/10

Y1 - 2014/6/10

N2 - A useful strategy to deal with complex classification scenarios is the "divide and conquer" approach. The mixture of experts (MoE) technique makes use of this strategy by jointly training a set of classifiers, or experts, that are specialized in different regions of the input space. A global model, or gate function, complements the experts by learning a function that weighs their relevance in different parts of the input space. Local feature selection appears as an attractive alternative to improve the specialization of experts and gate function, particularly, in the case of high dimensional data. In general, subsets of dimensions, or subspaces, are usually more appropriate to classify instances located in different regions of the input space. Accordingly, this work contributes with a regularized variant of MoE that incorporates an embedded process for local feature selection using L1 regularization. Experiments using artificial and real-world datasets provide evidence that the proposed method improves the classical MoE technique, in terms of accuracy and sparseness of the solution. Furthermore, our results indicate that the advantages of the proposed technique increase with the dimensionality of the data.

AB - A useful strategy to deal with complex classification scenarios is the "divide and conquer" approach. The mixture of experts (MoE) technique makes use of this strategy by jointly training a set of classifiers, or experts, that are specialized in different regions of the input space. A global model, or gate function, complements the experts by learning a function that weighs their relevance in different parts of the input space. Local feature selection appears as an attractive alternative to improve the specialization of experts and gate function, particularly, in the case of high dimensional data. In general, subsets of dimensions, or subspaces, are usually more appropriate to classify instances located in different regions of the input space. Accordingly, this work contributes with a regularized variant of MoE that incorporates an embedded process for local feature selection using L1 regularization. Experiments using artificial and real-world datasets provide evidence that the proposed method improves the classical MoE technique, in terms of accuracy and sparseness of the solution. Furthermore, our results indicate that the advantages of the proposed technique increase with the dimensionality of the data.

KW - Embedded feature selection

KW - Local feature selection

KW - Mixture of experts

KW - Regularization

UR - http://www.scopus.com/inward/record.url?scp=84897053423&partnerID=8YFLogxK

U2 - 10.1016/j.ins.2014.01.008

DO - 10.1016/j.ins.2014.01.008

M3 - Article

VL - 269

SP - 176

EP - 187

JO - Information Sciences

JF - Information Sciences

SN - 0020-0255

ER -