Learning Using Unselected Features (LUFe)

Abstract

Feature selection has been studied in machine learning and data mining for many years, and is a valuable way to improve classification accuracy while reducing model complexity. The two main classes of feature selection methods, filter and wrapper, discard the features that are not selected and exclude them from the predictive model. We propose that these unselected features may instead be used as an additional source of information at training time. We describe a strategy called Learning using Unselected Features (LUFe) that allows selected and unselected features to serve different functions in classification. In this framework, selected features are used directly to set the decision boundary, and unselected features are utilised in a secondary role, with no additional cost at test time. Our empirical results on 49 textual datasets show that LUFe can improve classification performance in comparison with standard wrapper and filter feature selection.
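To make the data flow concrete, here is a minimal, hypothetical sketch of the setup the abstract describes: a filter-style selector splits the features into selected and unselected sets, the primary classifier trains and predicts on the selected set only, and the unselected set is retained solely at train time. The correlation filter, the toy nearest-centroid learner, and all names below are illustrative assumptions, not the paper's implementation; in LUFe proper the unselected features would feed a secondary train-time learner.

```python
# Hypothetical LUFe-style data flow (NOT the authors' implementation).
from statistics import mean

def correlation_filter(X, y, k):
    """Rank features by |correlation with labels|; return (selected, unselected) index lists."""
    n_feat = len(X[0])
    my = mean(y)
    scores = []
    for j in range(n_feat):
        col = [row[j] for row in X]
        mx = mean(col)
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        denom = (sum((a - mx) ** 2 for a in col) * sum((b - my) ** 2 for b in y)) ** 0.5
        scores.append(abs(cov / denom) if denom else 0.0)
    order = sorted(range(n_feat), key=lambda j: -scores[j])
    return order[:k], order[k:]

def project(X, idx):
    """Keep only the columns listed in idx."""
    return [[row[j] for j in idx] for row in X]

class CentroidClassifier:
    """Toy stand-in for the primary learner: nearest class centroid."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            rows = [x for x, t in zip(X, y) if t == label]
            self.centroids[label] = [mean(col) for col in zip(*rows)]
        return self
    def predict(self, X):
        def dist(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b))
        return [min(self.centroids, key=lambda c: dist(x, self.centroids[c]))
                for x in X]

# Train time: both feature sets are available; test time uses only the selected set.
X = [[1.0, 0.2, 5.0], [0.9, 0.1, 4.0], [0.1, 0.9, 5.1], [0.2, 0.8, 4.2]]
y = [0, 0, 1, 1]
sel, unsel = correlation_filter(X, y, k=2)
clf = CentroidClassifier().fit(project(X, sel), y)
# project(X, unsel) would be handed to the secondary, train-time-only learner here,
# so prediction incurs no extra cost at test time:
print(clf.predict(project(X, sel)))  # → [0, 0, 1, 1]
```

The key property the abstract claims is visible in the last two lines: the unselected columns never enter `predict`, so using them in a secondary training role adds no test-time cost.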

Cite

Text

Taylor et al. "Learning Using Unselected Features (LUFe)." International Joint Conference on Artificial Intelligence, 2016.

Markdown

[Taylor et al. "Learning Using Unselected Features (LUFe)." International Joint Conference on Artificial Intelligence, 2016.](https://mlanthology.org/ijcai/2016/taylor2016ijcai-learning/)

BibTeX

@inproceedings{taylor2016ijcai-learning,
  title     = {{Learning Using Unselected Features (LUFe)}},
  author    = {Taylor, Joseph G. and Sharmanska, Viktoriia and Kersting, Kristian and Weir, David and Quadrianto, Novi},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {2060--2066},
  url       = {https://mlanthology.org/ijcai/2016/taylor2016ijcai-learning/}
}