Linear Contour Learning: A Method for Supervised Dimension Reduction

Abstract

We propose a novel approach to sufficient dimension reduction in regression, based on estimating contour directions of negligible variation for the response surface. These directions span the orthogonal complement of the minimal space relevant for the regression, and can be extracted according to a measure of the variation in the response, leading to General Contour Regression (GCR). In comparison to existing sufficient dimension reduction techniques, this contour-based methodology guarantees exhaustive estimation of the central space under ellipticity of the predictor distribution and very mild additional assumptions, while maintaining √n-consistency and computational ease. Moreover, it proves to be robust to departures from ellipticity. We also establish some useful population properties for GCR. Simulations comparing its performance with that of standard techniques such as ordinary least squares, sliced inverse regression, principal Hessian directions, and sliced average variance estimation confirm the advantages anticipated by the theoretical analyses. We also demonstrate the use of contour-based methods on a data set concerning grades of students from Massachusetts colleges.
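The core idea can be sketched in a few lines of NumPy. The snippet below is a minimal, illustrative version of simple contour regression (the building block behind GCR), not the authors' implementation: difference vectors between predictor pairs with nearly equal responses lie roughly along response contours, so after standardizing the predictors, the eigenvectors of their second-moment matrix with the *smallest* eigenvalues estimate the central subspace. The function name, the tolerance parameter `c`, and the synthetic data are all my own choices for illustration.

```python
import numpy as np

def simple_contour_regression(X, y, d, c):
    """Estimate a d-dimensional central subspace from contour directions.

    Pairs (i, j) with |y_i - y_j| <= c give difference vectors that lie
    approximately within response contours, i.e. in the orthogonal
    complement of the space relevant for the regression. Their second-
    moment matrix therefore concentrates variance in the irrelevant
    directions, and the d eigenvectors with the smallest eigenvalues
    (computed on standardized predictors) span the estimate.
    """
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    # Standardize: Z = Sigma^{-1/2} (X - mu), via the symmetric root.
    vals, vecs = np.linalg.eigh(Sigma)
    Sigma_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = (X - mu) @ Sigma_inv_half
    # Accumulate difference vectors over near-contour pairs.
    K = np.zeros((p, p))
    count = 0
    for i in range(n):
        mask = np.abs(y - y[i]) <= c
        D = Z[mask] - Z[i]          # differences to i's contour neighbors
        K += D.T @ D
        count += mask.sum()
    K /= count
    # Smallest-eigenvalue eigenvectors of K span the central subspace.
    w, V = np.linalg.eigh(K)        # eigenvalues in ascending order
    B = Sigma_inv_half @ V[:, :d]   # map back to the original X scale
    return B / np.linalg.norm(B, axis=0)

# Toy check: y depends on X only through its first coordinate.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 3))
y = X[:, 0] + 0.05 * rng.standard_normal(400)
B = simple_contour_regression(X, y, d=1, c=0.1)
```

On this toy example the recovered direction should align closely with the first coordinate axis. The full GCR procedure in the paper replaces the hard threshold `c` with a weighting scheme based on a measure of response variation, which is what yields the exhaustiveness and robustness properties stated in the abstract.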

Cite

Text

Li et al. "Linear Contour Learning: A Method for Supervised Dimension Reduction." Conference on Uncertainty in Artificial Intelligence, 2004.

Markdown

[Li et al. "Linear Contour Learning: A Method for Supervised Dimension Reduction." Conference on Uncertainty in Artificial Intelligence, 2004.](https://mlanthology.org/uai/2004/li2004uai-linear/)

BibTeX

@inproceedings{li2004uai-linear,
  title     = {{Linear Contour Learning: A Method for Supervised Dimension Reduction}},
  author    = {Li, Bing and Zha, Hongyuan and Chiaromonte, Francesca},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2004},
  pages     = {346--356},
  url       = {https://mlanthology.org/uai/2004/li2004uai-linear/}
}