Ranking Under Uncertainty
Abstract
Ranking objects is a simple and natural procedure for organizing data. It is often performed by assigning a quality score to each object according to its relevance to the problem at hand. Ranking is widely used for object selection when resources are limited and it is necessary to select a subset of the most relevant objects for further processing. In real-world situations, the objects' scores are often calculated from noisy measurements, casting doubt on the reliability of the ranking. We introduce an analytical method for assessing the influence of noise levels on ranking reliability. We use two similarity measures for reliability evaluation, Top-K-List overlap and Kendall's tau, and show that the former is much more sensitive to noise than the latter. We apply our method to gene selection in a series of microarray experiments on several cancer types. The results indicate that the reliability of the lists obtained from these experiments is very poor, and that the experiment sizes necessary for attaining reasonably stable Top-K-Lists are much larger than those currently available. Simulations support our analytical results.
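The contrast between the two similarity measures can be illustrated with a small simulation. This is only a sketch, not the paper's analytical method: the Gaussian score model, the noise level `sigma`, and the list size `k` are illustrative choices.

```python
import random

def kendall_tau(a, b):
    """Kendall's tau rank correlation between two score vectors (tau-a, no ties)."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def top_k_overlap(a, b, k):
    """Fraction of objects shared by the Top-K lists induced by the two score vectors."""
    top_a = set(sorted(range(len(a)), key=lambda i: -a[i])[:k])
    top_b = set(sorted(range(len(b)), key=lambda i: -b[i])[:k])
    return len(top_a & top_b) / k

random.seed(0)
n, k, sigma = 200, 20, 0.5               # illustrative problem size and noise level
true_scores = [random.gauss(0, 1) for _ in range(n)]
noisy = [s + random.gauss(0, sigma) for s in true_scores]  # noisy re-measurement

print(f"Kendall tau:    {kendall_tau(true_scores, noisy):.3f}")
print(f"Top-{k} overlap: {top_k_overlap(true_scores, noisy, k):.3f}")
```

With moderate noise, the global ordering (Kendall's tau) typically remains well preserved while the Top-K list already loses a noticeable fraction of its members, consistent with the abstract's claim that Top-K overlap is the more noise-sensitive measure.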
Cite
Text
Zuk et al. "Ranking Under Uncertainty." Conference on Uncertainty in Artificial Intelligence, 2007.
Markdown
[Zuk et al. "Ranking Under Uncertainty." Conference on Uncertainty in Artificial Intelligence, 2007.](https://mlanthology.org/uai/2007/zuk2007uai-ranking/)
BibTeX
@inproceedings{zuk2007uai-ranking,
  title = {{Ranking Under Uncertainty}},
  author = {Zuk, Or and Ein-Dor, Liat and Domany, Eytan},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year = {2007},
  pages = {466--473},
  url = {https://mlanthology.org/uai/2007/zuk2007uai-ranking/}
}