Principle Components and Importance Ranking of Distributed Anomalies
Abstract
Correlations between locally averaged host observations, at different times and places, carry information about the associations between the hosts in a network. These smoothed, pseudo-continuous time-series imply relationships with entities in the wider environment. For anomaly detection, mining this information can provide a valuable source of observational experience for identifying comparative anomalies or rejecting false anomalies. The difficulties with distributed analysis lie in collating the distributed data and in comparing observables on different hosts, in different frames of reference. In the present work, we examine two methods (Principal Component Analysis and Eigenvector Centrality) that shed light on the usefulness of comparing data destined for different locations in a network.
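The PCA side of this idea can be illustrated with a minimal sketch (not the authors' implementation; the data, seed, and residual scoring here are assumptions): observations from several hosts form a matrix whose dominant principal components capture the shared, correlated behaviour, so an observation with a large residual after projecting onto those components is a candidate comparative anomaly.

```python
import numpy as np

# Hypothetical data: rows are time samples, columns are hosts;
# entries stand in for locally averaged host observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[:, 1] += 0.9 * X[:, 0]   # two hosts with correlated behaviour
X[150, 3] += 10.0          # an injected anomaly on one host

# Centre the data and extract principal components via SVD.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Reconstruct from the top-k components; the residual measures how far
# each time sample departs from the dominant shared behaviour.
k = 1
Xk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
residual = np.linalg.norm(Xc - Xk, axis=1)

# The sample with the largest residual is flagged as anomalous.
print(int(np.argmax(residual)))
```

The point of the sketch is the comparative aspect: the injected spike is anomalous relative to the correlation structure shared across hosts, not relative to any single host's local history.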
Cite
Text
Begnum and Burgess. "Principle Components and Importance Ranking of Distributed Anomalies." Machine Learning, 58:217-230, 2005. doi:10.1007/s10994-005-5827-4

Markdown
[Begnum and Burgess. "Principle Components and Importance Ranking of Distributed Anomalies." Machine Learning, 58:217-230, 2005.](https://mlanthology.org/mlj/2005/begnum2005mlj-principle/) doi:10.1007/s10994-005-5827-4

BibTeX
@article{begnum2005mlj-principle,
title = {{Principle Components and Importance Ranking of Distributed Anomalies}},
author = {Begnum, Kyrre M. and Burgess, Mark},
journal = {Machine Learning},
year = {2005},
pages = {217--230},
doi = {10.1007/s10994-005-5827-4},
volume = {58},
url = {https://mlanthology.org/mlj/2005/begnum2005mlj-principle/}
}