Learning Global Models Based on Distributed Data Abstractions

Abstract

With the increasing demand for massive, distributed data analysis, achieving highly accurate global analysis results while preserving local data privacy has become an increasingly important research issue. In this paper, we propose to adopt a model-based method (the Gaussian mixture model, GMM) for local data abstraction and to aggregate the local model parameters for learning global models. To support global model learning based solely on local GMM parameters, instead of on virtual data regenerated from the aggregated local models, a novel EM-like algorithm is derived. Experiments on synthetic datasets demonstrate that the proposed method achieves global model accuracy comparable to that of the data regeneration approach at a much lower computational cost.
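To make the setup concrete, the following is a minimal sketch of the distributed-abstraction idea: each site fits a local GMM with standard EM and shares only its parameters (weights, means, variances), never the raw data, and a coordinator pools the components with site-size weights into a global mixture. This is an illustrative sketch of the problem setting only; the paper's contribution, an EM-like algorithm that learns a compact global model directly from the pooled local parameters, is not reproduced here, and all function names and data are hypothetical.

```python
import numpy as np

def fit_gmm_1d(x, k, iters=50, seed=0):
    """Fit a 1-D Gaussian mixture with k components via standard EM."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)   # init means at random points
    var = np.full(k, x.var())                   # init variances at data variance
    w = np.full(k, 1.0 / k)                     # uniform initial mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
site_a = rng.normal(0.0, 1.0, 500)   # local data at site A (never shared)
site_b = rng.normal(5.0, 1.0, 700)   # local data at site B (never shared)

# Each site abstracts its data as GMM parameters.
params = [fit_gmm_1d(site_a, 2), fit_gmm_1d(site_b, 2)]
sizes = np.array([len(site_a), len(site_b)])

# Coordinator pools the components, re-weighting by site sample counts.
pool_w = np.concatenate([p[0] * n for p, n in zip(params, sizes)]) / sizes.sum()
pool_mu = np.concatenate([p[1] for p in params])
pool_var = np.concatenate([p[2] for p in params])
```

The pooled mixture preserves global statistics (e.g. its mean equals the mean of the combined data) while each site transmits only O(k) parameters rather than its dataset; the paper's algorithm goes further by learning a smaller global model from these parameters without regenerating virtual samples.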

Cite

Text

Zhang and Cheung. "Learning Global Models Based on Distributed Data Abstractions." International Joint Conference on Artificial Intelligence, 2005.

Markdown

[Zhang and Cheung. "Learning Global Models Based on Distributed Data Abstractions." International Joint Conference on Artificial Intelligence, 2005.](https://mlanthology.org/ijcai/2005/zhang2005ijcai-learning/)

BibTeX

@inproceedings{zhang2005ijcai-learning,
  title     = {{Learning Global Models Based on Distributed Data Abstractions}},
  author    = {Zhang, Xiaofeng and Cheung, William K.},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2005},
  pages     = {1645--1646},
  url       = {https://mlanthology.org/ijcai/2005/zhang2005ijcai-learning/}
}