Solving Prediction Games with Parallel Batch Gradient Descent

Abstract

Learning problems in which an adversary can perturb instances at application time can be modeled as games with data-dependent cost functions. At an equilibrium point, the learner's model parameters are the optimal reaction to the data generator's perturbation, and vice versa. Finding an equilibrium point requires solving a difficult optimization problem in which both the learner's model parameters and the possible perturbations are free parameters. We study a perturbation model and derive optimization procedures that use a single iteration of batch-parallel gradient descent followed by an aggregation step, thus allowing for parallelization with minimal synchronization overhead.
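The batch-parallel scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact procedure for prediction games; it shows the generic pattern on a plain least-squares problem, assuming the simplest possible aggregation (averaging the per-batch gradients). The function names and the averaging rule are illustrative choices, not taken from the paper.

```python
import numpy as np

def batch_gradients(w, batches):
    # Each batch's gradient of the mean squared loss can be computed
    # independently, e.g. on separate workers, with no communication.
    return [X.T @ (X @ w - y) / len(y) for X, y in batches]

def aggregate_and_step(w, grads, lr=0.1):
    # Single synchronization point per iteration: average the batch
    # gradients (illustrative aggregation rule), then one descent step.
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

# Split the data into 4 batches; the gradient computations per batch
# are embarrassingly parallel, only the averaging step synchronizes.
batches = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(3)
for _ in range(300):
    w = aggregate_and_step(w, batch_gradients(w, batches))
```

After a few hundred iterations `w` recovers `w_true` on this noiseless problem; in each iteration the only synchronization overhead is the aggregation of four gradient vectors.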

Cite

Text

Großhans and Scheffer. "Solving Prediction Games with Parallel Batch Gradient Descent." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2015. doi:10.1007/978-3-319-23528-8_10

Markdown

[Großhans and Scheffer. "Solving Prediction Games with Parallel Batch Gradient Descent." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2015.](https://mlanthology.org/ecmlpkdd/2015/grohans2015ecmlpkdd-solving/) doi:10.1007/978-3-319-23528-8_10

BibTeX

@inproceedings{grohans2015ecmlpkdd-solving,
  title     = {{Solving Prediction Games with Parallel Batch Gradient Descent}},
  author    = {Großhans, Michael and Scheffer, Tobias},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2015},
  pages     = {152--167},
  doi       = {10.1007/978-3-319-23528-8_10},
  url       = {https://mlanthology.org/ecmlpkdd/2015/grohans2015ecmlpkdd-solving/}
}