Learning Probabilistic Relational Dynamics for Multiple Tasks
Abstract
The ways in which an agent's actions affect the world can often be modeled compactly using a set of relational probabilistic planning rules. This paper addresses the problem of learning such rule sets for multiple related tasks. We take a hierarchical Bayesian approach, in which the system learns a prior distribution over rule sets. We present a class of prior distributions parameterized by a rule set prototype that is stochastically modified to produce a task-specific rule set. We also describe a coordinate ascent algorithm that iteratively optimizes the task-specific rule sets and the prior distribution. Experiments using this algorithm show that transferring information from related tasks significantly reduces the amount of training data required to predict action effects in blocks-world domains.
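The alternating optimization described in the abstract can be illustrated with a toy sketch. This is not the paper's algorithm (which operates over relational rule sets); it is a hedged analogy using a simple hierarchical model with scalar task parameters, where coordinate ascent alternates between fitting each task's parameter against a shared prototype and re-fitting the prototype to the task parameters. All names and the quadratic objective are illustrative assumptions.

```python
def coordinate_ascent(task_data, lam=1.0, iters=50):
    """Toy hierarchical coordinate ascent: each task parameter trades off
    fitting its own data against staying near a shared prototype mu,
    which is in turn re-fit to the current task parameters."""
    mu = 0.0                          # shared prototype (learned prior)
    thetas = [0.0] * len(task_data)   # task-specific parameters
    for _ in range(iters):
        # Step 1: optimize each task parameter given the prior:
        # argmin_theta  sum_i (y_i - theta)^2 + lam * (theta - mu)^2
        for t, ys in enumerate(task_data):
            thetas[t] = (sum(ys) + lam * mu) / (len(ys) + lam)
        # Step 2: re-fit the prior to the task-specific parameters.
        mu = sum(thetas) / len(thetas)
    return mu, thetas
```

A data-poor task (here, one observation) is pulled toward the prototype learned from the other tasks, which mirrors the transfer effect the abstract reports: related tasks reduce the data needed per task.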
Cite
Text
Deshpande et al. "Learning Probabilistic Relational Dynamics for Multiple Tasks." Conference on Uncertainty in Artificial Intelligence, 2007. doi:10.5555/3020488.3020499
Markdown
[Deshpande et al. "Learning Probabilistic Relational Dynamics for Multiple Tasks." Conference on Uncertainty in Artificial Intelligence, 2007.](https://mlanthology.org/uai/2007/deshpande2007uai-learning/) doi:10.5555/3020488.3020499
BibTeX
@inproceedings{deshpande2007uai-learning,
title = {{Learning Probabilistic Relational Dynamics for Multiple Tasks}},
author = {Deshpande, Ashwin and Milch, Brian and Zettlemoyer, Luke S. and Kaelbling, Leslie Pack},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2007},
pages = {83--92},
doi = {10.5555/3020488.3020499},
url = {https://mlanthology.org/uai/2007/deshpande2007uai-learning/}
}