Synthesis from Satisficing and Temporal Goals

Abstract

Reactive synthesis from high-level specifications that combine hard constraints expressed in Linear Temporal Logic (LTL) with soft constraints expressed by discounted-sum (DS) rewards has applications in planning and reinforcement learning. An existing approach combines techniques from LTL synthesis with optimization of the DS rewards, but it has not yielded a sound algorithm. An alternative approach, which combines LTL synthesis with satisficing DS rewards (rewards that meet a given threshold), is sound and complete for integer discount factors; in practice, however, a fractional discount factor is desired. This work extends the existing satisficing approach, presenting the first sound algorithm for synthesis from LTL and DS rewards with fractional discount factors. The utility of our algorithm is demonstrated on robotic planning domains.
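To make the satisficing notion concrete, the following is a minimal sketch (not the paper's algorithm) of the standard discounted-sum value of a finite reward prefix, DS(r, d) = Σᵢ dⁱ·rᵢ, and the satisficing check against a threshold; the function names are illustrative only.

```python
def discounted_sum(rewards, discount):
    # DS value of a finite reward prefix: sum_i discount^i * rewards[i].
    # A fractional discount factor (0 < discount < 1) weights later rewards less.
    return sum((discount ** i) * r for i, r in enumerate(rewards))

def satisfices(rewards, discount, threshold):
    # Satisficing: does the discounted-sum value reach the threshold?
    return discounted_sum(rewards, discount) >= threshold
```

For example, the reward prefix [1, 1, 1] with discount factor 0.5 has DS value 1 + 0.5 + 0.25 = 1.75, so it satisfices threshold 1.5 but not threshold 2.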

Cite

Text

Bansal et al. "Synthesis from Satisficing and Temporal Goals." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/aaai.v36i9.21202

Markdown

[Bansal et al. "Synthesis from Satisficing and Temporal Goals." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/bansal2022aaai-synthesis/) doi:10.1609/aaai.v36i9.21202

BibTeX

@inproceedings{bansal2022aaai-synthesis,
  title     = {{Synthesis from Satisficing and Temporal Goals}},
  author    = {Bansal, Suguman and Kavraki, Lydia E. and Vardi, Moshe Y. and Wells, Andrew M.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {9679--9686},
  doi       = {10.1609/aaai.v36i9.21202},
  url       = {https://mlanthology.org/aaai/2022/bansal2022aaai-synthesis/}
}