Safety Validation of Learning-Based Autonomous Systems: A Multi-Fidelity Approach
Abstract
In recent years, learning-based autonomous systems have emerged as a promising tool for automating many crucial tasks. The key question is how to build trust in such systems for safety-critical applications. My research focuses on creating and validating safety frameworks that leverage multiple sources of information. The ultimate goal is to establish a solid foundation for a long-term research program on the role of simulator fidelity in safety validation and robot learning.
Baheri, Ali. "Safety Validation of Learning-Based Autonomous Systems: A Multi-Fidelity Approach." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26799
@inproceedings{baheri2023aaai-safety,
title = {{Safety Validation of Learning-Based Autonomous Systems: A Multi-Fidelity Approach}},
author = {Baheri, Ali},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {15432},
doi = {10.1609/AAAI.V37I13.26799},
url = {https://mlanthology.org/aaai/2023/baheri2023aaai-safety/}
}