AAAI New Faculty Highlights: General and Scalable Optimization for Robust AI
Abstract
Deep neural networks (DNNs) can easily be manipulated by an adversary to output drastically different predictions, and this can be done in a controlled and directed way. This process, known as an adversarial attack, is considered one of the major hurdles to using DNNs in high-stakes, real-world applications. Although developing methods to secure DNNs against adversaries is now a primary research focus, existing work suffers from limitations such as a lack of optimization generality and a lack of optimization scalability. My research highlights will offer a holistic understanding of the optimization foundations for robust AI, peer into their emerging challenges, and present recent solutions developed by my research group.
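To make the "controlled and directed" manipulation concrete, the sketch below shows a hypothetical one-step targeted attack in the style of FGSM. It is an illustration only, not the method studied in the paper; `model`, `x`, and `target` are assumed placeholders for any differentiable PyTorch classifier, an input batch, and the adversary-chosen label.

```python
# Minimal sketch of a targeted adversarial attack (FGSM-style), assuming a
# PyTorch classifier. All names here are illustrative placeholders.
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target, epsilon=8 / 255):
    """One gradient step that nudges inputs x toward the adversary's target class."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target)
    loss.backward()
    # Descend the loss w.r.t. the target label (targeted attack), keeping the
    # perturbation within an epsilon-ball and the valid pixel range [0, 1].
    x_adv = x_adv - epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

In practice, stronger multi-step variants (e.g., PGD) iterate this update, which is one reason robustifying DNNs against such attacks raises the scalability questions the abstract mentions.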
Cite
Text
Liu. "AAAI New Faculty Highlights: General and Scalable Optimization for Robust AI." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26814Markdown
[Liu. "AAAI New Faculty Highlights: General and Scalable Optimization for Robust AI." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/liu2023aaai-aaai/) doi:10.1609/AAAI.V37I13.26814BibTeX
@inproceedings{liu2023aaai-aaai,
title = {{AAAI New Faculty Highlights: General and Scalable Optimization for Robust AI}},
author = {Liu, Sijia},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {15447},
doi = {10.1609/AAAI.V37I13.26814},
url = {https://mlanthology.org/aaai/2023/liu2023aaai-aaai/}
}