The Future of Cyber Systems: Human-AI Reinforcement Learning with Adversarial Robustness

Abstract

Integrating adversarial machine learning (AML) with cyber data representations that support reinforcement learning would unlock human-AI systems capable of dynamically and robustly defending against novel attacks at machine speed and with human intelligence. All machine learning (ML) has an underlying need for robustness to natural errors and malicious tampering. However, unlike many consumer and commercial models, ML systems built for cyber will operate in an inherently adversarial environment, with skilled adversaries exploiting any flaw. This paper outlines the research challenges, integration points, and programmatic importance of such a system, while highlighting the social and scientific benefits of pursuing this ambitious program.

Cite

Text

Nichols. "The Future of Cyber Systems: Human-AI Reinforcement Learning with Adversarial Robustness." ICML 2023 Workshops: AdvML-Frontiers, 2023.

Markdown

[Nichols. "The Future of Cyber Systems: Human-AI Reinforcement Learning with Adversarial Robustness." ICML 2023 Workshops: AdvML-Frontiers, 2023.](https://mlanthology.org/icmlw/2023/nichols2023icmlw-future/)

BibTeX

@inproceedings{nichols2023icmlw-future,
  title     = {{The Future of Cyber Systems: Human-AI Reinforcement Learning with Adversarial Robustness}},
  author    = {Nichols, Nicole},
  booktitle = {ICML 2023 Workshops: AdvML-Frontiers},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/nichols2023icmlw-future/}
}