Fort, Stanislav

11 publications

ICLR 2025. "Scaling Laws for Adversarial Attacks on Language Model Activations and Tokens." Stanislav Fort.
NeurIPSW 2024. "Ensemble Everything Everywhere: Multi-Scale Aggregation for Adversarial Robustness." Stanislav Fort, Balaji Lakshminarayanan.
NeurIPSW 2024. "Standard Adversarial Attacks Only Fool the Final Layer." Stanislav Fort.
ICLR 2022. "How Many Degrees of Freedom Do We Need to Train Deep Networks: A Loss Landscape Perspective." Brett W. Larsen, Stanislav Fort, Nic Becker, Surya Ganguli.
NeurIPS 2021. "Exploring the Limits of Out-of-Distribution Detection." Stanislav Fort, Jie Ren, Balaji Lakshminarayanan.
ICML 2021. "On Monotonic Linear Interpolation of Neural Network Parameters." James R. Lucas, Juhan Bae, Michael R. Zhang, Stanislav Fort, Richard Zemel, Roger B. Grosse.
ICLR 2021. "Training Independent Subnetworks for Robust Prediction." Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew Mingbo Dai, Dustin Tran.
NeurIPS 2020. "Deep Learning Versus Kernel Learning: An Empirical Study of Loss Landscape Geometry and the Time Evolution of the Neural Tangent Kernel." Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, Surya Ganguli.
ICLR 2020. "The Break-Even Point on Optimization Trajectories of Deep Neural Networks." Stanislaw Jastrzebski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho, Krzysztof Geras.
NeurIPS 2019. "Large Scale Structure of Neural Network Loss Landscapes." Stanislav Fort, Stanislaw Jastrzebski.
AAAI 2019. "The Goldilocks Zone: Towards Better Understanding of Neural Network Loss Landscapes." Stanislav Fort, Adam Scherlis.