Raginsky, Maxim

24 publications

FnTML 2025. Generalization Bounds: Perspectives from Information Theory and PAC-Bayes. Fredrik Hellström, Giuseppe Durisi, Benjamin Guedj, Maxim Raginsky.
L4DC 2024. Rademacher Complexity of Neural ODEs via Chen-Fliess Series. Joshua Hanson, Maxim Raginsky.
TMLR 2024. Transformer-Based Models Are Not Yet Perfect at Learning to Emulate Structural Recursion. Dylan Zhang, Curt Tigges, Zory Zhang, Stella Biderman, Maxim Raginsky, Talia Ringer.
NeurIPS 2023. A Unified Framework for Information-Theoretic Generalization Bounds. Yifeng Chu, Maxim Raginsky.
L4DC 2023. Nonlinear Controllability and Function Representation by Neural Stochastic Differential Equations. Tanya Veeravalli, Maxim Raginsky.
COLT 2022. Conference on Learning Theory 2022: Preface. Po-Ling Loh, Maxim Raginsky.
L4DC 2022. Input-to-State Stable Neural Ordinary Differential Equations with Applications to Transient Modeling of Circuits. Alan Yang, Jie Xiong, Maxim Raginsky, Elyse Rosenbaum.
NeurIPS 2021. Information-Theoretic Generalization Bounds for Black-Box Learning Algorithms. Hrayr Harutyunyan, Maxim Raginsky, Greg Ver Steeg, Aram Galstyan.
L4DC 2021. Learning Recurrent Neural Net Models of Nonlinear Systems. Joshua Hanson, Maxim Raginsky, Eduardo Sontag.
UAI 2020. Model-Augmented Conditional Mutual Information Estimation for Feature Selection. Alan Yang, AmirEmad Ghassami, Maxim Raginsky, Negar Kiyavash, Elyse Rosenbaum.
L4DC 2020. Universal Simulation of Stable Dynamical Systems by Recurrent Neural Nets. Joshua Hanson, Maxim Raginsky.
COLT 2019. Theoretical Guarantees for Sampling and Inference in Generative Models with Latent Diffusions. Belinda Tzen, Maxim Raginsky.
NeurIPS 2019. Universal Approximation of Input-Output Maps by Temporal Convolutional Nets. Joshua Hanson, Maxim Raginsky.
COLT 2018. Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability. Belinda Tzen, Tengyuan Liang, Maxim Raginsky.
NeurIPS 2018. Minimax Statistical Learning with Wasserstein Distances. Jaeho Lee, Maxim Raginsky.
ALT 2018. Sequential Prediction with Coded Side Information Under Logarithmic Loss. Yanina Shkel, Maxim Raginsky, Sergio Verdú.
NeurIPS 2017. Information-Theoretic Analysis of Generalization Capability of Learning Algorithms. Aolin Xu, Maxim Raginsky.
COLT 2017. Non-Convex Learning via Stochastic Gradient Langevin Dynamics: A Nonasymptotic Analysis. Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky.
NeurIPS 2011. Lower Bounds for Passive and Active Learning. Maxim Raginsky, Alexander Rakhlin.
CVPR 2009. An Empirical Bayes Approach to Contextual Region Classification. Svetlana Lazebnik, Maxim Raginsky.
NeurIPS 2009. Locality-Sensitive Binary Codes from Shift-Invariant Kernels. Maxim Raginsky, Svetlana Lazebnik.
NeurIPS 2008. Near-Minimax Recursive Density Estimation on the Binary Hypercube. Maxim Raginsky, Svetlana Lazebnik, Rebecca Willett, Jorge Silva.
AISTATS 2007. Learning Nearest-Neighbor Quantizers from Labeled Data by Information Loss Minimization. Svetlana Lazebnik, Maxim Raginsky.
NeurIPS 2005. Estimation of Intrinsic Dimensionality Using High-Rate Vector Quantization. Maxim Raginsky, Svetlana Lazebnik.