Hubara, Itay

12 publications

TMLR 2025 · Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks · Edan Kinderman, Itay Hubara, Haggai Maron, Daniel Soudry
ICLR 2024 · Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators · Yaniv Blumenfeld, Itay Hubara, Daniel Soudry
ICLR 2023 · Minimum Variance Unbiased N:M Sparsity for the Neural Gradients · Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry
NeurIPS Workshop 2023 · Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators · Yaniv Blumenfeld, Itay Hubara, Daniel Soudry
NeurIPS 2021 · Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks · Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, Daniel Soudry
ICML 2021 · Accurate Post Training Quantization with Small Calibration Sets · Itay Hubara, Yury Nahshan, Yair Hanani, Ron Banner, Daniel Soudry
ICLR 2018 · Fix Your Classifier: The Marginal Value of Training the Last Weight Layer · Elad Hoffer, Itay Hubara, Daniel Soudry
NeurIPS 2018 · Scalable Methods for 8-Bit Training of Neural Networks · Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry
ICLR 2017 · Playing SNES in the Retro Learning Environment · Nadav Bhonker, Shai Rozenberg, Itay Hubara
NeurIPS 2017 · Train Longer, Generalize Better: Closing the Generalization Gap in Large Batch Training of Neural Networks · Elad Hoffer, Itay Hubara, Daniel Soudry
NeurIPS 2016 · Binarized Neural Networks · Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio
NeurIPS 2014 · Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights · Daniel Soudry, Itay Hubara, Ron Meir