Atashgahi, Zahra

10 publications

ECML-PKDD 2024. "Adaptive Sparsity Level During Training for Efficient Time Series Forecasting with Transformers." Zahra Atashgahi, Mykola Pechenizkiy, Raymond N. J. Veldhuis, Decebal Constantin Mocanu.
AISTATS 2024. "Supervised Feature Selection via Ensemble Gradient Information from Sparse Neural Networks." Kaiting Liu, Zahra Atashgahi, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu.
IJCAI 2023. "Cost-Effective Artificial Neural Networks." Zahra Atashgahi.
TMLR 2023. "Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks." Zahra Atashgahi, Xuhao Zhang, Neil Kichler, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Raymond Veldhuis, Decebal Constantin Mocanu.
MLJ 2022. "A Brain-Inspired Algorithm for Training Highly Sparse Neural Networks." Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy.
ICLR 2022. "Deep Ensembling with No Overhead for Either Training or Testing: The All-Round Blessings of Dynamic Sparsity." Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu.
MLJ 2022. "Quick and Robust Feature Selection: The Strength of Energy-Efficient Sparse Training for Autoencoders." Zahra Atashgahi, Ghada Sokar, Tim van der Lee, Elena Mocanu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy.
NeurIPS 2022. "Where to Pay Attention in Sparse Training for Feature Selection?" Ghada Sokar, Zahra Atashgahi, Mykola Pechenizkiy, Decebal Constantin Mocanu.
NeurIPS 2021. "Sparse Training via Boosting Pruning Plasticity with Neuroregeneration." Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu.
ECML-PKDD 2020. "Topological Insights into Sparse Neural Networks." Shiwei Liu, Tim van der Lee, Anil Yaman, Zahra Atashgahi, Davide Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu.