Mocanu, Decebal Constantin

26 publications

ICLR 2025. Dynamic Sparse Training Versus Dense Training: The Unexpected Winner in Image Corruption Robustness. Boqian Wu, Qiao Xiao, Shunxin Wang, Nicola Strisciuglio, Mykola Pechenizkiy, Maurice van Keulen, Decebal Constantin Mocanu, Elena Mocanu.
TMLR 2025. Sparse-to-Sparse Training of Diffusion Models. Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva.
ECML-PKDD 2024. Adaptive Sparsity Level During Training for Efficient Time Series Forecasting with Transformers. Zahra Atashgahi, Mykola Pechenizkiy, Raymond N. J. Veldhuis, Decebal Constantin Mocanu.
CPAL 2024. Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates. Murat Onur Yildirim, Elif Ceren Gok, Ghada Sokar, Decebal Constantin Mocanu, Joaquin Vanschoren.
NeurIPS 2024. E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation. Boqian Wu, Qiao Xiao, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Decebal Constantin Mocanu, Maurice van Keulen, Elena Mocanu.
NeurIPSW 2024. LiMTR: Time Series Motion Prediction for Diverse Road Users Through Multimodal Feature Integration. Camiel Oerlemans, Bram Grooten, Michiel Braat, Alaa Alassi, Emilia Silvas, Decebal Constantin Mocanu.
AISTATS 2024. Supervised Feature Selection via Ensemble Gradient Information from Sparse Neural Networks. Kaiting Liu, Zahra Atashgahi, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu.
NeurIPS 2023. Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training. Aleksandra Nowak, Bram Grooten, Decebal Constantin Mocanu, Jacek Tabor.
ICLR 2023. More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 Using Sparsity. Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Tommi Kärkkäinen, Mykola Pechenizkiy, Decebal Constantin Mocanu, Zhangyang Wang.
TMLR 2023. Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks. Zahra Atashgahi, Xuhao Zhang, Neil Kichler, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Raymond N. J. Veldhuis, Decebal Constantin Mocanu.
MLJ 2022. A Brain-Inspired Algorithm for Training Highly Sparse Neural Networks. Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy.
ECML-PKDD 2022. Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks. Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy.
ICLR 2022. Deep Ensembling with No Overhead for Either Training or Testing: The All-Round Blessings of Dynamic Sparsity. Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu.
NeurIPS 2022. Dynamic Sparse Network for Time Series Classification: Learning What to “See”. Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu.
IJCAI 2022. Dynamic Sparse Training for Deep Reinforcement Learning. Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, Peter Stone.
MLJ 2022. Quick and Robust Feature Selection: The Strength of Energy-Efficient Sparse Training for Autoencoders. Zahra Atashgahi, Ghada Sokar, Tim van der Lee, Elena Mocanu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy.
ICLR 2022. The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training. Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy.
NeurIPS 2022. Where to Pay Attention in Sparse Training for Feature Selection? Ghada Sokar, Zahra Atashgahi, Mykola Pechenizkiy, Decebal Constantin Mocanu.
LoG 2022. You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets. Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu.
ICML 2021. Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training. Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, Mykola Pechenizkiy.
ICML 2021. Selfish Sparse RNN Training. Shiwei Liu, Decebal Constantin Mocanu, Yulong Pei, Mykola Pechenizkiy.
NeurIPS 2021. Sparse Training via Boosting Pruning Plasticity with Neuroregeneration. Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu.
ECML-PKDD 2020. Topological Insights into Sparse Neural Networks. Shiwei Liu, Tim van der Lee, Anil Yaman, Zahra Atashgahi, Davide Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu.
MLJ 2016. A Topological Insight into Restricted Boltzmann Machines. Decebal Constantin Mocanu, Elena Mocanu, Phuong H. Nguyen, Madeleine Gibescu, Antonio Liotta.
IJCAI 2016. On the Synergy of Network Science and Artificial Intelligence. Decebal Constantin Mocanu.
ECML-PKDD 2013. Automatically Mapped Transfer Between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines. Haitham Bou-Ammar, Decebal Constantin Mocanu, Matthew E. Taylor, Kurt Driessens, Karl Tuyls, Gerhard Weiss.