CPAL 2025
55 papers
A Case Study of Low Ranked Self-Expressive Structures in Neural Network Representations
Uday Singh Saini, William Shiao, Yahya Sattar, Yogesh Dahiya, Samet Oymak, Evangelos E. Papalexakis

Adversarially Robust Spiking Neural Networks with Sparse Connectivity
Mathias Schmolli, Maximilian Baronig, Robert Legenstein, Ozan Ozdenizci

Enhancing Video Representation Learning with Temporal Differentiation
Siyi Chen, Minkyu Choi, Zesen Zhao, Kuan Han, Qing Qu, Zhongming Liu

HSR-Enhanced Sparse Attention Acceleration
Bo Chen, Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song

Meta ControlNet: Enhancing Task Adaptation via Meta Learning
Junjie Yang, Jinze Zhao, Peihao Wang, Zhangyang Wang, Yingbin Liang

Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers
Abhimanyu Rajeshkumar Bambhaniya, Amir Yazdanbakhsh, Suvinay Subramanian, Sheng-Chun Kao, Shivani Agrawal, Utku Evci, Tushar Krishna

Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients
Zhenyu Zhang, Ajay Kumar Jaiswal, Lu Yin, Shiwei Liu, Jiawei Zhao, Yuandong Tian, Zhangyang Wang

Quantum EigenGame for Excited State Calculation
David A. Quiroga, Jason Han, Anastasios Kyrillidis

Sparse MoE as a New Treatment: Addressing Forgetting, Fitting, Learning Issues in Multi-Modal Multi-Task Learning
Jie Peng, Sukwon Yun, Kaixiong Zhou, Ruida Zhou, Thomas Hartvigsen, Yanyong Zhang, Zhangyang Wang, Tianlong Chen

Streaming Kernel PCA Algorithm with Small Space
Yichuan Deng, Jiangxuan Long, Zhao Song, Zifan Wang, Han Zhang

Towards Vector Optimization on Low-Dimensional Vector Symbolic Architecture
Shijin Duan, Yejia Liu, Gaowen Liu, Ramana Rao Kompella, Shaolei Ren, Xiaolin Xu

White-Box Error Correction Code Transformer
Ziyan Zheng, Chin Wa Lau, Nian Guo, Xiang Shi, Shao-Lun Huang