Huang, Tianjin

21 publications

ICLR 2025 · Composable Interventions for Language Models · Arinbjörn Kolbeinsson, Kyle O'Brien, Tianjin Huang, Shanghua Gao, Shiwei Liu, Jonathan Richard Schwarz, Anurag Jayant Vaidya, Faisal Mahmood, Marinka Zitnik, Tianlong Chen, Thomas Hartvigsen
ICLR 2025 · Enhancing Robust Fairness via Confusional Spectral Regularization · Gaojie Jin, Sihao Wu, Jiaxu Liu, Tianjin Huang, Ronghui Mu
ICML 2025 · LIFT the Veil for the Truth: Principal Weights Emerge After Rank Reduction for Reasoning-Focused Supervised Fine-Tuning · Zihang Liu, Tianyu Pang, Oleg Balabanov, Chaoqun Yang, Tianjin Huang, Lu Yin, Yaoqing Yang, Shiwei Liu
TMLR 2025 · Pushing the Limits of Sparsity: A Bag of Tricks for Extreme Pruning · Andy Li, Aiden Durrant, Milan Markovic, Tianjin Huang, Souvik Kundu, Tianlong Chen, Lu Yin, Georgios Leontidis
NeurIPS 2025 · REOBench: Benchmarking Robustness of Earth Observation Foundation Models · Xiang Li, Yong Tao, Siyuan Zhang, Siwei Liu, Zhitong Xiong, Chunbo Luo, Lu Liu, Mykola Pechenizkiy, Xiao Xiang Zhu, Tianjin Huang
ICLR 2025 · SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training · Tianjin Huang, Ziquan Zhu, Gaojie Jin, Lu Liu, Zhangyang Wang, Shiwei Liu
ICLRW 2025 · SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training · Tianjin Huang, Ziquan Zhu, Gaojie Jin, Lu Liu, Zhangyang Wang, Shiwei Liu
ICLRW 2025 · Stable-SPAM: How to Train in 4-Bit More Stably than 16-Bit Adam · Tianjin Huang, Haotian Hu, Zhenyu Zhang, Gaojie Jin, Xiang Li, Li Shen, Tianlong Chen, Lu Liu, Qingsong Wen, Zhangyang Wang, Shiwei Liu
AAAI 2025 · Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective · Can Jin, Tianjin Huang, Yihua Zhang, Mykola Pechenizkiy, Sijia Liu, Shiwei Liu, Tianlong Chen
ICLRW 2024 · Composing Knowledge and Compression Interventions for Language Models · Arinbjörn Kolbeinsson, Tianjin Huang, Shanghua Gao, Shiwei Liu, Jonathan Richard Schwarz, Anurag Jayant Vaidya, Faisal Mahmood, Marinka Zitnik, Tianlong Chen, Thomas Hartvigsen
ICML 2023 · Are Large Kernels Better Teachers than Transformers for ConvNets? · Tianjin Huang, Lu Yin, Zhenyu Zhang, Li Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu
NeurIPS 2023 · Dynamic Sparsity Is Channel-Level Sparsity Learner · Lu Yin, Gen Li, Meng Fang, Li Shen, Tianjin Huang, Zhangyang Wang, Vlado Menkovski, Xiaolong Ma, Mykola Pechenizkiy, Shiwei Liu
ECML-PKDD 2023 · Enhancing Adversarial Training via Reweighting Optimization Trajectory · Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlado Menkovski, Lu Yin, Yulong Pei, Mykola Pechenizkiy
AAAI 2023 · Lottery Pools: Winning More by Interpolating Tickets Without Increasing Training or Inference Cost · Lu Yin, Shiwei Liu, Meng Fang, Tianjin Huang, Vlado Menkovski, Mykola Pechenizkiy
ICLR 2023 · Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! · Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, Zhangyang Wang
ECML-PKDD 2022 · Hop-Count Based Self-Supervised Anomaly Detection on Attributed Networks · Tianjin Huang, Yulong Pei, Vlado Menkovski, Mykola Pechenizkiy
MLJ 2022 · ResGCN: Attention-Based Deep Residual Modeling for Anomaly Detection on Attributed Networks · Yulong Pei, Tianjin Huang, Werner van Ipenburg, Mykola Pechenizkiy
UAI 2022 · Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training · Lu Yin, Vlado Menkovski, Meng Fang, Tianjin Huang, Yulong Pei, Mykola Pechenizkiy
LoG 2022 · You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets · Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu
ACML 2021 · Calibrated Adversarial Training · Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy
ECML-PKDD 2021 · On Generalization of Graph Autoencoders with Adversarial Training · Tianjin Huang, Yulong Pei, Vlado Menkovski, Mykola Pechenizkiy