Chen, Tianlong
157 publications
$\texttt{BetaConform}$: Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer. NeurIPS 2025.
Adapt-$\infty$: Scalable Continual Multimodal Instruction Tuning via Dynamic Data Selection. ICLR 2025.
BrainMoE: Cognition Joint Embedding via Mixture-of-Expert Towards Robust Brain Foundation Model. NeurIPS 2025.
IndustryEQA: Pushing the Frontiers of Embodied Question Answering in Industrial Scenarios. NeurIPS 2025.
Mozart: Modularized and Efficient MoE Training on 3.5D Wafer-Scale Chiplet Architectures. NeurIPS 2025.
PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches. ICLR 2025.
Sparse Transfer Learning Accelerates and Enhances Certified Robustness: A Comprehensive Study. AAAI 2025.
$\texttt{MoE-RBench}$: Towards Building Reliable Language Models with Sparse Mixture-of-Experts. ICML 2024.
GDeR: Safeguarding Efficiency, Balancing, and Robustness via Prototypical Graph Pruning. NeurIPS 2024.
GTBench: Uncovering the Strategic Reasoning Capabilities of LLMs via Game-Theoretic Evaluations. NeurIPS 2024.
Molecular Data Programming: Towards Molecule Pseudo-Labeling with Systematic Weak Supervision. CVPR 2024.
Sparse Transfer Learning Accelerates and Enhances Certified Robustness: A Comprehensive Study. NeurIPSW 2024.
Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention. AAAI 2024.
Two Heads Are Better than One: Boosting Graph Sparse Training via Semantic and Topological Awareness. ICML 2024.
Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation. TMLR 2024.
Enhancing NeRF Akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts. ICCV 2023.
Instant Soup: Cheap Pruning Ensembles in a Single Pass Can Draw Lottery Tickets from Large Models. ICML 2023.
The Emergence of Essential Sparsity in Large Pre-Trained Models: The Weights That Matter. NeurIPS 2023.
Bayesian Modeling and Uncertainty Quantification for Learning to Optimize: What, Why, and How. ICLR 2022.
Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining. ICLR 2022.
M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-Task Learning with Model-Accelerator Co-Design. NeurIPS 2022.
Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection Without Clean Datasets. NeurIPS 2022.
Sandwich Batch Normalization: A Drop-in Replacement for Feature Distribution Heterogeneity. WACV 2022.
You Are Caught Stealing My Winning Lottery Ticket! Making a Lottery Ticket Claim Its Ownership. NeurIPS 2021.
Calibrated Domain-Invariant Learning for Highly Generalizable Large Scale Re-Identification. WACV 2020.
Focus Longer to See Better: Recursively Refined Attention for Fine-Grained Image Classification. CVPRW 2020.
Once-for-All Adversarial Training: In-Situ Tradeoff Between Robustness and Accuracy for Free. NeurIPS 2020.