Ye, Haishan

22 publications

NeurIPS 2025. A Near-Optimal Algorithm for Decentralized Convex-Concave Finite-Sum Minimax Optimization. Hongxu Chen, Ke Wei, Haishan Ye, Luo Luo.
ICLR 2025. ProAdvPrompter: A Two-Stage Journey to Effective Adversarial Prompting for LLMs. Hao Di, Tong He, Haishan Ye, Yinghui Huang, Xiangyu Chang, Guang Dai, Ivor Tsang.
ICLR 2025. Second-Order Fine-Tuning Without Pain for LLMs: A Hessian Informed Zeroth-Order Optimizer. Yanjun Zhao, Sizhe Dang, Haishan Ye, Guang Dai, Yi Qian, Ivor Tsang.
AISTATS 2024. An Efficient Stochastic Algorithm for Decentralized Nonconvex-Strongly-Concave Minimax Optimization. Lesi Chen, Haishan Ye, Luo Luo.
ICML 2024. Can Gaussian Sketching Converge Faster on a Preconditioned Landscape? Yilong Wang, Haishan Ye, Guang Dai, Ivor Tsang.
ICLR 2024. Decentralized Riemannian Conjugate Gradient Method on the Stiefel Manifold. Jun Chen, Haishan Ye, Mengmeng Wang, Tianxin Huang, Guang Dai, Ivor Tsang, Yong Liu.
ICML 2024. Double Stochasticity Gazes Faster: Snap-Shot Decentralized Stochastic Gradient Tracking Methods. Hao Di, Haishan Ye, Xiangyu Chang, Guang Dai, Ivor Tsang.
ICML 2024. Double Variance Reduction: A Smoothing Trick for Composite Optimization Problems Without First-Order Gradient. Hao Di, Haishan Ye, Yueling Zhang, Xiangyu Chang, Guang Dai, Ivor Tsang.
NeurIPS 2024. Near-Optimal Distributed Minimax Optimization Under the Second-Order Similarity. Qihao Zhou, Haishan Ye, Luo Luo.
JMLR 2023. Multi-Consensus Decentralized Accelerated Gradient Descent. Haishan Ye, Luo Luo, Ziang Zhou, Tong Zhang.
NeurIPS 2023. Stochastic Distributed Optimization Under Average Second-Order Similarity: Algorithms and Analysis. Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang.
ICLR 2022. Eigencurve: Optimal Learning Rate Schedule for SGD on Quadratic Objectives with Skewed Hessian Spectrums. Rui Pan, Haishan Ye, Tong Zhang.
JMLR 2022. Explicit Convergence Rates of Greedy and Random Quasi-Newton Methods. Dachao Lin, Haishan Ye, Zhihua Zhang.
JMLR 2021. Approximate Newton Methods. Haishan Ye, Luo Luo, Zhihua Zhang.
JMLR 2021. DeEPCA: Decentralized Exact PCA with Linear Convergence Rate. Haishan Ye, Tong Zhang.
NeurIPS 2021. Greedy and Random Quasi-Newton Methods with Faster Explicit Superlinear Convergence. Dachao Lin, Haishan Ye, Zhihua Zhang.
AAAI 2021. Revisiting Co-Occurring Directions: Sharper Analysis and Efficient Algorithm for Sparse Matrices. Luo Luo, Cheng Chen, Guangzeng Xie, Haishan Ye.
NeurIPS 2020. Decentralized Accelerated Proximal Gradient Descent. Haishan Ye, Ziang Zhou, Luo Luo, Tong Zhang.
JMLR 2020. Nesterov's Acceleration for Approximate Newton. Haishan Ye, Luo Luo, Zhihua Zhang.
NeurIPS 2020. Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems. Luo Luo, Haishan Ye, Zhichao Huang, Tong Zhang.
ICML 2017. Approximate Newton Methods and Their Local Convergence. Haishan Ye, Luo Luo, Zhihua Zhang.
AAAI 2016. Accelerating Random Kaczmarz Algorithm Based on Clustering Information. Yujun Li, Kaichun Mo, Haishan Ye.