Dai, Josef

7 publications

NeurIPS 2025. "InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback." Boyuan Chen, Donghai Hong, Jiaming Ji, Jiacheng Zheng, Bowen Dong, Jiayi Zhou, Kaile Wang, Josef Dai, Xuyao Wang, Wenqi Chen, Qirui Zheng, Wenxin Li, Sirui Han, Yike Guo, Yaodong Yang.

NeurIPS 2025. "SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning." Borong Zhang, Yuhao Zhang, Jiaming Ji, Yingshan Lei, Josef Dai, Yuanpei Chen, Yaodong Yang.

AAAI 2025. "Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback." Jiayi Zhou, Jiaming Ji, Josef Dai, Yaodong Yang.

NeurIPSW 2024. "Language Models Resist Alignment." Jiaming Ji, Kaile Wang, Tianyi Qiu, Boyuan Chen, Changye Li, Hantao Lou, Jiayi Zhou, Josef Dai, Yaodong Yang.

ICLR 2024. "Safe RLHF: Safe Reinforcement Learning from Human Feedback." Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang.

NeurIPS 2023. "BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset." Jiaming Ji, Mickel Liu, Josef Dai, Xuehai Pan, Chi Zhang, Ce Bian, Boyuan Chen, Ruiyang Sun, Yizhou Wang, Yaodong Yang.

NeurIPS 2023. "Safety Gymnasium: A Unified Safe Reinforcement Learning Benchmark." Jiaming Ji, Borong Zhang, Jiayi Zhou, Xuehai Pan, Weidong Huang, Ruiyang Sun, Yiran Geng, Yifan Zhong, Josef Dai, Yaodong Yang.