Xipeng Qiu (邱锡鹏)
Fudan University
H-index: 56
Top articles of Xipeng Qiu (邱锡鹏)
Title | Venue | Author(s) | Publication Date |
---|---|---|---|
A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond | arXiv preprint arXiv:2403.14734 | Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma | 2024/3/21 |
GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation | arXiv preprint arXiv:2402.15745 | Yi Zong, Xipeng Qiu | 2024/2/24 |
Code Needs Comments: Enhancing Code LLMs with Comment Augmentation | arXiv preprint arXiv:2402.13013 | Demin Song, Honglin Guo, Yunhua Zhou, Shuhao Xing, Yudong Wang | 2024/2/20 |
SpeechGPT-Gen: Scaling Chain-of-Information Speech Generation | arXiv preprint arXiv:2401.13527 | Dong Zhang, Xin Zhang, Jun Zhan, Shimin Li, Yaqian Zhou | 2024/1/24 |
MouSi: Poly-Visual-Expert Vision-Language Models | arXiv preprint arXiv:2401.17221 | Xiaoran Fan, Tao Ji, Changhao Jiang, Shuo Li, Senjie Jin | 2024/1/30 |
SpeechAlign: Aligning Speech Generation to Human Preferences | arXiv preprint arXiv:2404.05600 | Dong Zhang, Zhaowei Li, Shimin Li, Xin Zhang, Pengyu Wang | 2024/4/8 |
Benchmarking Hallucination in Large Language Models based on Unanswerable Math Word Problem | arXiv preprint arXiv:2403.03558 | Yuhong Sun, Zhangyue Yin, Qipeng Guo, Jiawen Wu, Xipeng Qiu | 2024/3/6 |
Balanced Data Sampling for Language Model Training with Clustering | arXiv preprint arXiv:2402.14526 | Yunfan Shao, Linyang Li, Zhaoye Fei, Hang Yan, Dahua Lin | 2024/2/22 |
AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling | arXiv preprint arXiv:2402.12226 | Jun Zhan, Junqi Dai, Jiasheng Ye, Yunhua Zhou, Dong Zhang | 2024/2/19 |
InferAligner: Inference-Time Alignment for Harmlessness through Cross-Model Guidance | arXiv preprint arXiv:2401.11206 | Pengyu Wang, Dong Zhang, Linyang Li, Chenkun Tan, Xinghao Wang | 2024/1/20 |
F-Eval: Assessing Fundamental Abilities with Refined Evaluation Methods | arXiv preprint arXiv:2401.14869 | Yu Sun, Keyu Chen, Shujie Wang, Qipeng Guo, Hang Yan | 2024/1/26 |
Calibrating the Confidence of Large Language Models by Eliciting Fidelity | arXiv preprint arXiv:2404.02655 | Mozhi Zhang, Mianqiu Huang, Rundong Shi, Linsen Guo, Chong Peng | 2024/4/3 |
In-Memory Learning: A Declarative Learning Framework for Large Language Models | arXiv preprint arXiv:2403.02757 | Bo Wang, Tianxiang Sun, Hang Yan, Siyin Wang, Qingyuan Cheng | 2024/3/5 |
Hint-before-Solving Prompting: Guiding LLMs to Effectively Utilize Encoded Knowledge | arXiv preprint arXiv:2402.14310 | Jinlan Fu, Shenzhen Huangfu, Hang Yan, See-Kiong Ng, Xipeng Qiu | 2024/2/22 |
Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic Interpretability: A Case Study on Othello-GPT | arXiv preprint arXiv:2402.12201 | Zhengfu He, Xuyang Ge, Qiong Tang, Tianxiang Sun, Qinyuan Cheng | 2024/2/19 |
Secrets of RLHF in Large Language Models Part II: Reward Modeling | arXiv preprint arXiv:2401.06080 | Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou | 2024/1/11 |
Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora | arXiv preprint arXiv:2401.14624 | Zhaoye Fei, Yunfan Shao, Linyang Li, Zhiyuan Zeng, Hang Yan | 2024/1/26 |
Turn Waste into Worth: Rectifying Top-k Router of MoE | arXiv preprint arXiv:2402.12399 | Zhiyuan Zeng, Qipeng Guo, Zhaoye Fei, Zhangyue Yin, Yunhua Zhou | 2024/2/17 |
InternLM2 Technical Report | arXiv preprint arXiv:2403.17297 | Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen | 2024/3/26 |
Training-Free Long-Context Scaling of Large Language Models | arXiv preprint arXiv:2402.17463 | Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu | 2024/2/27 |