👻 About Me

Zheng Chu is a 4th-year doctoral student at the Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology (HIT-SCIR), supervised by Prof. Bing Qin and Prof. Ming Liu. He is expected to graduate in 2027. He is currently a REDStar Intern with the Post-Training Team at Xiaohongshu Hi Lab, focusing on agentic post-training and deep research agents.

🚀 He is actively seeking internship opportunities in 2026 and full-time industry opportunities for Spring/Fall 2027.

His research focuses on Large Language Models and LLM-based Agents:

  • Deep-Research Agents
  • Long-horizon Reasoning
  • Agentic Reinforcement Learning
  • Complex Task Synthesis

📰 News

  1. We released REDSearcher: A Scalable and Cost-Efficient Framework for Long-Horizon Search Agents. 🏠[Project] 💻[GitHub] 🤗[Collections]
  2. Joined the Post-Training Team at Xiaohongshu Hi Lab, focusing on agentic post-training and deep research agents.

📝 Publications

Auto-filled by CodeX from Google Scholar profile data, synced on 2026-03-17.

2026

  • Chu, Zheng, Wang, Xiao, Hong, Jack, Fan, Huiming, Huang, Yuqi, Yang, Yue, Xu, Guohai, Zhao, Chenxiao, Xiang, Cheng, Hu, Shengchao, and others. “REDSearcher: A Scalable and Cost-Efficient Framework for Long-Horizon Search Agents.” 🔗[Paper] [Search Agent]

  • Huang, Wenxuan, Zeng, Yu, Wang, Qiuchen, Fang, Zhen, Cao, Shaosheng, Chu, Zheng, He, Xinyang, Chen, Shuang, Yin, Zhenfei, Chen, Lin, and others. “Vision-DeepResearch: Incentivizing DeepResearch Capability in Multimodal Large Language Models.” 🔗[Paper] [Search Agent]

  • Tian, Xueyun, Ma, Minghua, Xu, Bingbing, Lyu, Nuoyan, Li, Wei, Dong, Heng, Chu, Zheng, Wang, Yuanzhuo, and Shen, Huawei. “Learning from Mistakes: Negative Reasoning Samples Enhance Out-of-Domain Generalization.” 🔗[Paper] [Reasoning]

  • Du, Yexing, Pan, Youcheng, Wang, Zekun, Chu, Zheng, Huang, Yichong, Liu, Kaiyuan, Yang, Bo, Xiang, Yang, Liu, Ming, and Qin, Bing. “Scalable Multilingual Multimodal Machine Translation with Speech-Text Fusion.” 🔗[Scholar]

2025

  • Chu, Zheng, Fan, Huiming, Chen, Jingchang, Wang, Qianyu, Yang, Mingda, Liang, Jiafeng, Wang, Zhongjie, Li, Hao, Tang, Guoan, Liu, Ming, and Qin, Bing. “Self-critique guided iterative reasoning for multi-hop question answering.” 🔗[Paper] [Search Agent]

  • Chu, Zheng, Chen, Jingchang, Wang, Zhongjie, Tang, Guo, Chen, Qianglong, Liu, Ming, and Qin, Bing. “Towards faithful multi-step reasoning through fine-grained causal-aware attribution reasoning distillation.” 🔗[Paper] [Reasoning]

  • Li, Hao, Chu, Zheng, Liang, Jiafeng, Wang, Yuxin, Tang, Wei, Mao, Xun, Lv, Kai, Chen, Lei, Liu, Ming, and Qin, Bing. “PRAE: Progressive Retrieval-Augmented Dynamic Knowledge Editing for Large Language Models.” 🔗[Paper] [Multi-hop RAG]

  • Tang, Guo, Chu, Zheng, Zheng, Wenxiang, Xiang, Jianbin, Li, Yizhuo, Zhang, Weihao, Liu, Ming, and Qin, Bing. “AnRe: Analogical Replay for Temporal Knowledge Graph Forecasting.” 🔗[Paper]

  • Du, Yexing, Liu, Kaiyuan, Pan, Youcheng, Chu, Zheng, Yang, Bo, Feng, Xiaocheng, Xiang, Yang, and Liu, Ming. “CCFQA: A Benchmark for Cross-Lingual and Cross-Modal Speech and Text Factuality Evaluation.” 🔗[Paper] [Benchmark]

2024

  • Chu, Zheng, Chen, Jingchang, Chen, Qianglong, Yu, Weijiang, He, Tao, Wang, Haotian, Peng, Weihua, Liu, Ming, Qin, Bing, and Liu, Zhiyuan. “Navigate through Enigmatic Labyrinth A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future.” 🔗[Paper] [Reasoning]

  • Chu, Zheng, Chen, Jingchang, Chen, Qianglong, Yu, Weijiang, Wang, Haotian, Liu, Ming, and Qin, Bing. “TimeBench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models.” 🔗[Paper] [Benchmark]

  • Chu, Zheng, Chen, Jingchang, Chen, Qianglong, Wang, Haotian, Zhu, Kun, Du, Xiyuan, Yu, Weijiang, Liu, Ming, and Qin, Bing. “BeamAggR: Beam Aggregation Reasoning over Multi-source Knowledge for Multi-hop Question Answering.” 🔗[Paper] [Multi-hop RAG]

  • Chen, Jingchang, Tang, Hongxuan, Chu, Zheng, Chen, Qianglong, Wang, Zekun, Liu, Ming, and Qin, Bing. “Divide-and-conquer meets consensus: Unleashing the power of functions in code generation.” 🔗[Paper] [Code Generation]

  • Tang, Guo, Chu, Zheng, Zheng, Wenxiang, Liu, Ming, and Qin, Bing. “Towards benchmarking situational awareness of large language models: Comprehensive benchmark, evaluation and analysis.” 🔗[Paper] [Benchmark]

2023

  • Chu, Zheng, Wang, Zekun, Liang, Jiafeng, Liu, Ming, and Qin, Bing. “MTGER: Multi-view Temporal Graph Enhanced Temporal Reasoning over Time-Involved Document.” 🔗[Paper] [Temporal Reasoning]

  • Wang, Haotian, Du, Xiyuan, Yu, Weijiang, Chen, Qianglong, Zhu, Kun, Chu, Zheng, Yan, Lian, and Guan, Yi. “Learning to Break: Knowledge-Enhanced Reasoning in Multi-Agent Debate System.” 🔗[Paper] [RAG]

2022

  • Chu, Zheng, Yang, Ziqing, Cui, Yiming, Chen, Zhigang, and Liu, Ming. “HIT at SemEval-2022 Task 2: Pre-trained Language Model for Idioms Detection.” 🔗[Paper] [SemEval]

🔍 Academic Services

Served as a reviewer for NeurIPS, ICML, ICLR, and ACL ARR.

🎓 Education

  • 2022.09 - 2027.06 (expected), Ph.D. in Computer Science, Faculty of Computing, Harbin Institute of Technology.
  • 2018.09 - 2022.06, B.S. in Software Engineering, Faculty of Computing, Harbin Institute of Technology.

🏭 Internships

  • 2025.09 - present, REDStar Intern, Xiaohongshu Hi Lab, Beijing, China.
  • 2023.08 - 2024.10, Research Intern, Huawei Co., Ltd., Shenzhen, China.
  • 2021.10 - 2022.04, Research Intern, HFL Joint Lab, iFLYTEK, Beijing, China.