Jun Bai (白骏)
Assistant Researcher
BIGAI NLCo Lab
About Me

I received my Ph.D. from the School of Computer Science and Engineering at Beihang University, China, where I was advised by Prof. Wenge Rong and Prof. Chuantao Yin.

My research primarily focuses on building Trustworthy AI systems. My current research themes include:

  • Investigating and Improving the Faithfulness of LLMs.
  • Exploring and Enhancing the CoT Monitorability of LRMs.
  • Understanding the Internal Workings of LLMs through Mechanistic Interpretability.
Education
  • Beihang University
    Ph.D. in Computer Science
    Sep. 2020 - Nov. 2024
Employment
  • BIGAI NLCo Lab
    Assistant Researcher
    Dec. 2024 - present
News
2025
Dec 09: 🚀 We are excited to announce the release of the Native Parallel Reasoner.
Aug 20: RouterLens, TongSearch-QR, and CogAtom have been accepted by EMNLP 2025.
May 15: CLG has been accepted by ACL 2025.
Selected Publications
Native Parallel Reasoner: Reasoning in Parallelism via Self-Distilled Reinforcement Learning

Tong Wu*, Yang Liu*, Jun Bai*, Zixia Jia, Shuyi Zhang, Ziyong Lin, Yanting Wang, Song-Chun Zhu, Zilong Zheng (* equal contribution)

Preprint 2025

Understanding and Leveraging the Expert Specialization of Context Faithfulness in Mixture-of-Experts LLMs

Jun Bai, Minghao Tong, Yang Liu, Zixia Jia, Zilong Zheng

EMNLP 2025 (Top 2%)

Reinforced Query Reasoners for Reasoning-intensive Retrieval Tasks

Xubo Qin, Jun Bai, Jiaqi Li, Zixia Jia, Zilong Zheng

EMNLP 2025

Leveraging Estimated Transferability over Human Intuition for Model Selection in Text Ranking

Jun Bai, Zhuofan Chen, Zhenzi Li, Hanhua Hong, Jianfei Zhang, Chen Li, Chenghua Lin, Wenge Rong

EMNLP 2024

How to Determine the Most Powerful Pre-trained Language Model without Brute Force Fine-tuning? An Empirical Survey

Jun Bai, Xiaofeng Zhang, Chen Li, Hanhua Hong, Xi Xu, Chenghua Lin, Wenge Rong

EMNLP Findings 2023
