Haowei Lin(林昊苇)


E-mail: linhaowei (at) pku (dot) edu (dot) cn

I am a second-year Ph.D. student at the Institute for Artificial Intelligence, Peking University, co-advised by Prof. Yitao Liang and Prof. Jianzhu Ma.

I received my Bachelor’s degree in Artificial Intelligence from Yuanpei College, Peking University, where I was fortunate to work with Prof. Bing Liu on OOD detection, continual learning, and NLP. We were the first to propose the task of continual pre-training (EMNLP 2022, ICLR 2023) and to study the theoretical equivalence between OOD detection and continual learning (EMNLP 2023, ICLR 2024).

I am passionate about designing next-generation AI that integrates deeply into the real world. My primary research focus is machine learning, with specific interests in generative foundation models (LLM scaling laws, 3D autoregressive models, training-free diffusion guidance, discrete flow matching). Currently, I am working on LLMs for scientific discovery (e.g., physical law discovery) and complex reasoning (open-world game agents, multi-turn reasoning).

I am a member of Team CraftJarvis, which is dedicated to creating generalist agents for open-world environments. Outside of my professional interests, I enjoy engaging in music-related activities, including singing, playing the guitar, and participating in choirs.

News

May 21, 2025 I’m contributing to the open-source project OpenEvolve, a community implementation of AlphaEvolve, DeepMind’s scientific discovery agent designed to develop better algorithms for open problems. Check out its performance on Symbolic Regression benchmarks!
Dec 1, 2024 Gave a talk on “Unified Training-Free Guidance for Diffusion Models” at a NeurIPS 2024 paper-sharing session hosted by 机器之心 (Synced). [video]
Sep 26, 2024 TFG and OmniJARVIS have been accepted at NeurIPS 2024! In TFG, we present a unified training-free guidance method for diffusion models, evaluated across 16 tasks spanning the image, audio, and geometry domains. OmniJARVIS, developed in collaboration with Team CraftJarvis, is an end-to-end VLA (Vision-Language-Action) agent for open-world Minecraft.
May 3, 2024 I will present our new paper Selecting Large Language Model to Fine-tune via Rectified Scaling Law at the ME-FoMo workshop at ICLR 2024. The paper was selected for an oral presentation and was recently accepted by ICML 2024. See you in Vienna!
Jan 16, 2024 Our paper on continual learning has been accepted at ICLR 2024! We propose a theoretically principled and empirically effective method for CL. Feel free to explore our code and paper. This research was conducted during my undergraduate studies under the guidance of Prof. Bing Liu.

Selected Publications

For a complete list of publications, please refer to my Google Scholar page.

(*: Equal Contribution)

MCU: An Evaluation Framework for Open-Ended Game Agents
Xinyue Zheng*, Haowei Lin*, Kaichen He, Zihao Wang, Zilong Zheng, Qiang Fu, Haobo Fu, Yitao Liang
In ICML 2025 (Spotlight).
Peptide Design through Binding Interface Mimicry
Xiangzhe Kong, Rui Jiao, Haowei Lin, Ruihan Guo, Wenbing Huang, Wei-Ying Ma, Zihua Wang, Yang Liu, Jianzhu Ma
In Nature Biomedical Engineering.
Generative Evaluation of Complex Reasoning in Large Language Models
Haowei Lin, Xiangyu Wang, Ruilin Yan, Baizhou Huang, Haotian Ye, Jianhua Zhu, Zihao Wang, James Zou, Jianzhu Ma, Yitao Liang
In arXiv:2504.02810.
Uni-3DAR: Unified 3D Generation and Understanding via Autoregression on Compressed Spatial Tokens
Shuqi Lu*, Haowei Lin*, Lin Yao*, Zhifeng Gao, Xiaohong Ji, Yitao Liang, Weinan E, Linfeng Zhang, Guolin Ke
In arXiv:2503.16278.
A Neural Symbolic Model for Space Physics
Jie Ying*, Haowei Lin*, Chao Yue*, Yajie Chen, Chao Xiao, Quanqi Shi, Yitao Liang, Shing-Tung Yau, Yuan Zhou, Jianzhu Ma
In arXiv preprint.
TFG-Flow: Training-free Guidance in Multimodal Generative Flow
Haowei Lin*, Shanda Li*, Haotian Ye, Yiming Yang, Stefano Ermon, Yitao Liang, Jianzhu Ma
In ICLR 2025.
TFG: Unified Training-Free Guidance for Diffusion Models
Haotian Ye*, Haowei Lin*, Jiaqi Han*, Minkai Xu, Sheng Liu, Yitao Liang, Jianzhu Ma, James Zou, Stefano Ermon
In NeurIPS 2024 (Spotlight).
Group retrosynthetic planning as neurosymbolic programming
Xuefeng Zhang, Haowei Lin, Muhan Zhang, Yuan Zhou, and Jianzhu Ma
In Nature Communications 2024.
Selecting Large Language Model to Fine-tune via Rectified Scaling Law
Haowei Lin*, Baizhou Huang*, Haotian Ye*, Qinyue Chen, Zihao Wang, Sujian Li, Jianzhu Ma, Xiaojun Wan, James Zou, Yitao Liang
In ICML 2024 (also an oral presentation in ME-FoMo 2024).
Class Incremental Learning via Likelihood Ratio Based Task Prediction
Haowei Lin, Yijia Shao, Weinan Qian, Ningxin Pan, Yiduo Guo, Bing Liu
In ICLR 2024.
Continual Pre-training of Language Models
Zixuan Ke*, Yijia Shao*, Haowei Lin*, Tatsuya Konishi, Gyuhak Kim, Bing Liu
In ICLR 2023.
FLatS: Principled Out-of-Distribution Detection with Feature-Based Likelihood Ratio Score
Haowei Lin, Yuntian Gu
In EMNLP 2023.
Continual Training of Language Models for Few-Shot Learning
Zixuan Ke, Haowei Lin, Yijia Shao, Hu Xu, Lei Shu, Bing Liu
In EMNLP 2022.

Selected Awards

  • Outstanding Reviewer, ICML 2022.
  • National Scholarship (top 1%), 2022.
  • First Prize, Peking University Scholarship (top 2%), 2020.
  • Merit Student Pacesetter (top 2%), 2020.
  • Huatai Science and Technology Scholarship, 2021.
  • First Prize, the 12th and 13th National College Students’ Mathematics Competition, 2020 & 2021.
  • Morality Scholarship sponsored by Zhongying Tang, 2019–2023.