Haowei Lin(林昊苇)


E-mail: linhaowei (at) pku (dot) edu (dot) cn

I am a first-year Ph.D. student at the Institute for Artificial Intelligence, Peking University, co-advised by Prof. Jianzhu Ma and Prof. Yitao Liang.

I received my Bachelor’s degree in Artificial Intelligence from Yuanpei College, Peking University, where I was fortunate to work with Prof. Bing Liu on out-of-distribution (OOD) detection, continual learning, and NLP.

My primary research focus is machine learning, with interests that include (but are not limited to!):

  • Understanding foundation models (e.g., LLMs, diffusion models) from the perspectives of pre-training, adaptation, and exploitation.
  • Developing advanced AI systems (e.g., drug design systems, open-world agents) aimed at augmenting human capabilities in real-world scenarios.

If you’d like to chat with me about research or anything, please feel free to reach out via email!


May 3, 2024 I will present our new paper Selecting Large Language Model to Fine-tune via Rectified Scaling Law at the ME-FoMo workshop at ICLR 2024. The paper was selected for an oral presentation and has recently been accepted to ICML 2024. See you in Vienna!
Jan 16, 2024 Our paper on continual learning has been accepted at ICLR 2024! We propose a theoretically principled and empirically effective method for CL. Feel free to explore our code and paper. This research was conducted during my undergraduate studies under the guidance of Prof. Bing Liu.
Oct 10, 2023 A short paper on OOD detection has been accepted to EMNLP 2023.
Aug 8, 2023 Talk on “Continual Learning of Language Models” at group seminar, Peking University. [slides]
Feb 18, 2023 Our paper studying the continual pre-training of LMs has been accepted to ICLR 2023! We have also launched an extensible PyTorch-based codebase for this task, featuring more than 15 SoTA baselines.


(*: Equal Contribution)

RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation
Zihao Wang, Anji Liu, Haowei Lin, Jiaqi Li, Xiaojian Ma, Yitao Liang
arXiv preprint.
Selecting Large Language Model to Fine-tune via Rectified Scaling Law
Haowei Lin*, Baizhou Huang*, Haotian Ye*, Qinyue Chen, Zihao Wang, Sujian Li, Jianzhu Ma, Xiaojun Wan, James Zou, Yitao Liang
In ICML 2024 (also an oral presentation in ME-FoMo 2024).
Class Incremental Learning via Likelihood Ratio Based Task Prediction
Haowei Lin, Yijia Shao, Weinan Qian, Ningxin Pan, Yiduo Guo, Bing Liu
In ICLR 2024.
Continual Pre-training of Language Models
Zixuan Ke*, Yijia Shao*, Haowei Lin*, Tatsuya Konishi, Gyuhak Kim, Bing Liu
In ICLR 2023.
MCU: A Task-centric Framework for Open-ended Agent Evaluation in Minecraft
Haowei Lin, Zihao Wang, Jianzhu Ma, Yitao Liang
In ALOE 2023.
JARVIS-1: Open-world Multi-task Agents with Memory-Augmented Multimodal Language Models
Zihao Wang, Shaofei Cai, Anji Liu, Yonggang Jin, Jinbing Hou, Bowei Zhang, Haowei Lin, Zhaofeng He, Zilong Zheng, Yaodong Yang, Xiaojian Ma, Yitao Liang
arXiv preprint.
FLatS: Principled Out-of-Distribution Detection with Feature-Based Likelihood Ratio Score
Haowei Lin, Yuntian Gu
In EMNLP 2023.
Adapting a Language Model While Preserving its General Knowledge
Zixuan Ke, Yijia Shao, Haowei Lin, Hu Xu, Lei Shu, Bing Liu
In EMNLP 2022.
Continual Training of Language Models for Few-Shot Learning
Zixuan Ke, Haowei Lin, Yijia Shao, Hu Xu, Lei Shu, Bing Liu
In EMNLP 2022.
CMG: A Class-Mixed Generation Approach to Out-of-Distribution Detection
Mengyu Wang*, Yijia Shao*, Haowei Lin, Wenpeng Hu, Bing Liu
In ECML-PKDD 2022.

Selected Awards

  • Outstanding Reviewer, ICML 2022.
  • National Scholarship (top 1%), 2022.
  • First Prize, Peking University Scholarship (top 2%), 2020.
  • Merit Student Pacesetter (top 2%), 2020.
  • Huatai Science and Technology Scholarship, 2021.
  • First Prize, the 12th and 13th National College Students’ Mathematics Competition, 2020 & 2021.
  • Morality Scholarship sponsored by Zhongying Tang, 2019-2023.