Haowei Lin

E-mail: linhaowei (at) pku (dot) edu (dot) cn
I am Haowei Lin (林昊苇), a third-year Ph.D. student at the Institute for Artificial Intelligence, Peking University, co-advised by Prof. Yitao Liang and Prof. Jianzhu Ma.
I received my Bachelor’s degree in Artificial Intelligence from Yuanpei College, Peking University, where I was fortunate to work with Prof. Bing Liu on OOD detection, continual learning, and NLP. We were the first to propose the task of continual pre-training for LLMs (EMNLP 2022, ICLR 2023) and the first to apply OOD detection methods to continual learning (EMNLP 2023, ICLR 2024).
I am passionate about designing next-generation digital AI that deeply integrates into the real world. Currently, I focus on the foundations and applications of Generative Foundation Models (GFMs):
- Unification for Multimodality. I aim to develop techniques that apply to both diffusion (including flow-matching) and autoregressive architectures, and across modalities including language, video, MDPs, 3D, and molecules. For foundation models, I also study their scaling laws and training-free guidance.
- AI for Scientific Discovery. I am interested in applying GFMs to create superhuman intelligence for complex reasoning and open-world gaming, and to discover new laws in space physics and neural network scaling.
I am a member of Team CraftJarvis, which is dedicated to creating generalist agents for open-world environments. Outside of my professional interests, I enjoy engaging in music-related activities, including singing, playing the guitar, and participating in choirs.
If you’re interested in working with me on GFMs / AI Scientist, please contact me through e-mail.
news
May 21, 2025 | I’m contributing to the open-source project OpenEvolve, a community implementation of DeepMind’s AlphaEvolve, a scientific discovery agent designed to develop better algorithms for open problems. Check out its performance on Symbolic Regression benchmarks!
Dec 01, 2024 | Talk on “Unified Training-Free Guidance for Diffusion Models” at the NeurIPS 2024 paper-sharing session hosted by 机器之心 (Synced). [video]
Sep 26, 2024 | TFG and OmniJARVIS have been accepted at NeurIPS 2024! In TFG, we present a unified training-free guidance method for diffusion models, evaluated across 16 tasks spanning image, audio, and geometry domains. OmniJARVIS, developed in collaboration with Team CraftJARVIS, is an end-to-end VLA (Vision-Language-Action) agent for open-world Minecraft.
May 03, 2024 | I will present our new paper Selecting Large Language Model to Fine-tune via Rectified Scaling Law at the ME-FoMo workshop at ICLR 2024. This paper was selected for an oral presentation and was recently accepted by ICML 2024. See you in Vienna!
Jan 16, 2024 | Our paper on continual learning has been accepted at ICLR 2024! We propose a theoretically principled and empirically effective method for CL. Feel free to explore our code and paper. This research was conducted during my undergraduate studies under the guidance of Prof. Bing Liu.