I'm a third-year Ph.D. student in the Department of Systems Engineering and Engineering Management at The Chinese University of Hong Kong (CUHK), advised by Prof. Kam-Fai Wong.
My research focuses on post-training techniques for large language models (LLMs), aiming to make LLMs more reliable, robust, and safe. Recent projects include:
- VAA (ICML 2025): A safety alignment method that identifies and reinforces vulnerable examples to improve safety retention.
- PEARL (ICLR 2025): An instruction tuning method for improving LLM robustness in in-context learning (ICL) and retrieval-augmented generation (RAG).
- WatME (ACL 2024): A decoding-time watermarking method that leverages lexical redundancy for lossless embedding.
- CONNER (EMNLP 2023): An evaluation framework for assessing LLMs as generative search engines.
I'm currently exploring reinforcement learning (RL) and large reasoning models (LRMs).
Email: lchen@se.cuhk.edu.hk
Homepage: https://chanliang.github.io