
Hi, I'm Liang Chen

I'm a third-year Ph.D. student in the Department of Systems Engineering and Engineering Management at The Chinese University of Hong Kong (CUHK), advised by Prof. Kam-Fai Wong.

My research focuses on post-training techniques for large language models (LLMs), aiming to make them more reliable, robust, and safe. Recent projects include:

  • VAA (ICML 2025): A safety alignment method that identifies and reinforces vulnerable examples to improve safety retention.
  • PEARL (ICLR 2025): An instruction tuning method for improving LLM robustness in in-context learning (ICL) and retrieval-augmented generation (RAG).
  • WatME (ACL 2024): A decoding-time watermarking method that leverages lexical redundancy for lossless embedding.
  • CONNER (EMNLP 2023): An evaluation framework for assessing LLMs as generative search engines.

I'm currently exploring reinforcement learning (RL) and large reasoning models (LRMs).

Contact

Email: lchen@se.cuhk.edu.hk
Homepage: https://chanliang.github.io

Pinned

  1. VAA (Public)

    [ICML 2025] Vulnerability-Aware Alignment: Mitigating Uneven Forgetting in Harmful Fine-Tuning

    Python · 12 stars

  2. PEARL (Public)

    [ICLR 2025] PEARL: Towards Permutation-Resilient LLMs

    Python · 9 stars

  3. WatME (Public)

    [ACL 2024] WatME: Towards Lossless Watermarking Through Lexical Redundancy

    Python · 9 stars

  4. CONNER (Public)

    [EMNLP 2023] Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators

    Python · 32 stars · 2 forks

  5. ORIG (Public)

    [ACL 2023 Findings] Towards Robust Personalized Dialogue Generation via Order-Insensitive Representation Regularization

    Python · 17 stars

  6. MLGroupJLU/LLM-eval-survey (Public)

    The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".

    1.6k stars · 99 forks