[Liger] liger DPO support #2568
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
The Liger loss isn't compatible with reference precomputing, right? If so, we could add a warning or an error.
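For reference, a minimal sketch of such a guard, assuming the config exposes a `use_liger_loss` flag (this PR's addition) alongside the existing `precompute_ref_log_probs` option; exact names may differ in the final diff:

```python
def _validate_liger_config(args) -> None:
    # Hypothetical guard (flag names assumed from the PR discussion, not confirmed):
    # the fused Liger loss computes reference log-probs inside the kernel, so
    # precomputed reference log-probs cannot be used alongside it.
    if getattr(args, "use_liger_loss", False) and getattr(args, "precompute_ref_log_probs", False):
        raise ValueError(
            "`use_liger_loss=True` is not compatible with `precompute_ref_log_probs=True`."
        )
```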
This PR needs to use `_FSDPForwardRedirection` or another solution to work with FSDP correctly.
@VProv, at the moment I'm having issues getting the same outputs/metrics with and without Liger in the trainer.
What setup are you using?
Hi, I am working on fixing the output/metrics issue.
@kashif @qgallouedec, can you please review the following PR, which fixes the output/metrics issue? Thanks :)
Hi, thanks for sharing your work! Can I use your code with DeepSpeed ZeRO 3? I tried running it with that setup, but it doesn't seem to be working. Based on my analysis of the error log, I think it's related to parameter partitioning.

Continuing my analysis, I can confirm that it's definitely connected to DeepSpeed ZeRO 3. When I switched to stage 2, it ran smoothly without any issues.
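For anyone hitting the same error, a minimal sketch of the reported workaround, expressed as a DeepSpeed config dict (values illustrative only, not from this PR):

```python
# Illustrative DeepSpeed config: ZeRO stage 2 keeps parameters unpartitioned,
# which reportedly avoids the failure seen under stage 3 with the Liger loss.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "zero_optimization": {
        "stage": 2,  # stage 3 (parameter partitioning) reportedly fails here
    },
    "bf16": {"enabled": True},
}
```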
Thanks @hanbyul-kim for the report.
@kashif, just wanted to circle back and see if we can merge this now? We wanted to try it out internally at LinkedIn.
trl/trainer/dpo_trainer.py (Outdated)

    if is_wandb_available():
        import wandb


    def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
pad_token_id isn't used?
    if is_wandb_available():
        import wandb


    def shift_tokens_right(input_ids: torch.Tensor, decoder_start_token_id: int) -> torch.Tensor:
        """Shift input ids one token to the right, and pad with pad_token_id"""
this docstring isn't accurate, I think
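For comparison, a hedged sketch of what the helper and its docstring could look like once `pad_token_id` is dropped, mirroring the standard implementation of this helper in transformers (not necessarily the final code in this PR):

```python
import torch


def shift_tokens_right(input_ids: torch.Tensor, decoder_start_token_id: int) -> torch.Tensor:
    """Shift input ids one token to the right, prepending `decoder_start_token_id`."""
    shifted = input_ids.new_zeros(input_ids.shape)
    shifted[:, 1:] = input_ids[:, :-1].clone()  # copy tokens one position right
    shifted[:, 0] = decoder_start_token_id      # first position gets the start token
    return shifted
```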
What does this PR do?
Adds support for Liger-Kernel losses to the DPO trainer; see the usage sketch below.
Needs: linkedin/Liger-Kernel#521
PEFT support: #3065
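A hedged usage sketch, assuming the flag lands as `use_liger_loss` on `DPOConfig` (name taken from the PR discussion; the model and dataset below are illustrative examples from the TRL docs, not part of this PR):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# `use_liger_loss` is assumed from the PR discussion; it enables the fused
# Liger-Kernel DPO loss in place of the default PyTorch implementation.
config = DPOConfig(output_dir="dpo-liger", use_liger_loss=True)
trainer = DPOTrainer(model=model, args=config, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```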