
Conversation

sergiopaniego
Member

What does this PR do?

Fixes #2136.

This PR adds standalone support for Molmo models. It may later benefit from generalization to be compatible with sft_vlm.py.

This notebook contains a reproducible version, both by running the script and by using the code directly.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines.
  • Did you write any new necessary tests?

Who can review?

@lewtun @edbeeching @qgallouedec

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@edbeeching
Collaborator

Hi @sergiopaniego, thanks for implementing this. Could you run make precommit to format the code so the quality tests pass? (You may have to pip install pre-commit.)

We are discussing internally how feasible it is to harmonize this script with the other VLM training scripts; I will let you know when we have a conclusion.

@sergiopaniego
Member Author

Updated!

Any updates on the harmonization discussion? I’m happy to make any modifications needed! 😊

@mshuffett

mshuffett commented Nov 4, 2024

@sergiopaniego so is this working in theory? Also, it's OOM'ing for me: it needs 50 GB and my A100 only has around 40 GB. Is there a lever I can pull to decrease the memory? Why does it need so much, considering it is doing a LoRA?

Is it possible to set this up to train on multiple GPUs?

@sergiopaniego
Member Author

sergiopaniego commented Nov 17, 2024

@sergiopaniego so is this working in theory? Also, it's OOM'ing for me: it needs 50 GB and my A100 only has around 40 GB. Is there a lever I can pull to decrease the memory? Why does it need so much, considering it is doing a LoRA?

Is it possible to set this up to train on multiple GPUs?

Sorry for the late response @mshuffett. It still needs some polishing. While testing it, it seems something is still missing from the artifacts shared for the model. You can see more details about it in the README. For example, since gradient checkpointing is disabled, memory consumption increases a lot.
The model is also not yet merged into the official transformers repo: huggingface/transformers#33962
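As a rough illustration of why memory can exceed 40 GB even with LoRA, here is a back-of-envelope estimate. All the numbers (parameter count, layer count, sequence length, tensors per layer) are assumptions for the sake of the sketch, not measurements of the actual Molmo script; the point is that with gradient checkpointing disabled, activations rather than trainable parameters dominate.

```python
# Illustrative memory estimate for LoRA fine-tuning without gradient
# checkpointing. All sizes below are assumed, not measured.
GB = 1024**3

params = 7e9                       # assume a ~7B-parameter model
weight_bytes = params * 2          # bf16/fp16 weights: 2 bytes per param

# With LoRA, only the adapter weights need gradients and optimizer states.
lora_params = 20e6                 # assume ~20M trainable adapter params
# bf16 weight + fp32 grad + 2 fp32 Adam moments ≈ 14 bytes per trainable param
trainable_bytes = lora_params * (2 + 4 + 4 + 4)

# Activations must be kept for the backward pass when checkpointing is off.
layers, seq_len, hidden, batch = 28, 2048, 4096, 4   # assumed shapes
# assume ~16 saved activation tensors per layer, each bf16 (2 bytes)
act_bytes_per_layer = batch * seq_len * hidden * 2 * 16
activation_bytes = layers * act_bytes_per_layer

total = weight_bytes + trainable_bytes + activation_bytes
print(f"weights     ≈ {weight_bytes / GB:.1f} GB")
print(f"LoRA states ≈ {trainable_bytes / GB:.2f} GB")
print(f"activations ≈ {activation_bytes / GB:.1f} GB")
print(f"total       ≈ {total / GB:.1f} GB")
```

Under these assumptions the activations alone come to roughly 28 GB, which is why enabling gradient checkpointing (recomputing activations during the backward pass) is the main lever for fitting on a 40 GB card, while the LoRA optimizer states are negligible.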

@sergiopaniego
Member Author

In case anybody is looking for an updated script: since the transformers PR is close to being merged, here are some resources 😄

  • SFT fine-tuning Colab using the HF-converted version of the model, thanks to @smellslikeml. I've also generated an updated Colab.
  • Gist for the updated sft_vlm_molmo.py script. The transformers PR code is currently required for this to work.
  • SFT model showing that the pipeline is working.

@qgallouedec
Member

There has been no activity on this branch for several months, so I'm closing this PR. Please feel free to reopen a PR if there is new activity.

Successfully merging this pull request may close these issues.

[SFT VLM] Add support for Molmo models