
Conversation

@HennerM (Contributor) commented on Jan 20, 2024

I am trying the llm-vscode extension with llm-ls against a locally hosted endpoint (running a custom fine-tuned model), but the extension still warns that I might get rate limited by Hugging Face.

Since inference doesn't run on a Hugging Face server, this warning is unnecessary.

@McPatate (Member) left a comment

Since we now have the adaptor setting, I'd rather check its value than doing so w/ the URL. Wdyt?

@HennerM (Contributor, Author) commented on Feb 4, 2024

> Since we now have the adaptor setting, I'd rather check its value than doing so w/ the URL. Wdyt?

Yes, good idea. I changed this to just check the adaptor value now.
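
For context, the change amounts to gating the rate-limit warning on the configured adaptor rather than on the endpoint URL. Below is a minimal TypeScript sketch of that idea; the names (`LlmConfig`, `adaptor`, `shouldShowRateLimitWarning`) are illustrative assumptions, not the extension's actual identifiers:

```typescript
// Hypothetical sketch: only warn about Hugging Face rate limits when the
// "huggingface" adaptor is selected, regardless of what the URL looks like.
interface LlmConfig {
  adaptor: string; // e.g. "huggingface", "tgi", "ollama" (assumed values)
  url?: string;    // custom endpoint, if any
}

function shouldShowRateLimitWarning(config: LlmConfig, apiToken?: string): boolean {
  // Before: the check inspected config.url for a Hugging Face hostname.
  // After: the adaptor value alone decides whether the warning applies.
  return config.adaptor === "huggingface" && !apiToken;
}
```

Keying the check off the adaptor avoids false positives for locally hosted endpoints, which is the issue reported above.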

@McPatate (Member) left a comment


Small nits on naming, otherwise we should be good to go!

Co-authored-by: Luc Georges <McPatate@users.noreply.github.com>
@McPatate merged commit 1499fd6 into huggingface:main on Feb 5, 2024