openai model in hugging face correct maxtokens #5371
Conversation
🦋 Changeset detected. Latest commit: cf6428e. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Pull Request Overview
This PR adds configuration for two OpenAI open-weight reasoning models in the Hugging Face models registry, specifically "openai/gpt-oss-120b" and "openai/gpt-oss-20b". Both models are configured with identical token limits and capabilities.
- Adds two new OpenAI model configurations to the huggingFaceModels object
- Sets maxTokens and contextWindow to 32,767 for both models
- Configures models without image support or prompt caching capabilities
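Based on the overview above, the new `huggingFaceModels` entries plausibly look like the following sketch. The `ModelInfo` interface and its field names are assumptions modeled on the capabilities listed (token limits, image support, prompt caching); the project's actual type in `api.ts` may differ.

```typescript
// Sketch only: assumed ModelInfo shape based on the capabilities
// described in the PR overview, not the project's actual type.
interface ModelInfo {
  maxTokens: number;
  contextWindow: number;
  supportsImages: boolean;
  supportsPromptCache: boolean;
}

const huggingFaceModels: Record<string, ModelInfo> = {
  "openai/gpt-oss-120b": {
    maxTokens: 32767,       // per the Copilot overview and the quoted diff line
    contextWindow: 32767,
    supportsImages: false,
    supportsPromptCache: false,
  },
  "openai/gpt-oss-20b": {
    maxTokens: 32767,
    contextWindow: 32767,
    supportsImages: false,
    supportsPromptCache: false,
  },
};
```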
Comments suppressed due to low confidence (1)
src/shared/api.ts:1137
- The models 'openai/gpt-oss-120b' and 'openai/gpt-oss-20b' do not appear to be valid OpenAI models available on Hugging Face. These model identifiers are not recognized as existing OpenAI models. Please verify the correct model names and ensure they exist on the Hugging Face platform.
maxTokens: 32767,
Coverage Report
- Extension Coverage: base branch 47%, PR branch 48% ✅ Coverage increased or remained the same
- Webview Coverage: base branch 17%, PR branch 17% ✅ Coverage increased or remained the same
- Overall Assessment: ✅ Test coverage has been maintained or improved
Last updated: 2025-08-05T18:12:24.116350
NOTE: In my testing this DOES NOT WORK well, and the 120B model barely works at all, but the hope is that the system card will eventually be respected.
Verified that the model IDs match https://console.groq.com/docs/models#preview-models
Related Issue
Issue: #XXXX
Description
Test Procedure
Type of Change
Pre-flight Checklist
- Tests pass (npm test) and code is formatted and linted (npm run format && npm run lint)
- Changeset created with npm run changeset (required for user-facing changes)
Screenshots
Additional Notes
Important
Add openai/gpt-oss-120b and openai/gpt-oss-20b models to Hugging Face and Groq listings in api.ts, with updated attributes and pricing.
- Adds openai/gpt-oss-120b and openai/gpt-oss-20b to huggingFaceModels and groqModels in api.ts.
- Updates groqDefaultModelId to openai/gpt-oss-120b.
- maxTokens set to 32766 and contextWindow to 131072 for both models.
- Sets Groq pricing of 0.1/0.5 for gpt-oss-20b (gpt-oss-120b pricing is also updated).
- Adds changeset strong-zoos-arrive.md documenting new OSS OpenAI models in Hugging Face and Groq.
This description was created for cf6428e. You can customize this summary. It will automatically update as commits are pushed.
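The Groq-side changes described in the auto-generated summary can be sketched as below. Field names (`inputPrice`, `outputPrice`) and their units are assumptions, the gpt-oss-120b pricing values are not stated in the summary and are omitted, and note the summary cites maxTokens 32766 and contextWindow 131072, which differs from the figures in the Copilot review above.

```typescript
// Sketch only: assumed field names; not the project's actual types.
interface GroqModelInfo {
  maxTokens: number;
  contextWindow: number;
  inputPrice?: number;  // assumed unit: USD per million input tokens
  outputPrice?: number; // assumed unit: USD per million output tokens
}

const groqModels: Record<string, GroqModelInfo> = {
  "openai/gpt-oss-120b": {
    maxTokens: 32766,
    contextWindow: 131072,
    // pricing values for this model are not stated in the summary
  },
  "openai/gpt-oss-20b": {
    maxTokens: 32766,
    contextWindow: 131072,
    inputPrice: 0.1,
    outputPrice: 0.5,
  },
};

// Default Groq model updated per the summary.
const groqDefaultModelId = "openai/gpt-oss-120b";
```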