Implement more of the vercel openai sdk bits to allow for api compatible servers #1168
This should let any other compatible AI servers, proxies, or routers work mostly natively.
I've added the needed variables to the `.env.sample` file, along with the organization and project fields in the setup, so they can be used for tracking/accounting later on if users want that. I understand that LW is open source but also a paid service; I'm not sure what you need from me as far as licensing goes to include this, so I'll leave the following until I'm told what you need otherwise:
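To illustrate the idea, here is a minimal sketch of how the new environment variables could be read into the settings object that the Vercel AI SDK's `createOpenAI()` accepts. The variable names (`OPENAI_BASE_URL`, etc.) and the helper function are assumptions for illustration, not necessarily the exact names in this PR:

```typescript
// Hypothetical sketch: the env variable names and settingsFromEnv() helper
// are assumptions, not necessarily what the PR actually uses.
interface OpenAISettings {
  baseURL?: string;
  apiKey?: string;
  organization?: string;
  project?: string;
}

function settingsFromEnv(env: Record<string, string | undefined>): OpenAISettings {
  return {
    // Point the SDK at any OpenAI-compatible server, proxy, or router.
    baseURL: env.OPENAI_BASE_URL,
    apiKey: env.OPENAI_API_KEY,
    // Optional fields, useful for tracking/accounting on the provider side.
    organization: env.OPENAI_ORGANIZATION,
    project: env.OPENAI_PROJECT,
  };
}

// Example: a locally hosted OpenAI-compatible endpoint.
const settings = settingsFromEnv({
  OPENAI_BASE_URL: "http://localhost:8080/v1",
  OPENAI_API_KEY: "not-needed-locally",
});
console.log(settings.baseURL); // "http://localhost:8080/v1"
```

The resulting object could then be passed to `createOpenAI(settings)` so that requests go to the compatible server instead of api.openai.com.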
Incidentally, I'm about to test this myself with Qwen3-4B locally after it finishes building. Let me know if there are any changes you'd like me to make, since I'm only mildly familiar with TypeScript and may not have done everything idiomatically.