feat: Add Azure AI Foundry provider for consciousness #272
Conversation
✅ Deploy Preview for airi-vtuber ready!
✅ Deploy Preview for airi-docs ready!
Summary of Changes
Hello @Inokinoki, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces support for Azure OpenAI as a new chat provider. To accommodate the asynchronous nature of initializing some providers, the core `getProviderInstance` utility has been refactored to be an `async` function, and all its call sites across the application have been updated to `await` its result. This change also includes the addition of a dedicated settings page for configuring Azure OpenAI.
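As a rough sketch of what this new contract looks like (the type names and store internals below are assumptions for illustration, not the actual implementation in this PR):

```ts
// Hypothetical shapes, only to illustrate the new async contract.
type ProviderConfig = Record<string, unknown>

interface ProviderMetadata {
  // createProvider now returns a Promise for every provider
  createProvider: (config: ProviderConfig) => Promise<unknown>
}

declare const providerMetadata: Record<string, ProviderMetadata>
declare const providerConfigs: Record<string, ProviderConfig>

// getProviderInstance becomes async so providers that need asynchronous
// setup (such as azure-openai) can also be created here.
async function getProviderInstance(providerId: string) {
  const metadata = providerMetadata[providerId]
  const config = providerConfigs[providerId] ?? {}
  return await metadata.createProvider(config)
}

// Call sites change from a plain call to an awaited one:
// const provider = await providersStore.getProviderInstance('azure-openai')
```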
Highlights
- New Provider Integration: A new `azure-openai` provider has been integrated, allowing the application to connect to Azure OpenAI services for chat functionalities. This includes defining its metadata, configuration options (API key, resource name, and API version), and model listing capabilities.
- Asynchronous Provider Initialization: The `getProviderInstance` function within the `providers` store has been refactored to be asynchronous. This change enables more flexible and potentially network-bound provider initialization logic.
- Widespread `await` Adoption: All existing call sites for `providersStore.getProviderInstance` across various components and pages (e.g. `InteractiveArea.vue`, `speech.vue`, `alibaba-cloud-model-studio.vue`, `elevenlabs.vue`, `microsoft-speech.vue`, `player2-speech.vue`, `volcengine.vue`, `SpeechProviderSettings.vue`, `Stage.vue`) have been updated to correctly `await` the now-asynchronous function.
- Provider Creation Refactor: The `createProvider` methods for all existing providers within the `providers` store have been updated to return a `Promise`, aligning with the new asynchronous provider initialization pattern.
- New UI for Azure OpenAI Settings: A new Vue component (`azure-openai.vue`) has been added under `apps/stage-web/src/pages/settings/providers/` to provide a user interface for configuring Azure OpenAI-specific settings like API Key, Resource Name, and API Version.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes

[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request refactors the provider instantiation to be asynchronous to support providers like Azure OpenAI. The changes are applied consistently across the codebase, and a new settings page for Azure OpenAI is added.
My review focuses on the new Azure OpenAI provider implementation. I've identified areas for improvement in the new Vue component (`azure-openai.vue`), including a typo and improving a description. In the providers store, I've suggested a refactoring to reduce code duplication for the new Azure provider and pointed out missing model metadata that affects user experience.
Overall, the core change to make `getProviderInstance` async is well-implemented. The feedback provided aims to improve the quality and maintainability of the new provider integration.
onMounted(() => {
  // Initialize provider if it doesn't exist
  if (!providers.value[providerId]) {
    providers.value[providerId] = {}
  }

  // Initialize refs with current values
  apiKey.value = providers.value[providerId]?.apiKey || ''
  resourceName.value = providers.value[providerId]?.resourceName || ''
  apiVersion.value = providers.value[providerId]?.apiVersion || ''
The `onMounted` hook re-initializes computed properties (`apiKey`, `resourceName`, `apiVersion`), which is unnecessary because they are already reactively linked to the store. The provider object initialization is good, but can be done more cleanly by calling `providersStore.initializeProvider(providerId)`.
onMounted(() => {
  providersStore.initializeProvider(providerId)
})
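For context, a store-side helper along these lines would be enough; only the `initializeProvider` name comes from the comment above, and the body is a sketch:

```ts
// Sketch only: ensure a config object exists for the given provider,
// so the computed getters/setters have something to read from and write to.
function initializeProvider(providerId: string) {
  if (!providers.value[providerId])
    providers.value[providerId] = {}
}
```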
  set: (value) => {
    if (!providers.value[providerId])
      providers.value[providerId] = {}

    providers.value[providerId].apiKey = value
  },
})

const resourceName = computed({
  get: () => providers.value[providerId]?.resourceName || '',
  set: (value) => {
    if (!providers.value[providerId])
      providers.value[providerId] = {}

    providers.value[providerId].resourceName = value
  },
})

const apiVersion = computed({
  get: () => providers.value[providerId]?.apiVersion || '',
  set: (value) => {
    if (!providers.value[providerId])
      providers.value[providerId] = {}

    providers.value[providerId].apiVersion = value
  },
})
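As an aside, the repeated guard-and-assign pattern above could be factored into a small helper; the `providerField` function below is purely illustrative and not part of this PR:

```ts
import { computed } from 'vue'

// Illustrative helper (not in the PR): a writable computed bound to one
// field of this provider's config, creating the config object on demand.
function providerField(key: 'apiKey' | 'resourceName' | 'apiVersion') {
  return computed({
    get: () => (providers.value[providerId]?.[key] as string) || '',
    set: (value: string) => {
      if (!providers.value[providerId])
        providers.value[providerId] = {}

      providers.value[providerId][key] = value
    },
  })
}

const apiKey = providerField('apiKey')
const resourceName = providerField('resourceName')
const apiVersion = providerField('apiVersion')
```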
/>
<ProviderAccountIdInput
  v-model="resourceName"
  label="Resouce name"
  v-model="resourceName"
  label="Resouce name"
  placeholder="..."
  description="Prefix used in https://<prefix>.services.ai.azure.com"
createProvider: async (config) => {
  return await createAzure({
    apiKey: (config.apiKey as string).trim(),
    resourceName: config.resourceName as string,
  })
The logic for creating the Azure client is duplicated in `createProvider` and `capabilities.listModels`. This repetition can be avoided by extracting the client creation logic into a helper function within an Immediately Invoked Function Expression (IIFE).
'azure-openai': (() => {
  const createClient = (config: Record<string, unknown>) => {
    return createAzure({
      apiKey: (config.apiKey as string).trim(),
      resourceName: config.resourceName as string,
    })
  }

  return {
    id: 'azure-openai',
    category: 'chat',
    tasks: ['text-generation'],
    nameKey: 'settings.pages.providers.provider.azure_openai.title',
    name: 'Azure OpenAI',
    descriptionKey: 'settings.pages.providers.provider.azure_openai.description',
    description: 'oai.azure.com',
    icon: 'i-lobe-icons:microsoft',
    defaultOptions: () => ({}),
    createProvider: async (config) => {
      return createClient(config)
    },
    capabilities: {
      listModels: async (config) => {
        const client = await createClient(config)
        return (await listModels({
          ...client.model(),
        })).map((model) => {
          return {
            id: model.id,
            name: model.id,
            provider: 'azure-openai',
            description: '',
            contextLength: 0,
            deprecated: false,
          } satisfies ModelInfo
        })
      },
    },
    validators: {
      validateProviderConfig: (config) => {
        return !!config.apiKey && !!config.resourceName
      },
    },
  }
})(),
name: model.id,
provider: 'azure-openai',
description: '',
contextLength: 0,
deprecated: false,
The model information for Azure OpenAI is missing `description` and `contextLength`, which are hardcoded to empty or zero values. If the underlying API for listing models provides this information, it should be mapped here. If not, consider adding a `// TODO` comment to acknowledge this limitation and track it for future improvement.
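A minimal sketch of what that acknowledgement could look like in the `listModels` mapping (assuming the listing endpoint really does not return richer metadata; the `modelList` name is illustrative):

```ts
return modelList.map(model => ({
  id: model.id,
  name: model.id,
  provider: 'azure-openai',
  // TODO: the model listing response does not expose a description or
  // context window yet; revisit once richer metadata is available.
  description: '',
  contextLength: 0,
  deprecated: false,
} satisfies ModelInfo))
```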
Welcome to Moeru AI and Project AIRI! Please ask me for any assistance or help. I am happy to help.
Force-pushed fc2d46c to bf2a3d6
Force-pushed bf2a3d6 to a81f4d6
It's OK. Let me fix it; let's merge first. Thanks for contributing. 🎉
Description
Refactor `getProviderInstance` with async to get the Azure provider for chat.

There is still an issue with xsai on the API endpoint `https://${options.resourceName}.openai.azure.com/` as well.

In Vercel AI, they also allow setting up `baseURL`: https://ai-sdk.dev/providers/ai-sdk-providers/azure. Will test and check whether this will help.

Tested: it seems that we can still leverage Azure AI Foundry using the same baseURL, so I renamed everything. But there is still a related issue here. A workaround is included in this PR.
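For reference, the kind of workaround described above would look roughly like this, assuming the Azure provider factory accepts a `baseURL` override as the linked Vercel AI SDK docs describe (the exact host path below is an assumption, not the final code in this PR):

```ts
// Illustrative only: point the provider at an Azure AI Foundry endpoint
// instead of the default `https://<resourceName>.openai.azure.com/` host.
const azure = createAzure({
  apiKey: (config.apiKey as string).trim(),
  apiVersion: config.apiVersion as string,
  baseURL: `https://${config.resourceName}.services.ai.azure.com/openai`,
})
```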
Linked Issues
Additional context