Description
Git provider (optional): None
System Info (optional): No response
Issues details
It seems that the LiteLLM provider prefix for Gemini models should be gemini, but the documentation currently shows google_ai_studio. If that’s correct, would it be okay for me to submit a PR to update the docs?
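For reference, a minimal sketch of how that prefix is used when calling LiteLLM directly (the model name and prompt are just illustrations, and it assumes a GEMINI_API_KEY environment variable is set):

# Sketch: the "gemini/" prefix routes the request to Google AI Studio.
# LiteLLM reads the API key from the GEMINI_API_KEY environment variable.
import litellm

response = litellm.completion(
    model="gemini/gemini-1.5-pro",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)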
Also, since models not listed in MAX_TOKENS (defined in pr_agent/algo/__init__.py) don’t work properly, I was wondering if it would be alright to add a warning log to make this behavior clearer (a sketch follows the code snippet below).
- As-is
Error generating PR description {pr_url}: Failed to generate prediction with any model ofFailed to generate prediction with any model of [{model}, {fallback_models}]
- To-be
+ (Warning level) generating PR description {...}: Ensure {model} is defined in MAX_TOKENS in ./pr_agent/algo/__init__.py or set a positive value for it in config.custom_model_max_tokens
Error generating PR description {...}: Failed to generate prediction with any model ofFailed to generate prediction with any model of [...]
- code
# pr_agent/algo/__init__.py
# Mapping of model name -> maximum token limit; models missing from this
# dict (and without config.custom_model_max_tokens) fail with the error above.
MAX_TOKENS = {
    ...
    'gemini/gemini-1.5-pro': 1048576,
    'gemini/gemini-1.5-flash': 1048576,
    'gemini/gemini-2.0-flash': 1048576,
    'gemini/gemini-2.5-pro-preview-03-25': 1048576,
    ...
}
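To make the proposal concrete, here is a minimal sketch of the kind of check I have in mind. resolve_max_tokens is a hypothetical helper, not the actual pr-agent code; it assumes the existing get_settings accessor and the config.custom_model_max_tokens setting (which I take to default to a non-positive value):

# Hypothetical helper sketching the proposed warning; not actual pr-agent code.
import logging

from pr_agent.algo import MAX_TOKENS
from pr_agent.config_loader import get_settings

logger = logging.getLogger(__name__)

def resolve_max_tokens(model: str) -> int:
    # Known model: use the limit from MAX_TOKENS.
    if model in MAX_TOKENS:
        return MAX_TOKENS[model]
    # Unknown model: fall back to a user-supplied limit, if one is set.
    custom_limit = get_settings().config.custom_model_max_tokens
    if custom_limit > 0:
        return custom_limit
    # Neither source has a limit: warn early instead of failing later
    # with the generic "Failed to generate prediction" error.
    logger.warning(
        f"Ensure {model} is defined in MAX_TOKENS in ./pr_agent/algo/__init__.py "
        f"or set a positive value for it in config.custom_model_max_tokens"
    )
    return 0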