
Conversation

@PeterDaveHello (Contributor) commented on Apr 16, 2025

User description

Reference:


PR Type

Enhancement


Description

  • Added support for OpenAI o3 and o4-mini reasoning models.

  • Updated token limits for new models in pr_agent/algo/__init__.py.

  • Extended model lists for reasoning and temperature support categories.


Changes walkthrough 📝

Relevant files
Enhancement
__init__.py
Added OpenAI o3 and o4-mini models with configurations     

pr_agent/algo/__init__.py

  • Added o3 and o4-mini models with token limits (a sketch of these additions follows after this list).
  • Included the new models in the NO_SUPPORT_TEMPERATURE_MODELS list.
  • Included the new models in the SUPPORT_REASONING_EFFORT_MODELS list.
  • +14/-2
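
For orientation, here is a minimal sketch of what these additions to pr_agent/algo/__init__.py likely look like. The model keys, token values, and the two list names are taken from the snippets quoted in this thread; the dictionary name MAX_TOKENS and the placement of the entries are assumptions for illustration, not a verbatim excerpt of the merged diff.

```python
# Illustrative sketch (not the exact diff) of the additions described above.
# MAX_TOKENS maps a model name to its context-window size in tokens.
MAX_TOKENS = {
    # ... existing entries ...
    'o3': 200000,                  # 200K, but may be limited by config.max_model_tokens
    'o3-2025-04-16': 200000,       # 200K, but may be limited by config.max_model_tokens
    'o4-mini': 200000,             # 200K, but may be limited by config.max_model_tokens
    'o4-mini-2025-04-16': 200000,  # 200K, but may be limited by config.max_model_tokens
}

# Reasoning models do not accept a temperature parameter ...
NO_SUPPORT_TEMPERATURE_MODELS = [
    # ... existing entries ...
    'o3', 'o3-2025-04-16', 'o4-mini', 'o4-mini-2025-04-16',
]

# ... and accept a reasoning_effort parameter instead.
SUPPORT_REASONING_EFFORT_MODELS = [
    # ... existing entries ...
    'o3', 'o3-2025-04-16', 'o4-mini', 'o4-mini-2025-04-16',
]
```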

    Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Token Limit Inconsistency

    The token limits for o3 and o4-mini models are set to 200000, while other models use 204800 for 200K tokens. This inconsistency should be verified against OpenAI's documentation.

    'o3': 200000,  # 200K, but may be limited by config.max_model_tokens
    'o3-2025-04-16': 200000,  # 200K, but may be limited by config.max_model_tokens
    'o4-mini': 200000, # 200K, but may be limited by config.max_model_tokens
    'o4-mini-2025-04-16': 200000, # 200K, but may be limited by config.max_model_tokens
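
For what it's worth, the two values correspond to two readings of "200K": decimal thousands versus binary multiples of 1024. A quick arithmetic check (nothing below is taken from the PR itself):

```python
# Two common readings of a "200K" context window:
decimal_200k = 200 * 1000  # 200000, the value used for the new o3 / o4-mini entries
binary_200k = 200 * 1024   # 204800, the value used by existing entries such as o3-mini
print(binary_200k - decimal_200k)  # 4800 tokens of difference between the two conventions
```

Either convention works on its own; mixing them across models is what this observation flags.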

@PeterDaveHello (Contributor, Author) commented:

    cc @mrT23


    PR Code Suggestions ✨

    Explore these optional code suggestions:

Category: Possible issue
    Maintain consistent token limits

    Ensure consistency in token limit values. The existing o3-mini and other models
    use 204800 for 200K tokens, but the new models use 200000. This inconsistency
    could cause issues with token calculations.

pr_agent/algo/__init__.py [39-42]

    -'o3': 200000,  # 200K, but may be limited by config.max_model_tokens
    -'o3-2025-04-16': 200000,  # 200K, but may be limited by config.max_model_tokens
    -'o4-mini': 200000, # 200K, but may be limited by config.max_model_tokens
    -'o4-mini-2025-04-16': 200000, # 200K, but may be limited by config.max_model_tokens
    +'o3': 204800,  # 200K, but may be limited by config.max_model_tokens
    +'o3-2025-04-16': 204800,  # 200K, but may be limited by config.max_model_tokens
    +'o4-mini': 204800,  # 200K, but may be limited by config.max_model_tokens
    +'o4-mini-2025-04-16': 204800,  # 200K, but may be limited by config.max_model_tokens
    Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies an inconsistency in token limit values: existing models use 204800 for 200K tokens, while the newly added models use 200000. This inconsistency could lead to subtle bugs in token calculations, and keeping the values consistent is important for reliable behavior.

Impact: Medium
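
To make the "subtle bugs in token calculations" concern above concrete, here is a hypothetical, self-contained sketch of how a per-model limit is typically consumed. The helper name and the config stand-in are assumptions for illustration, not pr-agent's actual call path; the two token values are quoted from this review.

```python
# Hypothetical illustration; names below are assumptions, values come from the review above.
MAX_TOKENS = {'o3': 200000, 'o3-mini': 204800}
MAX_MODEL_TOKENS_CONFIG = 0  # stand-in for config.max_model_tokens (0 means "not set")

def get_effective_limit(model: str) -> int:
    """Return the model's table value, optionally capped by a user-configured ceiling."""
    limit = MAX_TOKENS[model]
    if MAX_MODEL_TOKENS_CONFIG:
        limit = min(limit, MAX_MODEL_TOKENS_CONFIG)
    return limit

# Both models advertise a "200K" context, yet one gets 4800 fewer tokens to budget with:
print(get_effective_limit('o3'))       # 200000
print(get_effective_limit('o3-mini'))  # 204800
```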
    • Author self-review: I have reviewed the PR code suggestions, and addressed the relevant ones.

@mrT23 merged commit 696cdc3 into qodo-ai:main on Apr 17, 2025 (2 checks passed).
@PeterDaveHello changed the title from "Add OpenAI o3 & 4o-mini reasoning models" to "Add OpenAI o3 & o4-mini reasoning models" on Apr 17, 2025.