
Set tool_choice to "auto" #526


Closed
wants to merge 1 commit

Conversation

taha-yassine
Contributor

Currently, tool_choice is set to "required".
According to OpenAI specs:

Auto: (Default) Call zero, one, or multiple functions.
Required: Call one or more functions.

This PR sets tool_choice to "auto" instead. This is interesting for two reasons:

  1. It is the default, and intuitively it makes sense to give the model the ability to choose when to call a tool.
  2. vLLM currently doesn't support tool_choice="required" (see vllm-project/vllm#10526, "[Feature]: Additional possible value for tool_choice: required").
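To make the two settings concrete, here is a minimal sketch of how they appear in an OpenAI-style chat completions payload. The "get_weather" tool schema is a hypothetical example, not something from smolagents:

```python
# Sketch of an OpenAI-style request payload illustrating tool_choice.
# "auto" (the API default): the model may answer with plain text or
# call zero, one, or several tools.
# "required": the model must call at least one tool.
payload = {
    "model": "gpt-4o-mini",  # hypothetical model name
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example tool
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                },
            },
        }
    ],
    "tool_choice": "auto",  # this PR's proposed value; current code uses "required"
}
```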

Looking at this repo's history, I see that at one point both options coexisted:

tool_choice="auto",

I couldn't find any discussion on why it was ultimately decided to go with "required" instead of "auto".

Moreover, since tool_choice is explicitly set in _prepare_completion_kwargs(), it has priority over self.kwargs and can't be overridden by passing it to Model.
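The shadowing described above can be sketched as follows. The class below is a simplified stand-in, not the actual smolagents implementation; it only shows why an explicit assignment after merging self.kwargs silently wins over a user-supplied value:

```python
# Minimal stand-in for the pattern described above: a user-supplied
# tool_choice in self.kwargs is overwritten by the explicit assignment
# inside _prepare_completion_kwargs().
class Model:
    def __init__(self, **kwargs):
        self.kwargs = kwargs  # e.g. Model(tool_choice="auto")

    def _prepare_completion_kwargs(self, **kwargs):
        completion_kwargs = {**self.kwargs, **kwargs}
        # The explicit assignment takes priority over self.kwargs:
        completion_kwargs["tool_choice"] = "required"
        return completion_kwargs


model = Model(tool_choice="auto")
result = model._prepare_completion_kwargs()
print(result["tool_choice"])  # prints: required
```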

Alternatives

  • Make tool_choice an explicit positional argument of Model, allowing the user to set it at initialization
  • Remove the explicit declaration of tool_choice in _prepare_completion_kwargs(), letting the LLM API engine decide on the default to use (most likely "auto" if it follows OpenAI specs) and allowing the user to override it through self.kwargs
  • Change the logic of _prepare_completion_kwargs() to allow tool_choice in particular to be overridden (this would be the hardest to maintain, in my opinion)

@aymeric-roucher
Collaborator

Hello @taha-yassine! Switching to "auto" here would introduce more problems than it fixes. smolagents requires one tool call at every step, which exactly matches the "required" argument. Allowing steps without a tool call would break the library's logic. As such, inference solutions that do not support the "required" argument, i.e. that do not allow forcing a tool call, are simply incompatible with the logic of smolagents.

So let's close this: the only solution for your problem will be to modify the Model subclass itself to force a tool call!
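The suggestion above could look roughly like the sketch below. The base class here is a stand-in for the real smolagents Model, and the method name follows the _prepare_completion_kwargs() discussed earlier; treat the whole shape as an assumption:

```python
# Hedged sketch of subclassing to force a backend-supported tool_choice.
class Model:  # stand-in for the real smolagents Model
    def _prepare_completion_kwargs(self, **kwargs):
        kwargs["tool_choice"] = "required"
        return kwargs


class AutoToolChoiceModel(Model):
    def _prepare_completion_kwargs(self, **kwargs):
        completion_kwargs = super()._prepare_completion_kwargs(**kwargs)
        # vLLM (at the time of this discussion) rejects "required",
        # so force "auto" instead, accepting the trade-offs above:
        completion_kwargs["tool_choice"] = "auto"
        return completion_kwargs
```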

@taha-yassine
Contributor Author

Thanks for your answer. I see why auto could cause problems.
Do you think it would be worth it to refactor the Model class to use required by default but allow the user to override it?

@aymeric-roucher
Collaborator

@taha-yassine: overriding this argument to "auto" will not fix the issue either, because it would introduce the behaviour where no tool is called, which breaks the fundamental logic of this library. Thus it's better not to introduce the possibility to override it.

@taha-yassine
Contributor Author

I see. Looking closer at the code, the prompt suggests the agent use final_answer if it decides no tool needs to be called, which seems like a reasonable alternative. Combined with the ability to enable planning, I think it's a good enough solution.
It's unfortunate that vLLM doesn't allow tool_choice="required". I will switch to another backend until it is implemented.
Thanks for your time!

2 participants