
Support configurable tool_choice in prepare_completion_kwargs #1392


Merged: 2 commits into huggingface:main on Jun 2, 2025

Conversation

albertvillanova
Member

@albertvillanova albertvillanova commented May 27, 2025

Support configurable tool_choice in _prepare_completion_kwargs.

Note that as reported in the associated issue, some providers do not support tool_choice="required".

Fix #1391.

@aymeric-roucher
Collaborator

@albertvillanova we already had a discussion on this here: #526

Passing no tool does not work with smolagents when we're trying to call tools. But maybe we could allow tool_choice="auto", and log an error if the LLM chooses to pass no tool?

@albertvillanova
Member Author

albertvillanova commented May 30, 2025

@aymeric-roucher, thanks for the ping to the old PR: I had missed that one!

According to my understanding, for ToolCallingAgent, there's already a fallback mechanism for handling when chat_message.tool_calls is None. Specifically, we invoke self.model.parse_tool_calls(chat_message), which attempts to extract tool calls from chat_message.content.

if chat_message.tool_calls is None or len(chat_message.tool_calls) == 0:
    try:
        chat_message = self.model.parse_tool_calls(chat_message)
    except Exception as e:
        raise AgentParsingError(f"Error while parsing tool call from model output: {e}", self.logger)

  • For example, even when I explicitly forced tool_choice="none", the default InferenceClient model was able to parse: chat_message.content = 'Action:\n{\n "name": "final_answer",\n "arguments": {"answer": "120"}\n}'.

Also note that if parsing fails and no tool calls are found, we already raise an error via:

assert len(message.tool_calls) > 0, "No tool call was found in the model output"

Therefore, I think we are already covered in such cases through explicit parsing, checks and errors.
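To illustrate the fallback described above, here is a minimal, hypothetical sketch of parsing a tool call out of raw message content; it is not the actual smolagents parse_tool_calls implementation, just the general idea applied to the example output quoted earlier:

```python
import json
import re

def parse_tool_call_from_content(content: str) -> dict:
    """Extract a tool call from raw model text of the form:
    Action:
    { "name": ..., "arguments": {...} }
    Raises if no tool call can be found, mirroring the assertion above.
    """
    match = re.search(r"Action:\s*(\{.*\})", content, re.DOTALL)
    if match is None:
        raise ValueError("No tool call was found in the model output")
    return json.loads(match.group(1))

# The content quoted in the comment above:
content = 'Action:\n{\n    "name": "final_answer",\n    "arguments": {"answer": "120"}\n}'
call = parse_tool_call_from_content(content)
# call["name"] == "final_answer", call["arguments"] == {"answer": "120"}
```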

tools_config = {
    "tools": [get_tool_json_schema(tool) for tool in tools_to_call_from],
}
if tool_choice is not None:
Collaborator

Shouldn't this parameter always be a string, and never None?

Member Author

This enables an additional use case: a user may pass tool_choice=None when they don't want smolagents to pass any tool_choice value to the model, relying instead on the provider's default (whatever that is).
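A minimal sketch of the conditional forwarding discussed in this thread (the function name and the stand-in schema builder are hypothetical, not the actual smolagents code): tool_choice is only added to the completion kwargs when it is not None, so the provider default applies otherwise.

```python
def prepare_completion_kwargs(tools_to_call_from, tool_choice="required"):
    """Sketch: forward tool_choice only when the caller sets it,
    so tool_choice=None falls back to the provider's own default."""
    kwargs = {}
    if tools_to_call_from:
        # Stand-in for get_tool_json_schema(tool) from the diff above.
        kwargs["tools"] = [{"name": tool} for tool in tools_to_call_from]
        if tool_choice is not None:
            kwargs["tool_choice"] = tool_choice
    return kwargs

prepare_completion_kwargs(["final_answer"], tool_choice=None)
# no "tool_choice" key: the provider's default behavior applies
prepare_completion_kwargs(["final_answer"])
# includes tool_choice="required" (the previous hard-coded behavior)
```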

@albertvillanova albertvillanova merged commit 38d7b96 into huggingface:main Jun 2, 2025
3 checks passed
@albertvillanova albertvillanova deleted the fix-1391 branch June 2, 2025 13:06
Successfully merging this pull request may close these issues.

[BUG] When model API provider doesn't support tool_choice="required", OpenAIServerModel breaks at requesting response