
[BUG] Custom FinalAnswerTool with multiple inputs raises unexpected keyword argument error #1382

@Lrakotoson

Description

Describe the bug
Within the agent's local Python interpreter, any final_answer tool with custom inputs other than `answer` raises an unexpected keyword argument error during some kind of validation step, yet the custom-defined inputs are still applied afterwards (so the surviving argument can even end up mapped to the wrong input, as shown below).
When the tool is called outside the agent's local Python interpreter, it works as expected.

Code to reproduce the error

from smolagents import FinalAnswerTool, OpenAIServerModel
from smolagents.agents import ToolCallingAgent, CodeAgent
from typing import Optional


class CustomFinalAnswerTool(FinalAnswerTool):
    name = "final_answer"
    description = "Provides a final answer to the given problem."
    # Custom inputs for the final answer tool, including `answer`.
    inputs = {
        "sources": {"type": "string", "description": "The sources given by user", "nullable": True},
        "answer": {"type": "string", "description": "The final answer to the problem", "nullable": True},
        "info": {"type": "string", "description": "Additional information or clarification about the answer", "nullable": True},
    }
    output_type = "object"
    
    def forward(
        self,
        sources: Optional[str] = None,
        answer: Optional[str] = None,
        info: Optional[str] = None
    ) -> dict[str, Optional[str]]:
        """
        Provide a final answer to the given problem.
        """
        # Here we can implement any custom logic for the final answer
        # For now, we just return the answer as is
        return {
            "sources": sources,
            "answer": answer,
            "info": info or "",
        }

query = "These are my sources: ['abc', 'def'], the answer is 'Hello', the info is 'This is a test'."


agent = ToolCallingAgent(
    tools=[CustomFinalAnswerTool()],
    model=OpenAIServerModel(model_id="gpt-4o-mini"),
)
toolcallagent = agent.run(query)
print("[+] ToolCallAgent Result:", toolcallagent)

agent = CodeAgent(
    model=OpenAIServerModel(model_id="gpt-4o-mini"),
    tools=[CustomFinalAnswerTool()],
)
codeagent = agent.run(query)
print("[+] CodeAgent Result:", codeagent)

Error logs (if any)
When running the ToolCallingAgent, I get the following behavior:

agent = ToolCallingAgent(
    tools=[CustomFinalAnswerTool()],
    model=OpenAIServerModel(model_id="gpt-4o-mini"),
)
toolcallagent = agent.run(query)
print("[+] ToolCallAgent Result:", toolcallagent)

The final answer only returns the answer field, while sources and info are set to None or empty strings.

╭──────────────────────────────────────────────────── New run ────────────────────────────────────────────────────╮
│                                                                                                                 │
│ These are my sources: ['abc', 'def'], the answer is 'Hello', the info is 'This is a test'.                      │
│                                                                                                                 │
╰─ OpenAIServerModel - gpt-4o-mini ───────────────────────────────────────────────────────────────────────────────╯
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━       
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Calling tool: 'final_answer' with arguments: {'sources': "['abc', 'def']", 'answer': 'Hello', 'info': 'This is  │
│ a test'}                                                                                                        │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Final answer: {'sources': None, 'answer': 'Hello', 'info': ''}
[Step 1: Duration 1.21 seconds| Input tokens: 945 | Output tokens: 30]

[+] ToolCallAgent Result: {'sources': None, 'answer': 'Hello', 'info': ''}

When running the CodeAgent, I get the following behavior:

agent = CodeAgent(
    model=OpenAIServerModel(model_id="gpt-4o-mini"),
    tools=[CustomFinalAnswerTool()],
)
codeagent = agent.run(query)
print("[+] CodeAgent Result:", codeagent)
  • The first step tries to call the final_answer function with all parameters, but it fails because the function does not accept sources.
  • The second step tries to call it with answer and info, but it fails again because the function does not accept info.
  • Lastly, it tries to call it with only answer, which succeeds, BUT this time the final_answer version used rearranges the inputs as defined in the CustomFinalAnswerTool, so it returns 'Hello' as sources, answer as None, and info as an empty string.
╭──────────────────────────────────────────────────── New run ────────────────────────────────────────────────────╮
│                                                                                                                 │
│ These are my sources: ['abc', 'def'], the answer is 'Hello', the info is 'This is a test'.                      │
│                                                                                                                 │
╰─ OpenAIServerModel - gpt-4o-mini ───────────────────────────────────────────────────────────────────────────────╯
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Output message of the LLM: ────────────────────────────────────────────────────────────────────────────────────────
Thought: I need to provide a final answer using the `final_answer` tool. The sources, answer, and additional       
information are already provided. I will use these to complete the function call.                                  
                                                                                                                   
Code:                                                                                                              
    ```py                                                                                                              
    final_answer(sources="['abc', 'def']", answer="Hello", info="This is a test")                                      
    ```                                                                                                                
 ─ Executing parsed code: ──────────────────────────────────────────────────────────────────────────────────────── 
  final_answer(sources="['abc', 'def']", answer="Hello", info="This is a test")                                    
 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────── 
Code execution failed at line 'final_answer(sources="['abc', 'def']", answer="Hello", info="This is a test")' due 
to: TypeError: evaluate_python_code.<locals>.final_answer() got an unexpected keyword argument 'sources'
[Step 1: Duration 1.33 seconds| Input tokens: 1,986 | Output tokens: 71]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 2 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Output message of the LLM: ────────────────────────────────────────────────────────────────────────────────────────
Thought: It seems that the `final_answer` function does not accept 'sources' as a keyword argument. I will omit    
'sources' from the call and only include 'answer' and 'info'.                                                      
                                                                                                                   
Code:                                                                                                              
    ```py                                                                                                              
    final_answer(answer="Hello", info="This is a test")                                                                
    ```                                                                                                                
 ─ Executing parsed code: ──────────────────────────────────────────────────────────────────────────────────────── 
  final_answer(answer="Hello", info="This is a test")                                                              
 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────── 
Code execution failed at line 'final_answer(answer="Hello", info="This is a test")' due to: TypeError: 
evaluate_python_code.<locals>.final_answer() got an unexpected keyword argument 'info'
[Step 2: Duration 1.30 seconds| Input tokens: 4,196 | Output tokens: 135]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 3 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Output message of the LLM: ────────────────────────────────────────────────────────────────────────────────────────
Thought: Since the previous attempts failed because of incorrect arguments provided to the `final_answer` function,
I will follow the function's definition correctly and only provide allowed parameters. The function likely only    
takes an 'answer' parameter. I will call `final_answer` with just the answer.                                      
                                                                                                                   
Code:                                                                                                              
    ```py                                                                                                              
    final_answer(answer="Hello")                                                                                       
    ```                                                                                                                
 ─ Executing parsed code: ──────────────────────────────────────────────────────────────────────────────────────── 
  final_answer(answer="Hello")                                                                                     
 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────── 
Out - Final answer: {'sources': 'Hello', 'answer': None, 'info': ''}

[+] CodeAgent Result: {'sources': 'Hello', 'answer': None, 'info': ''}

When calling the tool directly, outside the agent's local interpreter, it works as expected:

agent.tools['final_answer'](
    sources="['abc', 'def']",
    answer="Hello",
    info="This is a test"
)
{'sources': "['abc', 'def']", 'answer': 'Hello', 'info': 'This is a test'}
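
For what it's worth, the traceback name evaluate_python_code.<locals>.final_answer suggests the local interpreter re-wraps the tool in a nested function that only accepts answer, regardless of the tool's declared inputs. Below is a minimal sketch of that assumption (not the actual smolagents source) showing how such a wrapper would reproduce the same TypeError:

# Hypothetical sketch only: names and structure are assumptions about the
# local interpreter, not the real smolagents internals.
def evaluate_python_code(code: str):
    def final_answer(answer):  # only accepts `answer`, whatever the tool declares
        return answer

    # What the agent-generated code effectively tries to do:
    return final_answer(sources="['abc', 'def']", answer="Hello", info="This is a test")


try:
    evaluate_python_code('final_answer(...)')
except TypeError as err:
    print(err)
    # evaluate_python_code.<locals>.final_answer() got an unexpected keyword argument 'sources'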

Expected behavior
The CustomFinalAnswerTool should correctly handle all inputs defined in the inputs dictionary and the forward method without raising unexpected keyword argument errors.
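
Concretely, with the same agent and query as in the reproduction above, I would expect both agents to return the same dict as the direct tool call:

result = agent.run(query)
# Expected: all declared inputs reach forward(), matching the direct call above:
# {'sources': "['abc', 'def']", 'answer': 'Hello', 'info': 'This is a test'}
print(result)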

A lazy workaround is to keep answer as the only input, with type object, and then parse the individual fields inside the forward method (sketched after the quoted rule below), but this is very unstable: it goes against the intended design of the tool and against the 3rd rule of the system prompt:

3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'. 
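
For completeness, here is a minimal sketch of that workaround; the class name is mine, and it assumes the model will reliably pack all fields into one dict, which is exactly what rule 3 above discourages:

from typing import Any, Optional

from smolagents import FinalAnswerTool


class WorkaroundFinalAnswerTool(FinalAnswerTool):
    name = "final_answer"
    description = (
        "Provides a final answer to the given problem. "
        "Pass a single dict with 'sources', 'answer' and 'info' keys."
    )
    # Single `answer` input so the interpreter's one-argument call path still works.
    inputs = {
        "answer": {
            "type": "object",
            "description": "Dict with optional 'sources', 'answer' and 'info' keys.",
        },
    }
    output_type = "object"

    def forward(self, answer: Any) -> dict[str, Optional[str]]:
        # Parse the individual fields back out of the single object input.
        if not isinstance(answer, dict):
            answer = {"answer": answer}
        return {
            "sources": answer.get("sources"),
            "answer": answer.get("answer"),
            "info": answer.get("info", ""),
        }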

Packages version:

smolagents==1.16.1

Additional context
I tested only with the local interpreter.
