Closed
Labels
bug (Something isn't working)
Description
What is the issue?
When using the new Google ADK with ollama_chat and LiteLLM (1.65.8 & 1.66.0) in this script:
import datetime
from zoneinfo import ZoneInfo

from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm


def get_weather(city: str) -> dict:
    """Retrieves the current weather report for a specified city.

    Args:
        city (str): The name of the city for which to retrieve the weather report.

    Returns:
        dict: status and result or error msg.
    """
    if city.lower() == "new york":
        return {
            "status": "success",
            "report": (
                "The weather in New York is sunny with a temperature of 25 degrees"
                " Celsius (77 degrees Fahrenheit)."
            ),
        }
    else:
        return {
            "status": "error",
            "error_message": f"Weather information for '{city}' is not available.",
        }


def get_current_time(city: str) -> dict:
    """Returns the current time in a specified city.

    Args:
        city (str): The name of the city for which to retrieve the current time.

    Returns:
        dict: status and result or error msg.
    """
    if city.lower() == "new york":
        tz_identifier = "America/New_York"
    else:
        return {
            "status": "error",
            "error_message": (
                f"Sorry, I don't have timezone information for {city}."
            ),
        }

    tz = ZoneInfo(tz_identifier)
    now = datetime.datetime.now(tz)
    report = (
        f'The current time in {city} is {now.strftime("%Y-%m-%d %H:%M:%S %Z%z")}'
    )
    return {"status": "success", "report": report}


root_agent = Agent(
    name="weather_time_agent",
    model=LiteLlm(
        model="ollama_chat/mistral-small3.1"
    ),
    description=(
        "Agent to answer questions about the time and weather in a city."
    ),
    instruction=(
        "I can answer your questions about the time and weather in a city."
    ),
    tools=[get_weather, get_current_time],
)
I get the following error:
Relevant log output
LLM Request:
-----------------------------------------------------------
System Instruction:
I can answer your questions about the time and weather in a city.
You are an agent. Your internal name is "weather_time_agent".
The description about you is "Agent to answer questions about the time and weather in a city."
-----------------------------------------------------------
Contents:
{"parts":[{"text":"Can you tell me the time?"}],"role":"user"}
{"parts":[{"text":"In which city?"}],"role":"model"}
{"parts":[{"text":"New york"}],"role":"user"}
{"parts":[{"function_call":{"id":"95ce73b8-91cd-4f3f-a450-afbd6acf6dae","args":{"city":"New york"},"name":"get_current_time"}}],"role":"model"}
{"parts":[{"function_response":{"id":"95ce73b8-91cd-4f3f-a450-afbd6acf6dae","name":"get_current_time","response":{"status":"success","report":"The current time in New york is 2025-04-13 02:07:14 EDT-0400"}}}],"role":"user"}
-----------------------------------------------------------
Functions:
get_weather: {'city': {'type': <Type.STRING: 'STRING'>}} -> None
get_current_time: {'city': {'type': <Type.STRING: 'STRING'>}} -> None
-----------------------------------------------------------
08:07:14 - LiteLLM:INFO: utils.py:3085 -
LiteLLM completion() model= mistral-small3.1; provider = ollama_chat
2025-04-13 08:07:14,271 - INFO - utils.py:3085 -
LiteLLM completion() model= mistral-small3.1; provider = ollama_chat
2025-04-13 08:07:14,297 - INFO - _client.py:1025 - HTTP Request: POST http://localhost:11434/api/show "HTTP/1.1 200 OK"
08:07:14 - LiteLLM:INFO: cost_calculator.py:636 - selected model name for cost calculation: ollama_chat/mistral-small3.1
2025-04-13 08:07:14,297 - INFO - cost_calculator.py:636 - selected model name for cost calculation: ollama_chat/mistral-small3.1
2025-04-13 08:07:14,324 - INFO - _client.py:1025 - HTTP Request: POST http://localhost:11434/api/show "HTTP/1.1 200 OK"
2025-04-13 08:07:14,349 - INFO - _client.py:1025 - HTTP Request: POST http://localhost:11434/api/show "HTTP/1.1 200 OK"
2025-04-13 08:07:16,363 - INFO - _client.py:1025 - HTTP Request: POST http://localhost:11434/api/show "HTTP/1.1 200 OK"
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.
2025-04-13 08:07:16,629 - ERROR - fast_api.py:616 - Error in event_generator: litellm.APIConnectionError: Ollama_chatException - {"error":"json: cannot unmarshal array into Go struct field ChatRequest.messages.content of type string"}
Traceback (most recent call last):
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\litellm\main.py", line 477, in acompletion
response = await init_response
^^^^^^^^^^^^^^^^^^^
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\litellm\llms\ollama_chat.py", line 607, in ollama_acompletion
raise e # don't use verbose_logger.exception, if exception is raised
^^^^^^^
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\litellm\llms\ollama_chat.py", line 546, in ollama_acompletion
raise OllamaError(status_code=resp.status, message=text)
litellm.llms.ollama_chat.OllamaError: {"error":"json: cannot unmarshal array into Go struct field ChatRequest.messages.content of type string"}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\google\adk\cli\fast_api.py", line 605, in event_generator
async for event in runner.run_async(
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\google\adk\runners.py", line 197, in run_async
async for event in invocation_context.agent.run_async(invocation_context):
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\google\adk\agents\base_agent.py", line 141, in run_async
async for event in self._run_async_impl(ctx):
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\google\adk\agents\llm_agent.py", line 232, in _run_async_impl
async for event in self._llm_flow.run_async(ctx):
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\google\adk\flows\llm_flows\base_llm_flow.py", line 231, in run_async
async for event in self._run_one_step_async(invocation_context):
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\google\adk\flows\llm_flows\base_llm_flow.py", line 257, in _run_one_step_async
async for llm_response in self._call_llm_async(
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\google\adk\flows\llm_flows\base_llm_flow.py", line 470, in _call_llm_async
async for llm_response in llm.generate_content_async(
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\google\adk\models\lite_llm.py", line 658, in generate_content_async
response = await self.llm_client.acompletion(**completion_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\google\adk\models\lite_llm.py", line 88, in acompletion
return await acompletion(
^^^^^^^^^^^^^^^^^^
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\litellm\utils.py", line 1452, in wrapper_async
raise e
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\litellm\utils.py", line 1313, in wrapper_async
result = await original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\litellm\main.py", line 496, in acompletion
raise exception_type(
^^^^^^^^^^^^^^^
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 2214, in exception_type
raise e
File "C:\Users\benni\Documents\nosana\demos\agent-sandbox\venv\Lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 2183, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: Ollama_chatException - {"error":"json: cannot unmarshal array into Go struct field ChatRequest.messages.content of type string"}
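For context, the Go error on the Ollama side points to a payload-shape mismatch: Ollama's /api/chat endpoint expects each message's "content" field to be a plain string, while the ollama_chat path appears to serialize the tool-response turn as an OpenAI-style list of content parts. A minimal sketch that reproduces the same unmarshal error directly against Ollama (assuming a local server on the default port with the same model pulled):

    import requests

    # Send "content" as a list of parts, the shape OpenAI-style clients use.
    # Ollama's ChatRequest declares content as a string, so its JSON decoder
    # fails with the same "cannot unmarshal array" error seen in the traceback.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "mistral-small3.1",
            "stream": False,
            "messages": [
                {"role": "user", "content": [{"type": "text", "text": "hello"}]},
            ],
        },
    )
    print(resp.status_code, resp.text)
    # Expected: 400 {"error":"json: cannot unmarshal array into Go struct field
    #                ChatRequest.messages.content of type string"}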
OS
Windows
GPU
Nvidia
CPU
AMD
Ollama version
0.6.5
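A possible workaround (an assumption based on similar LiteLLM/Ollama reports, not verified in this issue) is to bypass the ollama_chat serialization path and talk to Ollama through its OpenAI-compatible /v1 endpoint instead:

    from google.adk.models.lite_llm import LiteLlm

    # Route via LiteLLM's openai provider against Ollama's OpenAI-compatible
    # endpoint. api_base is a standard LiteLLM completion argument that the
    # ADK LiteLlm wrapper forwards to litellm.acompletion().
    model = LiteLlm(
        model="openai/mistral-small3.1",
        api_base="http://localhost:11434/v1",
    )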