Conversation

@jfaltermeier (Contributor) commented Jun 4, 2025

What it does

Adds a sticky option for tools. This option controls whether, and in what form, previous tool calls are re-sent to the LLM on follow-up requests. The available modes are to send nothing, the arguments only, the result only, or both the arguments and the result.

The default is args only, which matches the previous behaviour. The history view was misleading here, giving the impression that both the arguments and the results were sent to the LLM.

With this option it should be possible to prevent repeated tool calls, because the contents of a file may already be part of the message history. We still have to identify the places where it makes sense to change this from args to both.
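To illustrate the four modes, here is a minimal sketch of how a previous tool call could be rendered into the message history depending on the sticky setting. The type and function names below are illustrative only, not the actual Theia API:

```typescript
// The four sticky modes described above.
type ToolSticky = 'none' | 'args' | 'result' | 'both';

// A previous tool call recorded in the chat history (hypothetical shape).
interface PastToolCall {
    name: string;
    args: string;    // JSON-encoded arguments
    result: string;  // tool output
}

// Render a previous tool call for inclusion in the next request,
// or return undefined to omit it entirely.
function renderForHistory(call: PastToolCall, sticky: ToolSticky): string | undefined {
    switch (sticky) {
        case 'none':
            return undefined;
        case 'args':
            return `${call.name}(${call.args})`;
        case 'result':
            return `${call.name} => ${call.result}`;
        case 'both':
            return `${call.name}(${call.args}) => ${call.result}`;
    }
}
```

With `'args'` as the default, follow-up requests look the same as before this PR; switching a file-reading tool to `'both'` keeps the file contents in the history, so the LLM has no reason to call the tool again.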

How to test

Follow-ups

Breaking changes

  • This PR introduces breaking changes and requires careful review. If yes, the breaking changes section in the changelog has been updated.

Attribution

Review checklist

Reminder for reviewers

@eneufeld (Contributor) commented Jun 4, 2025

@planger we should discuss this.
Not having sticky tools breaks the caching, see #15731.

@sdirix (Member) commented Jun 4, 2025

Personally I think it's fine to break the caching for follow-up requests if we save a lot of tokens, e.g. the old contents of 20 files which are no longer relevant. This frees up the context window and saves costs. But as always, this is use-case dependent.

Comment on lines +194 to +198
const sticky = (() => {
const found = request.tools?.find(t => t.name === (toolCall as { name?: string }).name);
return found?.sticky ?? 'args';
})();
yield { tool_calls: [{ finished: false, id: toolCall.id, function: { name: toolCall.name, arguments: toolCall.args }, sticky }] };

I did not expect "sticky" to be part of the returned tool call. I assumed we would implement this feature by checking the sticky flag of the tool request when mapping the Theia messages to the respective provider messages.

Thinking about it, I guess it makes sense to "raise" the level of the stickiness feature and bring it into the chat model. This way the actual agents can decide what to do with the tool calls.
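The alternative described here, consulting the sticky flag while mapping chat-model messages to provider messages rather than attaching it to the streamed tool call, could look roughly like the following sketch. All type and function names are hypothetical, not the actual Theia types:

```typescript
type ToolSticky = 'none' | 'args' | 'result' | 'both';

// A past tool call as stored in the chat model (hypothetical shape).
interface ToolCallMessage {
    type: 'toolCall';
    name: string;
    args: string;
    result: string;
    sticky: ToolSticky;
}

// While mapping the chat history to provider messages, strip the parts of
// each past tool call that its sticky flag says should not be re-sent.
function mapToolCalls(history: ToolCallMessage[]): ToolCallMessage[] {
    return history.flatMap(msg => {
        switch (msg.sticky) {
            case 'none':
                return [];                        // drop the call entirely
            case 'args':
                return [{ ...msg, result: '' }];  // keep the call, drop the payload
            case 'result':
                return [{ ...msg, args: '' }];
            case 'both':
                return [msg];
        }
    });
}
```

Keeping this logic in the mapping layer (or in the chat model, so agents can decide) means the provider integrations never see the sticky flag at all.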

@sdirix (Member) commented Sep 2, 2025

I think we can close this PR. In the meantime, all LLM integrations have been overhauled to keep all tool results. If users want to remove tool results, they need to implement this in their agents.

Labels: none yet
Status: Waiting on reviewers
3 participants