Releases: taketwo/llm-ollama
0.13.0
0.12.0
- Switch to using the official Ollama API to query model capabilities instead of ad hoc heuristics. Warning: this API was added in Ollama version 0.6.4.
- Add/remove options to align with current Ollama modelfile parameters.
- Rename the `list-models` plugin subcommand to `models` and include capabilities in the output. Example:

      $ llm ollama models
      model                           digest        capabilities
      qwen3:4b                        2bfd38a7daaf  completion, tools, thinking
      snowflake-arctic-embed2:latest  5de93a84837d  embedding
      gemma3:1b                       2d27a774bc62  completion
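The capabilities column above comes from Ollama's `/api/show` endpoint. A minimal sketch of reading that field from a `/api/show`-style response, assuming the `capabilities` key returned by the public Ollama API since version 0.6.4 (the HTTP call itself is stubbed with a sample payload; this is an illustration, not the plugin's actual code):

```python
import json


def parse_capabilities(show_response: dict) -> list[str]:
    # Older Ollama servers (< 0.6.4) omit the "capabilities" field entirely,
    # so default to an empty list rather than raising KeyError.
    return show_response.get("capabilities", [])


# Sample payload shaped like a /api/show response.
sample = json.loads("""
{
  "details": {"family": "qwen3"},
  "capabilities": ["completion", "tools", "thinking"]
}
""")

print(parse_capabilities(sample))
```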
0.11.0
- Add support for tool usage; see the LLM 0.26 release notes.
0.11a0
- Add support for tool usage; see the LLM 0.26a0 release notes.
0.10.0
- Add support for Basic Authentication when connecting to the Ollama server. Example usage:

      export OLLAMA_HOST=https://username:password@192.168.1.13:11434

- Add caching of model capability detection results. This prevents calling Ollama's `/api/show` endpoint for each model on each `llm` invocation.
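A URL of that form embeds the credentials in the authority component, so a client has to split them off before issuing requests. A minimal sketch using only Python's standard library (the function name is invented for illustration and is not the plugin's actual code):

```python
from urllib.parse import urlsplit, urlunsplit


def split_credentials(host: str):
    """Return (clean_url, (user, password)) or (host, None) if no credentials."""
    parts = urlsplit(host)
    if parts.username is None:
        return host, None
    # Rebuild the netloc without the user:password@ prefix.
    netloc = parts.hostname or ""
    if parts.port is not None:
        netloc += f":{parts.port}"
    clean = urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
    return clean, (parts.username, parts.password)


url, auth = split_credentials("https://username:password@192.168.1.13:11434")
```

The `auth` pair can then be passed to the HTTP layer as a Basic Authentication credential while the clean URL is used as the server address.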
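The caching described above can be thought of as memoizing the capability lookup per model, so each model triggers at most one `/api/show` call per process. A minimal sketch with a stubbed fetch function (the cache key and structure here are assumptions for illustration, not the plugin's internals):

```python
# Counter to demonstrate that the endpoint is only hit once per model.
calls = 0


def fetch_capabilities(model: str) -> list[str]:
    """Stand-in for a real /api/show request against the Ollama server."""
    global calls
    calls += 1
    return ["completion"]


_capability_cache: dict[str, list[str]] = {}


def capabilities(model: str) -> list[str]:
    # Only query the server on a cache miss; reuse the result afterwards.
    if model not in _capability_cache:
        _capability_cache[model] = fetch_capabilities(model)
    return _capability_cache[model]


capabilities("gemma3:1b")
capabilities("gemma3:1b")  # second call is served from the cache
```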
0.9.1
0.9.0
0.8.2
- Fix primary model name selection logic to prefer names with longer tags.
- Propagate input/output token usage information to `llm`. To see token usage, specify the `-u` option, e.g.:

      $ llm -u -m llama3.2 "How much is 2+2?"
      The answer to 2 + 2 is 4.
      Token usage: 33 input, 13 output