
Releases: ollama/ollama

v0.11.6

20 Aug 21:00
6de6266

What's Changed

  • Ollama's app will now switch between chats faster
  • Improved layout of messages in Ollama's app
  • Fixed issue where a command prompt window would appear when Ollama's app detected an old version of Ollama running
  • Improved performance when using flash attention
  • Fixed boundary case when encoding text using BPE

Full Changelog: v0.11.5...v0.11.6

v0.11.5

15 Aug 02:38
f804e8a

What's Changed

  • Performance improvements for the gpt-oss models
  • New memory management: this release of Ollama includes improved memory management for scheduling models on GPUs, leading to better VRAM utilization, improved model performance, and fewer out-of-memory errors. The new memory estimates can be enabled with OLLAMA_NEW_ESTIMATES=1 ollama serve (see the example after this list) and will soon be enabled by default.
  • Improved multi-GPU scheduling and reduced VRAM allocation when using more than 2 GPUs
  • Ollama's new app now remembers selections for the default model, Turbo, and Web Search between restarts
  • Fixed error when parsing malformed harmony tool calls
  • OLLAMA_FLASH_ATTENTION=1 now also enables flash attention for pure-CPU models
  • Fixed OpenAI-compatible API not supporting reasoning_effort
  • Reduced size of installation on Windows and Linux
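
A quick sketch of trying the items above together; the model name and prompt are placeholders, while the flags and parameter are exactly those named in the notes:

  # Enable the new memory estimates and flash attention (now also honored for pure-CPU models)
  OLLAMA_NEW_ESTIMATES=1 OLLAMA_FLASH_ATTENTION=1 ollama serve

  # reasoning_effort is now accepted by the OpenAI-compatible endpoint
  curl http://localhost:11434/v1/chat/completions -d '{
    "model": "gpt-oss:20b",
    "reasoning_effort": "low",
    "messages": [{"role": "user", "content": "Hello"}]
  }'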

Full Changelog: v0.11.4...v0.11.5

v0.11.4

07 Aug 17:17

What's Changed

  • openai: allow for content and tool calls in the same message by @drifkin in #11759
  • openai: when converting role=tool messages, propagate the tool name by @drifkin in #11761 (see the example after this list)
  • openai: always provide reasoning by @drifkin in #11765
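
Taken together, these changes let an OpenAI-format conversation like the following round-trip through the compatibility layer; the model and tool names are illustrative only:

  curl http://localhost:11434/v1/chat/completions -d '{
    "model": "gpt-oss:20b",
    "messages": [
      {"role": "user", "content": "What is the weather in Toronto?"},
      {"role": "assistant", "content": "Let me check.",
       "tool_calls": [{"id": "call_1", "type": "function",
                       "function": {"name": "get_weather", "arguments": "{\"city\": \"Toronto\"}"}}]},
      {"role": "tool", "name": "get_weather", "content": "22C and sunny"}
    ]
  }'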

Full Changelog: v0.11.3...v0.11.4

v0.11.3

06 Aug 01:29
4742e12

What's Changed

  • Fixed issue where gpt-oss would consume too much VRAM when split across GPU & CPU or multiple GPUs
  • Statically linked C++ libraries on Windows for better compatibility

Full Changelog: v0.11.2...v0.11.3

v0.11.2

05 Aug 21:18

What's Changed

  • Fixed crash in gpt-oss when using KV cache quantization
  • Fixed gpt-oss bug where "currentDate" was not defined

Full Changelog: v0.11.1...v0.11.2

v0.11.0

05 Aug 16:56

Welcome OpenAI's gpt-oss models

Ollama has partnered with OpenAI to bring its latest state-of-the-art open-weight models to Ollama. The two models, 20B and 120B, deliver a whole new local chat experience and are designed for powerful reasoning, agentic tasks, and versatile developer use cases.

Feature highlights

  • Agentic capabilities: Use the models' native support for function calling, web browsing (Ollama provides a built-in web search that can optionally be enabled to augment the model with the latest information), Python tool calls, and structured outputs (see the example after this list).
  • Full chain-of-thought: Gain complete access to the model's reasoning process, facilitating easier debugging and increased trust in outputs.
  • Configurable reasoning effort: Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
  • Fine-tunable: Fully customize models to your specific use case through parameter fine-tuning.
  • Permissive Apache 2.0 license: Build freely without copyleft restrictions or patent risk, ideal for experimentation, customization, and commercial deployment.
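
As a hedged illustration of the function calling and structured outputs highlights, here is a request to Ollama's /api/chat endpoint constraining the reply to a JSON schema; the schema itself is made up for the example:

  curl http://localhost:11434/api/chat -d '{
    "model": "gpt-oss:20b",
    "messages": [{"role": "user", "content": "Tell me a fact about Canada."}],
    "format": {
      "type": "object",
      "properties": {"fact": {"type": "string"}},
      "required": ["fact"]
    },
    "stream": false
  }'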

Quantization - MXFP4 format

OpenAI utilizes quantization to reduce the memory footprint of the gpt-oss models. The models are post-trained with quantization of the mixture-of-experts (MoE) weights to the MXFP4 format, in which weights are quantized to 4.25 bits per parameter. The MoE weights account for more than 90% of the total parameter count, and quantizing them to MXFP4 enables the smaller model to run on systems with as little as 16GB of memory and the larger model to fit on a single 80GB GPU.
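
A back-of-envelope check of those numbers, assuming (as stated above) roughly 90% of the 120B parameters sit in the MoE layers:

  # ~108B MoE parameters at 4.25 bits each
  awk 'BEGIN { printf "MoE weights: ~%.1f GB\n", 120e9 * 0.90 * 4.25 / 8 / 1e9 }'
  # prints ~57.4 GB, leaving headroom on an 80GB GPU for the
  # remaining weights and the KV cache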

Ollama supports the MXFP4 format natively, without additional quantization or conversion. New kernels were developed for Ollama's new engine to support the MXFP4 format.

Ollama collaborated with OpenAI to benchmark against their reference implementations, ensuring Ollama's implementation matches them in quality.

Get started

You can get started by downloading the latest Ollama version (v0.11).

The model can be downloaded directly in Ollama’s new app or via the terminal:

ollama run gpt-oss:20b

ollama run gpt-oss:120b
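
Once pulled, the model can also be reached over Ollama's local HTTP API; a minimal example (the prompt is arbitrary):

  curl http://localhost:11434/api/generate -d '{
    "model": "gpt-oss:20b",
    "prompt": "Why is the sky blue?"
  }'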

What's Changed

Full Changelog: v0.10.1...v0.11.0

v0.10.1

31 Jul 04:39
ff89ba9

What's Changed

  • Fixed unicode character input for Japanese and other languages in Ollama's new app
  • Fixed AMD download URL in the logs for ollama serve

Full Changelog: v0.10.0...v0.10.1

v0.10.0

18 Jul 00:23
6dcc5df

Ollama's new app

Ollama's new app is available for macOS and Windows: Download Ollama


What's Changed

  • ollama ps will now show the context length of loaded models
  • Improved performance in gemma3n models by 2-3x
  • Parallel request processing now defaults to 1 (see the example after this list). For more details, see the FAQ
  • Fixed issue where tool calling would not work correctly with granite3.3 and mistral-nemo models
  • Fixed issue where Ollama's tool calling would not work correctly if a tool's name was part of another one, such as add and get_address
  • Improved performance when using multiple GPUs by 10-30%
  • Ollama's OpenAI-compatible API will now support WebP images
  • Fixed issue where ollama show would report an error
  • ollama run will more gracefully display errors
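
For the parallel request change above, higher concurrency can still be opted into with the OLLAMA_NUM_PARALLEL environment variable; the value here is just an example:

  # Handle up to four requests per model concurrently
  OLLAMA_NUM_PARALLEL=4 ollama serve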

Full Changelog: v0.9.6...v0.10.0

v0.9.6

08 Jul 01:26
43107b1

What's Changed

  • Fixed styling issue in launch screen
  • tool_name can now be provided in messages with "role": "tool" using the /api/chat endpoint
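
A minimal sketch of the new field on /api/chat; the model and tool names are placeholders:

  curl http://localhost:11434/api/chat -d '{
    "model": "llama3.1",
    "messages": [
      {"role": "user", "content": "What is the weather in Toronto?"},
      {"role": "tool", "tool_name": "get_weather", "content": "22C and sunny"}
    ]
  }'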

Full Changelog: v0.9.5...v0.9.6-rc0

v0.9.5

02 Jul 18:39
5d8c173

Updates to Ollama for macOS and Windows

New versions of Ollama's macOS and Windows applications are now available. New improvements to the apps will be introduced over the coming releases.

New features

Expose Ollama on the network

Ollama can now be exposed on the network, allowing other devices, or even clients over the internet, to access it. This is useful for running Ollama on a powerful Mac, PC, or Linux computer while making it accessible from less powerful devices.
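
On a server, the equivalent is binding to all interfaces with the OLLAMA_HOST environment variable; the address below is the usual convention rather than anything specific to this release:

  # Listen on all interfaces instead of only localhost
  OLLAMA_HOST=0.0.0.0 ollama serve

  # From another device on the network (replace <server-ip>):
  curl http://<server-ip>:11434/api/version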

Model directory

The directory in which models are stored can now be modified! This allows models to be stored on external hard disks or in directories other than the default.
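
Outside the app, the same effect can also be achieved with the OLLAMA_MODELS environment variable; the path is an example:

  # Store models on an external disk
  OLLAMA_MODELS=/Volumes/External/ollama-models ollama serve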

Smaller footprint and faster starting on macOS

The macOS app is now a native application and starts much faster while requiring a much smaller installation.

Additional changes in 0.9.5

  • Fixed issue where the ollama CLI would not be installed by Ollama on macOS at startup
  • Fixed issue where files in ollama-darwin.tgz were not notarized
  • Add NativeMind to Community Integrations by @xukecheng in #11242
  • Ollama for macOS now requires version 12 (Monterey) or newer
