
OLLAMA_KEEP_ALIVE is set, but the model is not actually kept permanently loaded on the GPU #9410

@moluzhui

What is the issue?

I run Ollama on a server with eight NVIDIA H20 GPUs; CUDA 12.4 is installed and the OS is CentOS 7. The OLLAMA_KEEP_ALIVE=-1 environment variable is configured in the systemd unit because I want the model to stay loaded forever and avoid load time on each use.

After answering a few questions through the LLM framework (FastGPT), there is no longer any GPU process, even though ollama ps still shows Forever.

PS: The first answer is slow and nvidia-smi shows the GPU in use, but after that the GPU is no longer used and question answering hangs.

How can I fix this so that the model stays permanently loaded on the GPU and Q&A stays fast?
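
To tell whether the model is being unloaded on a timer or the runner is simply dying, I compare the scheduler's bookkeeping with the OS view. A minimal check (the runner process name is taken from the log below, where it is started as "/usr/bin/ollama runner ..."):

# ollama ps
# pgrep -af 'ollama runner'
# nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv

If ollama ps reports Forever but pgrep finds no runner process, the runner has exited (for example, crashed) rather than been evicted by keep-alive.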

ollama ps output

# ollama ps
NAME                ID              SIZE      PROCESSOR    UNTIL
deepseek-r1:671b    739e1b229ad7    482 GB    100% GPU     Forever

nvidia-smi output

# nvidia-smi
Fri Feb 28 14:33:27 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.08             Driver Version: 550.127.08     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H20                     On  |   00000000:08:00.0 Off |                    0 |
| N/A   32C    P0             72W /  500W |       4MiB /  97871MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA H20                     On  |   00000000:7E:00.0 Off |                    0 |
| N/A   29C    P0             72W /  500W |       4MiB /  97871MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA H20                     On  |   00000000:A2:00.0 Off |                    0 |
| N/A   33C    P0             72W /  500W |       4MiB /  97871MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA H20                     On  |   00000000:C6:00.0 Off |                    0 |
| N/A   31C    P0             74W /  500W |       4MiB /  97871MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   4  NVIDIA H20                     On  |   00000001:09:00.0 Off |                    0 |
| N/A   29C    P0             73W /  500W |       4MiB /  97871MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   5  NVIDIA H20                     On  |   00000001:7F:00.0 Off |                    0 |
| N/A   32C    P0             73W /  500W |       4MiB /  97871MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   6  NVIDIA H20                     On  |   00000001:A3:00.0 Off |                    0 |
| N/A   31C    P0             71W /  500W |       4MiB /  97871MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   7  NVIDIA H20                     On  |   00000001:C7:00.0 Off |                    0 |
| N/A   32C    P0             73W /  500W |       4MiB /  97871MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

systemd unit

# cat /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=root
Group=root
Restart=always
RestartSec=3
Environment="PATH=$PATH"
Environment="OLLAMA_MODELS=/data/ollama/models"
Environment="OLLAMA_HOST=10.2.3.4:11434"
Environment="OLLAMA_SCHED_SPREAD=1"
Environment="OLLAMA_KEEP_ALIVE=-1"


[Install]
WantedBy=default.target
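
Note: the API also accepts a per-request keep_alive, which overrides OLLAMA_KEEP_ALIVE for that load, so a client (FastGPT here) could in principle shorten it. As a sketch, the model can also be pinned from the API side; with no prompt, /api/generate just loads the model (host as in the unit file above):

# curl http://10.2.3.4:11434/api/generate -d '{"model": "deepseek-r1:671b", "keep_alive": -1}'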

Relevant log output (note that it ends with the runner crashing on a SIGSEGV right after a "K-shift" error, which matches the point where the GPU process disappears)

Feb 28 13:43:00 iZ0jl5att67k7fqmbzp4j2Z systemd[1]: Started Ollama Service.
Feb 28 13:43:00 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: 2025/02/28 13:43:00 routes.go:1205: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://10.2.3.4:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Feb 28 13:43:00 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:00.478+08:00 level=INFO source=images.go:432 msg="total blobs: 9"
Feb 28 13:43:00 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:00.479+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
Feb 28 13:43:00 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:00.479+08:00 level=INFO source=routes.go:1256 msg="Listening on 10.2.3.4:11434 (version 0.5.12)"
Feb 28 13:43:00 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:00.479+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Feb 28 13:43:06 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:06.333+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-f624defb-9354-480d-0e2f-58f9ab6940fe library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H20" total="95.0 GiB" available="94.7 GiB"
Feb 28 13:43:06 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:06.333+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-ebdcf37a-6d22-f2a8-0c23-3c0d395cd420 library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H20" total="95.0 GiB" available="94.7 GiB"
Feb 28 13:43:06 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:06.333+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-a1a0713b-d54e-c53d-43aa-67bf9758b8ad library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H20" total="95.0 GiB" available="94.7 GiB"
Feb 28 13:43:06 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:06.333+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-e2570d2c-768c-ee30-6930-ecb43f7e64ca library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H20" total="95.0 GiB" available="94.7 GiB"
Feb 28 13:43:06 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:06.333+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-afa241ba-0747-2a71-61bd-5e11350ebef0 library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H20" total="95.0 GiB" available="94.7 GiB"
Feb 28 13:43:06 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:06.333+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-0557d5d9-e43c-4e83-6bf3-ff3d12be3e4b library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H20" total="95.0 GiB" available="94.7 GiB"
Feb 28 13:43:06 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:06.333+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-cb672f62-303b-b89e-c52c-3c1f90878516 library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H20" total="95.0 GiB" available="94.7 GiB"
Feb 28 13:43:06 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:43:06.333+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-21e59594-154b-6855-1e97-1675672388e4 library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H20" total="95.0 GiB" available="94.7 GiB"
Feb 28 13:43:22 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:43:22 | 200 |     103.555µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:43:22 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:43:22 | 200 |      253.73µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:43:42 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:43:42 | 200 |      29.524µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:43:42 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:43:42 | 200 |      37.605µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:43:52 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:43:52 | 200 |      22.637µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:43:52 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:43:52 | 200 |    2.563092ms |  10.2.3.4 | GET      "/api/tags"
Feb 28 13:44:52 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:44:52 | 200 |      27.836µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:44:52 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:44:52 | 200 |      18.987µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:44:52 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:44:52 | 200 |      19.456µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:44:52 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:44:52 | 200 |       6.507µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:44:53 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:44:53 | 200 |      23.589µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:44:53 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:44:53 | 200 |      12.961µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:45:07 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:07 | 200 |      61.285µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:45:07 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:07 | 200 |      37.451µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:45:07 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:07.372+08:00 level=INFO source=sched.go:731 msg="new model will fit in available VRAM, loading" model=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 library=cuda parallel=4 required="449.4 GiB"
Feb 28 13:45:07 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:07 | 200 |      30.814µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:45:07 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:07 | 200 |      26.609µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:10.399+08:00 level=INFO source=server.go:97 msg="system memory" total="1007.3 GiB" free="992.2 GiB" free_swap="0 B"
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:10.400+08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=62 layers.offload=62 layers.split=8,8,8,8,8,8,7,7 memory.available="[94.7 GiB 94.7 GiB 94.7 GiB 94.7 GiB 94.7 GiB 94.7 GiB 94.7 GiB 94.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="449.4 GiB" memory.required.partial="449.4 GiB" memory.required.kv="38.1 GiB" memory.required.allocations="[54.7 GiB 54.7 GiB 54.7 GiB 61.3 GiB 61.3 GiB 55.3 GiB 53.7 GiB 53.7 GiB]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:10.403+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --n-gpu-layers 62 --threads 96 --parallel 4 --tensor-split 8,8,8,8,8,8,7,7 --port 18494"
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:10.403+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:10.403+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:10.403+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:10.420+08:00 level=INFO source=runner.go:932 msg="starting go runner"
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: ggml_cuda_init: found 8 CUDA devices:
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: Device 0: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: Device 1: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: Device 2: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: Device 3: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: Device 4: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: Device 5: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: Device 6: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 13:45:10 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: Device 7: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:12.573+08:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=96
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:12.574+08:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:18494"
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:12.699+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_load_model_from_file: using device CUDA0 (NVIDIA H20) - 96943 MiB free
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_load_model_from_file: using device CUDA1 (NVIDIA H20) - 96943 MiB free
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_load_model_from_file: using device CUDA2 (NVIDIA H20) - 96943 MiB free
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_load_model_from_file: using device CUDA3 (NVIDIA H20) - 96943 MiB free
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_load_model_from_file: using device CUDA4 (NVIDIA H20) - 96943 MiB free
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_load_model_from_file: using device CUDA5 (NVIDIA H20) - 96943 MiB free
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_load_model_from_file: using device CUDA6 (NVIDIA H20) - 96943 MiB free
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_load_model_from_file: using device CUDA7 (NVIDIA H20) - 96943 MiB free
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from /data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   1:                               general.type str              = model
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   2:                         general.size_label str              = 256x20B
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   3:                      deepseek2.block_count u32              = 61
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   4:                   deepseek2.context_length u32              = 163840
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   5:                 deepseek2.embedding_length u32              = 7168
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   6:              deepseek2.feed_forward_length u32              = 18432
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   7:             deepseek2.attention.head_count u32              = 128
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   8:          deepseek2.attention.head_count_kv u32              = 128
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   9:                   deepseek2.rope.freq_base f32              = 10000.000000
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  10: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  11:                deepseek2.expert_used_count u32              = 8
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 3
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 129280
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  14:            deepseek2.attention.q_lora_rank u32              = 1536
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  15:           deepseek2.attention.kv_lora_rank u32              = 512
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  16:             deepseek2.attention.key_length u32              = 192
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  17:           deepseek2.attention.value_length u32              = 128
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  18:       deepseek2.expert_feed_forward_length u32              = 2048
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 256
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  20:              deepseek2.expert_shared_count u32              = 1
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  21:             deepseek2.expert_weights_scale f32              = 2.500000
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  22:              deepseek2.expert_weights_norm bool             = true
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  23:               deepseek2.expert_gating_func u32              = 2
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  24:             deepseek2.rope.dimension_count u32              = 64
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  25:                deepseek2.rope.scaling.type str              = yarn
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  26:              deepseek2.rope.scaling.factor f32              = 40.000000
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  27: deepseek2.rope.scaling.original_context_length u32              = 4096
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  28: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = deepseek-v3
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [132B blob data]
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 0
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 1
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 1
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  38:               tokenizer.ggml.add_eos_token bool             = false
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  40:               general.quantization_version u32              = 2
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  41:                          general.file_type u32              = 15
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - type  f32:  361 tensors
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - type q4_K:  606 tensors
Feb 28 13:45:12 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - type q6_K:   58 tensors
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_vocab: special tokens cache size = 818
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_vocab: token to piece cache size = 0.8223 MB
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: format           = GGUF V3 (latest)
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: arch             = deepseek2
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: vocab type       = BPE
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_vocab          = 129280
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_merges         = 127741
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: vocab_only       = 0
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_ctx_train      = 163840
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_embd           = 7168
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_layer          = 61
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_head           = 128
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_head_kv        = 128
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_rot            = 64
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_swa            = 0
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_embd_head_k    = 192
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_embd_head_v    = 128
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_gqa            = 1
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_embd_k_gqa     = 24576
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_embd_v_gqa     = 16384
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_ff             = 18432
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_expert         = 256
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_expert_used    = 8
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: causal attn      = 1
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: pooling type     = 0
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: rope type        = 0
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: rope scaling     = yarn
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: freq_base_train  = 10000.0
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: freq_scale_train = 0.025
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_ctx_orig_yarn  = 4096
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: rope_finetuned   = unknown
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: ssm_d_conv       = 0
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: ssm_d_inner      = 0
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: ssm_d_state      = 0
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: ssm_dt_rank      = 0
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: model type       = 671B
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: model ftype      = Q4_K - Medium
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: model params     = 671.03 B
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: model size       = 376.65 GiB (4.82 BPW)
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: general.name     = n/a
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: BOS token        = 0 '<|begin▁of▁sentence|>'
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: EOS token        = 1 '<|end▁of▁sentence|>'
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: EOT token        = 1 '<|end▁of▁sentence|>'
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: PAD token        = 1 '<|end▁of▁sentence|>'
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: LF token         = 131 'Ä'
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: FIM PRE token    = 128801 '<|fim▁begin|>'
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: FIM SUF token    = 128800 '<|fim▁hole|>'
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: FIM MID token    = 128802 '<|fim▁end|>'
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: EOG token        = 1 '<|end▁of▁sentence|>'
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: max token length = 256
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_layer_dense_lead   = 3
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_lora_q             = 1536
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_lora_kv            = 512
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_ff_exp             = 2048
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_expert_shared      = 1
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: expert_weights_scale = 2.5
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: expert_weights_norm  = 1
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: expert_gating_func   = sigmoid
Feb 28 13:45:13 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: rope_yarn_log_mul    = 0.1000
Feb 28 13:45:19 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:19 | 200 |      23.856µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:45:19 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:19 | 200 |       49.24µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:45:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:20 | 200 |      26.763µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:45:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:20 | 200 |      35.246µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:45:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:21 | 200 |      25.022µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:45:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:21 | 200 |      35.564µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:45:26 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:26.934+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server not responding"
Feb 28 13:45:27 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:27 | 200 |      35.198µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:45:27 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:27 | 200 |      24.848µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:45:34.522+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors: offloading 61 repeating layers to GPU
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors: offloading output layer to GPU
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors: offloaded 62/62 layers to GPU
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors:        CUDA0 model buffer size = 35642.36 MiB
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors:        CUDA1 model buffer size = 52215.30 MiB
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors:        CUDA2 model buffer size = 51287.70 MiB
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors:        CUDA3 model buffer size = 52215.30 MiB
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors:        CUDA4 model buffer size = 52215.30 MiB
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors:        CUDA5 model buffer size = 51287.70 MiB
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors:        CUDA6 model buffer size = 46963.85 MiB
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors:        CUDA7 model buffer size = 43364.99 MiB
Feb 28 13:45:34 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_tensors:   CPU_Mapped model buffer size =   497.11 MiB
Feb 28 13:45:56 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:56 | 200 |      26.547µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:45:56 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:45:56 | 200 |      34.019µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:46:25 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:46:25.362+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server not responding"
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:46:30 | 200 |       23.09µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:46:30 | 200 |      59.492µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:46:30.674+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: n_seq_max     = 4
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: n_ctx         = 8192
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: n_ctx_per_seq = 2048
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: n_batch       = 2048
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: n_ubatch      = 512
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: flash_attn    = 0
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: freq_base     = 10000.0
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: freq_scale    = 0.025
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (163840) -- the full capacity of the model will not be utilized
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 61, can_shift = 0
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_kv_cache_init:      CUDA0 KV buffer size =  5120.00 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_kv_cache_init:      CUDA1 KV buffer size =  5120.00 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_kv_cache_init:      CUDA2 KV buffer size =  5120.00 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_kv_cache_init:      CUDA3 KV buffer size =  5120.00 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_kv_cache_init:      CUDA4 KV buffer size =  5120.00 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_kv_cache_init:      CUDA5 KV buffer size =  5120.00 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_kv_cache_init:      CUDA6 KV buffer size =  4480.00 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_kv_cache_init:      CUDA7 KV buffer size =  3840.00 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: KV self size  = 39040.00 MiB, K (f16): 23424.00 MiB, V (f16): 15616.00 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model:  CUDA_Host  output buffer size =     2.08 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model:      CUDA0 compute buffer size =  2322.01 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model:      CUDA1 compute buffer size =  2322.01 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model:      CUDA2 compute buffer size =  2322.01 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model:      CUDA3 compute buffer size =  2322.01 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model:      CUDA4 compute buffer size =  2322.01 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model:      CUDA5 compute buffer size =  2322.01 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model:      CUDA6 compute buffer size =  2322.01 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model:      CUDA7 compute buffer size =  2322.02 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model:  CUDA_Host compute buffer size =    78.02 MiB
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: graph nodes  = 5025
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_new_context_with_model: graph splits = 9
Feb 28 13:46:30 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: time=2025-02-28T13:46:30.925+08:00 level=INFO source=server.go:596 msg="llama runner started in 80.52 seconds"
Feb 28 13:46:33 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:46:33 | 200 |         1m28s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:46:33 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:46:33 | 200 |          1m9s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:46:33 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:46:33 | 200 | 10.738479014s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:46:37 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:46:37 | 200 |  1.594152068s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from /data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   1:                               general.type str              = model
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   2:                         general.size_label str              = 256x20B
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   3:                      deepseek2.block_count u32              = 61
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   4:                   deepseek2.context_length u32              = 163840
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   5:                 deepseek2.embedding_length u32              = 7168
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   6:              deepseek2.feed_forward_length u32              = 18432
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   7:             deepseek2.attention.head_count u32              = 128
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   8:          deepseek2.attention.head_count_kv u32              = 128
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv   9:                   deepseek2.rope.freq_base f32              = 10000.000000
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  10: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  11:                deepseek2.expert_used_count u32              = 8
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 3
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 129280
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  14:            deepseek2.attention.q_lora_rank u32              = 1536
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  15:           deepseek2.attention.kv_lora_rank u32              = 512
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  16:             deepseek2.attention.key_length u32              = 192
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  17:           deepseek2.attention.value_length u32              = 128
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  18:       deepseek2.expert_feed_forward_length u32              = 2048
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 256
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  20:              deepseek2.expert_shared_count u32              = 1
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  21:             deepseek2.expert_weights_scale f32              = 2.500000
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  22:              deepseek2.expert_weights_norm bool             = true
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  23:               deepseek2.expert_gating_func u32              = 2
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  24:             deepseek2.rope.dimension_count u32              = 64
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  25:                deepseek2.rope.scaling.type str              = yarn
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  26:              deepseek2.rope.scaling.factor f32              = 40.000000
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  27: deepseek2.rope.scaling.original_context_length u32              = 4096
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  28: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = deepseek-v3
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [132B blob data]
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 0
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 1
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 1
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  38:               tokenizer.ggml.add_eos_token bool             = false
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  40:               general.quantization_version u32              = 2
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - kv  41:                          general.file_type u32              = 15
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - type  f32:  361 tensors
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - type q4_K:  606 tensors
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_loader: - type q6_K:   58 tensors
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 28 13:47:20 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_vocab: special tokens cache size = 818
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_vocab: token to piece cache size = 0.8223 MB
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: format           = GGUF V3 (latest)
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: arch             = deepseek2
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: vocab type       = BPE
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_vocab          = 129280
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_merges         = 127741
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: vocab_only       = 1
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: model type       = ?B
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: model ftype      = all F32
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: model params     = 671.03 B
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: model size       = 376.65 GiB (4.82 BPW)
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: general.name     = n/a
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: BOS token        = 0 '<|begin▁of▁sentence|>'
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: EOS token        = 1 '<|end▁of▁sentence|>'
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: EOT token        = 1 '<|end▁of▁sentence|>'
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: PAD token        = 1 '<|end▁of▁sentence|>'
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: LF token         = 131 'Ä'
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: FIM PRE token    = 128801 '<|fim▁begin|>'
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: FIM SUF token    = 128800 '<|fim▁hole|>'
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: FIM MID token    = 128802 '<|fim▁end|>'
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: EOG token        = 1 '<|end▁of▁sentence|>'
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: max token length = 256
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_layer_dense_lead   = 0
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_lora_q             = 0
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_lora_kv            = 0
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_ff_exp             = 0
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: n_expert_shared      = 0
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: expert_weights_scale = 0.0
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: expert_weights_norm  = 0
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: expert_gating_func   = unknown
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llm_load_print_meta: rope_yarn_log_mul    = 0.0000
Feb 28 13:47:21 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama_model_load: vocab only - skipping tensors
Feb 28 13:47:22 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:47:22 | 200 |         1m34s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:47:24 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:47:24 | 200 |  1.622668121s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:47:35 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:47:35 | 200 |   4.21165043s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:48:03 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:48:03 | 200 | 42.946638091s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:48:53 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:48:53 | 200 |          1m5s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:49:59 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:49:59 | 200 |      32.444µs |  10.2.3.4 | HEAD     "/"
Feb 28 13:49:59 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:49:59 | 200 |      42.169µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 13:52:01 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:52:01 | 200 |         1m32s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:53:08 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: [GIN] 2025/02/28 - 13:53:08 | 200 |          1m4s |    10.219.32.13 | POST     "/api/chat"
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: llama.cpp:11942: The current context does not support K-shift
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: SIGSEGV: segmentation violation
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: PC=0x7f6748da6c47 m=11 sigcode=1 addr=0x22a403f8c
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: signal arrived during cgo execution
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: goroutine 198 gp=0xc000685340 m=11 mp=0xc000280e08 [syscall]:
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.cgocall(0x562e5ac42ce0, 0xc000999ba0)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/cgocall.go:167 +0x4b fp=0xc000999b78 sp=0xc000999b40 pc=0x562e5a02dacb
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7f6750a0a930, {0x2, 0x7f6751574960, 0x0, 0x0, 0x7f6751576970, 0x7f6751578980, 0x7f675157a990, 0x7f675157e9a0})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: _cgo_gotypes.go:545 +0x4f fp=0xc000999ba0 sp=0xc000999b78 pc=0x562e5a3e356f
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x562e5a40248b?, 0x7f6750a0a930?)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/llama/llama.go:163 +0xf5 fp=0xc000999c90 sp=0xc000999ba0 pc=0x562e5a3e6295
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/llama.(*Context).Decode(0x562e5bb36480?, 0x0?)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/llama/llama.go:163 +0x13 fp=0xc000999cd8 sp=0xc000999c90 pc=0x562e5a3e6113
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner/llamarunner.(*Server).processBatch(0xc0003638c0, 0xc000c10000, 0xc000999f20)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner/llamarunner/runner.go:435 +0x23f fp=0xc000999ee0 sp=0xc000999cd8 pc=0x562e5a40127f
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner/llamarunner.(*Server).run(0xc0003638c0, {0x562e5b295920, 0xc0007196d0})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner/llamarunner/runner.go:343 +0x1d5 fp=0xc000999fb8 sp=0xc000999ee0 pc=0x562e5a400cb5
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner/llamarunner.Execute.gowrap2()
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner/llamarunner/runner.go:973 +0x28 fp=0xc000999fe0 sp=0xc000999fb8 pc=0x562e5a405b48
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.goexit({})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/asm_amd64.s:1700 +0x1 fp=0xc000999fe8 sp=0xc000999fe0 pc=0x562e5a03c5a1
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner/llamarunner/runner.go:973 +0xdb5
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: goroutine 1 gp=0xc0000061c0 m=nil [IO wait, 3 minutes]:
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/proc.go:424 +0xce fp=0xc000d0f5c0 sp=0xc000d0f5a0 pc=0x562e5a0341ce
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.netpollblock(0xc000d0f610?, 0x59fcafe6?, 0x2e?)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/netpoll.go:575 +0xf7 fp=0xc000d0f5f8 sp=0xc000d0f5c0 pc=0x562e59ff7e37
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: internal/poll.runtime_pollWait(0x7f675c4a6640, 0x72)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/netpoll.go:351 +0x85 fp=0xc000d0f618 sp=0xc000d0f5f8 pc=0x562e5a0334c5
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: internal/poll.(*pollDesc).wait(0xc00075a680?, 0x900000036?, 0x0)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000d0f640 sp=0xc000d0f618 pc=0x562e5a0bb707
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: internal/poll.(*pollDesc).waitRead(...)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: internal/poll/fd_poll_runtime.go:89
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: internal/poll.(*FD).Accept(0xc00075a680)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: internal/poll/fd_unix.go:620 +0x295 fp=0xc000d0f6e8 sp=0xc000d0f640 pc=0x562e5a0c0ad5
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: net.(*netFD).accept(0xc00075a680)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: net/fd_unix.go:172 +0x29 fp=0xc000d0f7a0 sp=0xc000d0f6e8 pc=0x562e5a129bc9
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: net.(*TCPListener).accept(0xc000715b80)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: net/tcpsock_posix.go:159 +0x1e fp=0xc000d0f7f0 sp=0xc000d0f7a0 pc=0x562e5a13f83e
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: net.(*TCPListener).Accept(0xc000715b80)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: net/tcpsock.go:372 +0x30 fp=0xc000d0f820 sp=0xc000d0f7f0 pc=0x562e5a13e6f0
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: net/http.(*onceCloseListener).Accept(0xc00031a090?)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: <autogenerated>:1 +0x24 fp=0xc000d0f838 sp=0xc000d0f820 pc=0x562e5a388964
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: net/http.(*Server).Serve(0xc000541590, {0x562e5b2934f8, 0xc000715b80})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: net/http/server.go:3330 +0x30c fp=0xc000d0f968 sp=0xc000d0f838 pc=0x562e5a3608ec
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner/llamarunner.Execute({0xc000036140, 0x10, 0x10})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner/llamarunner/runner.go:994 +0x1174 fp=0xc000d0fd08 sp=0xc000d0f968 pc=0x562e5a405834
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner.Execute({0xc000036130?, 0x0?, 0x0?})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/runner/runner.go:22 +0xd4 fp=0xc000d0fd30 sp=0xc000d0fd08 pc=0x562e5a635c54
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/cmd.NewCLI.func2(0xc000768c00?, {0x562e5ae30050?, 0x4?, 0x562e5ae30054?})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/cmd/cmd.go:1280 +0x45 fp=0xc000d0fd58 sp=0xc000d0fd30 pc=0x562e5ac42245
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/spf13/cobra.(*Command).execute(0xc00076c008, {0xc000768e00, 0x10, 0x10})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/spf13/cobra@v1.7.0/command.go:940 +0x862 fp=0xc000d0fe78 sp=0xc000d0fd58 pc=0x562e5a1a2902
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/spf13/cobra.(*Command).ExecuteC(0xc000647808)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc000d0ff30 sp=0xc000d0fe78 pc=0x562e5a1a3145
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/spf13/cobra.(*Command).Execute(...)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/spf13/cobra@v1.7.0/command.go:992
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/spf13/cobra.(*Command).ExecuteContext(...)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/spf13/cobra@v1.7.0/command.go:985
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: main.main()
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: github.com/ollama/ollama/main.go:12 +0x4d fp=0xc000d0ff50 sp=0xc000d0ff30 pc=0x562e5ac425cd
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.main()
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/proc.go:272 +0x29d fp=0xc000d0ffe0 sp=0xc000d0ff50 pc=0x562e59fff4dd
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.goexit({})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/asm_amd64.s:1700 +0x1 fp=0xc000d0ffe8 sp=0xc000d0ffe0 pc=0x562e5a03c5a1
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle), 3 minutes]:
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.gopark(0x4f785b7cbff9?, 0x0?, 0x0?, 0x0?, 0x0?)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/proc.go:424 +0xce fp=0xc00021cfa8 sp=0xc00021cf88 pc=0x562e5a0341ce
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.goparkunlock(...)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/proc.go:430
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.forcegchelper()
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/proc.go:337 +0xb8 fp=0xc00021cfe0 sp=0xc00021cfa8 pc=0x562e59fff818
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.goexit({})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/asm_amd64.s:1700 +0x1 fp=0xc00021cfe8 sp=0xc00021cfe0 pc=0x562e5a03c5a1
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: created by runtime.init.7 in goroutine 1
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/proc.go:325 +0x1a
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/proc.go:424 +0xce fp=0xc00021d780 sp=0xc00021d760 pc=0x562e5a0341ce
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.goparkunlock(...)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/proc.go:430
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.bgsweep(0xc000216080)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/mgcsweep.go:317 +0xdf fp=0xc00021d7c8 sp=0xc00021d780 pc=0x562e59fe9ebf
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.gcenable.gowrap1()
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/mgc.go:204 +0x25 fp=0xc00021d7e0 sp=0xc00021d7c8 pc=0x562e59fde505
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.goexit({})
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/asm_amd64.s:1700 +0x1 fp=0xc00021d7e8 sp=0xc00021d7e0 pc=0x562e5a03c5a1
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: created by runtime.gcenable in goroutine 1
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/mgc.go:204 +0x66
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.gopark(0x10000?, 0xe6a20?, 0x0?, 0x0?, 0x0?)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/proc.go:424 +0xce fp=0xc00021df78 sp=0xc00021df58 pc=0x562e5a0341ce
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.goparkunlock(...)
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime/proc.go:430
Feb 28 13:55:16 iZ0jl5att67k7fqmbzp4j2Z ollama[189492]: runtime.(*scavengerState).p
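
The goroutine dump above is cut off mid-line, but what survives appears to show the runner blocked inside llama_decode via cgo while the main goroutine has been idle in Accept for 3 minutes, which is consistent with a hung runner process rather than an expired keep-alive timer. As a diagnostic sketch (assuming the server listens on the default localhost:11434), the scheduler's own view of loaded models and their expiry can be queried directly; an expires_at far in the future indicates the forever setting actually took effect:

# curl -s http://localhost:11434/api/ps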

OS: Linux
GPU: Nvidia
CPU: Intel
Ollama version: 0.5.12
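
One possibility worth ruling out: the keep_alive field on an individual API request overrides the server-side OLLAMA_KEEP_ALIVE setting, so an intermediate client such as fastGPT can silently shorten the retention window with its own value. A minimal sketch that pins it per request (the prompt is a placeholder):

# curl -s http://localhost:11434/api/generate -d '{"model": "deepseek-r1:671b", "prompt": "hi", "keep_alive": -1}'

If the model stays resident after this request but not after requests made through the client, the client's own keep_alive value is the likely culprit.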
