Model selection
igardev edited this page Aug 14, 2025
At any given time, at most one model can be selected (having no model selected is also possible). If a model is selected, llama-vscode assumes that model is available at the endpoint configured for it. If the selected model is local, selecting it starts a llama.cpp server with that model. This makes it clear which model will be used for which purpose.
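For a local model, the extension effectively launches a llama.cpp server for it. A minimal sketch of what such a launch looks like from the command line (the model path, port, and flags here are illustrative assumptions, not necessarily the exact invocation llama-vscode uses):

```shell
# Start a llama.cpp server for a local model (paths and flags are illustrative).
# -m     : path to the GGUF model file
# --port : port that will serve as the model's endpoint
llama-server -m ./models/my-model.gguf --port 8012
```

Once the server is up, its status can be checked with `curl http://localhost:8012/health`.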
There are different ways to select a model:
- In Llama Agent, click the button for selecting a model (completion, chat, embeddings, or tools)
- In the llama-vscode menu, select "Completion models..." (or the corresponding entry for chat, embeddings, or tools)
- Select an env. This selects the models that are part of that env.