# Manage tools models
Tools model configurations are stored and can be reused. For simplicity, the term "tools models" is used as a synonym for tools model configurations. Tools models can describe local models (run by llama-vscode) or externally run servers. Each configuration has the following properties: name, local start command (the llama-server command to start a server with this model locally), ai model (as required by the provider), endpoint, and is key required.

Tools model configurations can be added, deleted, viewed, selected, deselected, added from Hugging Face, exported, and imported.
Select "Tools models..." from the llama-vscode menu.
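As an illustration, a tools model configuration with the properties listed above might look roughly like the following .json (the field names and values here are assumptions for illustration, not the exact schema used by llama-vscode):

```json
{
  "name": "gpt-oss-20b (local)",
  "localStartCommand": "llama-server -hf ggml-org/gpt-oss-20b-GGUF --port 8080 --jinja",
  "aiModel": "gpt-oss-20b",
  "endpoint": "http://127.0.0.1:8080",
  "isKeyRequired": false
}
```

For an external model, the local start command would be empty and the endpoint would point at the externally run server.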
## Add local model
Enter the requested properties. Name, local start command, and endpoint are required.
Use models that support tool calling, for example gpt-oss-20b-GGUF.
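A local start command is simply a llama-server invocation. A plausible example is sketched below (flags may vary with your llama.cpp version, and the repository name is an assumption):

```shell
# Download (if not cached) and serve a model from Hugging Face on port 8080.
# --jinja enables the chat template support typically needed for tool calling.
llama-server -hf ggml-org/gpt-oss-20b-GGUF --port 8080 --jinja
```

The endpoint property should then match the host and port used here, e.g. http://127.0.0.1:8080.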
## Add external model
Enter the requested properties. Name and endpoint are required. Use models that support tool calling.
## Delete models
Select the model you want to delete from the list and delete it.
## View
Select a model from the list to view all of its details.
## Select
Select a model from the list to make it the active one. If the model is local (has a command in local start command), a llama.cpp server with this model will be started. Only one tools model can be selected at a time.
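To verify that the server started after selecting a local model, you can query llama-server's health endpoint (assuming the default host and the port 8080 from your local start command):

```shell
curl http://127.0.0.1:8080/health
```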
## Deselect
Deselect the currently selected model. If the model is local, the llama.cpp server will be stopped.
## Add model from Hugging Face
Enter search words to find a model on Hugging Face. When the model is selected, it will be downloaded automatically (if not already done) and a llama.cpp server will be started with it.
## Export
A model can be exported as a .json file. This file can be shared with other users, modified if needed, and imported again. Select a model to export it.
## Import
A model can be imported from a .json file. Select a file to import it.