Manage embeddings models
Embeddings model configurations are stored and can be reused. For simplicity, the term "embeddings models" is used as a synonym for embeddings model configurations. Embeddings models can be local models (run by llama-vscode) or externally run servers. They have the following properties: name, local start command (the llama-server command to start a server with this model locally), ai model (as required by the provider), endpoint, and is key required.
Embeddings models configurations could be added/deleted/viewed/selected/deselected/added from huggingface/exported/imported
Select "Embeddings models..." from llama-vscode menu
Add models
Enter the requested properties.
For local models, name, local start command, and endpoint are required.
For external servers, name and endpoint are required.
Use models that support embeddings, for example Nomic-Embed-Text-V2-GGUF. A possible local start command is shown below.
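A minimal sketch, assuming the GGUF file is already on disk and that your llama-server build supports these flags (the model path is an example):

```bash
# Start a llama.cpp server in embeddings mode on port 8080.
# The model path is an example - point it at your own GGUF file.
llama-server -m ./nomic-embed-text-v2.gguf --embedding --port 8080
```

The endpoint property would then be http://localhost:8080.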
Delete models
Select the model you want to delete from the list and delete it.
View
Select a model from the list to view all of its details.
Select
Select a model from the list. If the model is a local one (it has a command in local start command), a llama.cpp server with this model will be started. Only one embeddings model can be selected at a time.
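To verify that the started server is answering, you can query it directly. A sketch, assuming llama-server's OpenAI-compatible /v1/embeddings route and the endpoint from the example above:

```bash
# Ask the running server for an embedding vector
# (the endpoint and port are assumptions from the earlier example).
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "hello world"}'
```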
Deselect
Deselect the currently selected model. If the model is local, the llama.cpp server will be stopped.
Add model from Hugging Face
Enter search words to find a model on Hugging Face. If the model is then selected, it will be downloaded automatically (if not already done) and a llama.cpp server will be started with it.
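The effect is comparable to letting llama-server fetch the model itself via its -hf flag; a rough sketch, where the repository name is an example and not necessarily the one the extension resolves:

```bash
# Download the GGUF from Hugging Face (cached after the first run)
# and start an embeddings server with it.
llama-server -hf ggml-org/Nomic-Embed-Text-V2-GGUF --embedding --port 8080
```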
Export
A model can be exported as a .json file. This file can be shared with other users, modified if needed, and imported again. Select a model to export it.
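As an illustration, an exported file could look roughly like this, with one field per property listed above (the field names here are an assumption, not llama-vscode's guaranteed format):

```json
{
  "name": "Nomic Embed Text V2 (local)",
  "localStartCommand": "llama-server -m ./nomic-embed-text-v2.gguf --embedding --port 8080",
  "aiModel": "",
  "endpoint": "http://localhost:8080",
  "isKeyRequired": false
}
```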
Import
A model can be imported from a .json file. Select a file to import it.