English | 简体中文
Ollama is a cross-platform inference framework (macOS, Windows, Linux) designed for seamless deployment of large language models (LLMs) such as Llama 2, Mistral, LLaVA, and more. With one-click setup, Ollama runs LLMs locally, keeping all interaction data on your own machine to improve data privacy and security.
This project provides the open-source image product Ollama Inference Framework, which comes with the Ollama inference framework, its runtime environment, and deployment templates pre-installed. Follow the user guide for an efficient, out-of-the-box experience.
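Because inference runs entirely on the local machine, applications can talk to Ollama over its local HTTP API. The sketch below is illustrative only; it assumes the Ollama service is listening on its default port 11434 and that a model such as `llama2` has already been pulled:

```python
import json
import urllib.request

# Minimal sketch of a local inference call. Assumes the Ollama service is
# running on its default port (11434) and that the "llama2" model has
# already been pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama2",            # any locally pulled model name
    "prompt": "Why is the sky blue?",
    "stream": False,              # return one complete JSON response
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read().decode("utf-8"))
    # The generated text is returned in the "response" field.
    print(body["response"])
```

No data leaves the machine in this exchange; both client and model run locally.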
System Requirements:
- CPU: 2 vCPUs or more
- RAM: 4 GB or more
- Disk: at least 40 GB
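A quick way to confirm that a target host meets these minimums is sketched below (an illustrative, Linux-only check; the thresholds mirror the list above, and the `/proc/meminfo` parsing assumes a Linux host):

```python
import os
import shutil

# Illustrative sketch: check whether a Linux host meets the minimum
# requirements listed above (2 vCPUs, 4 GB RAM, 40 GB disk).
MIN_VCPUS = 2
MIN_RAM_GB = 4
MIN_DISK_GB = 40

vcpus = os.cpu_count() or 0

with open("/proc/meminfo") as f:
    # First line looks like: "MemTotal:       16284004 kB"
    mem_kb = int(f.readline().split()[1])
ram_gb = mem_kb / 1024 / 1024

disk_gb = shutil.disk_usage("/").total / (1024 ** 3)

print(f"vCPUs: {vcpus} (need >= {MIN_VCPUS})")
print(f"RAM:   {ram_gb:.1f} GB (need >= {MIN_RAM_GB})")
print(f"Disk:  {disk_gb:.1f} GB (need >= {MIN_DISK_GB})")

if vcpus >= MIN_VCPUS and ram_gb >= MIN_RAM_GB and disk_gb >= MIN_DISK_GB:
    print("Host meets the minimum requirements.")
else:
    print("Host is below the minimum requirements.")
```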
Prerequisites: register a Huawei account and activate Huawei Cloud.
| Image Specification | Features | Remarks |
| --- | --- | --- |
| Ollama-v0.9.2-kunpeng | Deployed on Kunpeng Cloud Server + Ubuntu 24.04 64bit / Huawei Cloud EulerOS 2.0 64bit | |
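After launching a server from the image, you can confirm the pre-installed version over the local API (a minimal sketch; assumes the Ollama service is already running on its default port 11434):

```python
import json
import urllib.request

# Illustrative check that the pre-installed service is up and reports the
# expected version. Assumes the Ollama service is on its default port.
with urllib.request.urlopen("http://localhost:11434/api/version") as resp:
    version = json.loads(resp.read().decode("utf-8"))["version"]

print(f"Ollama version: {version}")  # e.g. "0.9.2" for Ollama-v0.9.2-kunpeng
```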
- For further questions, open an issue or contact Huawei Cloud Marketplace product support
- Other open-source images can be found at open-source-image-repos
- To contribute, fork this repository and submit merge requests
- Keep README.md up to date with your open-source image information