This node optimizes model inference performance in ComfyUI by leveraging the Intel OpenVINO toolkit.
It supports running models on Intel CPU, GPU, and NPU devices. You can find more detailed information in the OpenVINO System Requirements.
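To see which of these devices OpenVINO can actually target on your machine, you can query the runtime directly. This is a minimal sketch that assumes the `openvino` Python package is installed (`pip install openvino`) and uses its `Core.available_devices` property; it returns an empty list if the package is missing.

```python
def list_openvino_devices():
    """Return the OpenVINO device names available on this machine
    (e.g. ['CPU', 'GPU', 'NPU']), or [] if openvino is not installed."""
    try:
        from openvino import Core
    except ImportError:
        return []
    return Core().available_devices
```

Running `print(list_openvino_devices())` before setting up the workflow tells you which device names you can expect the node to offer.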
Prerequisites
- Install comfy-cli
The recommended installation method is via the Comfy Registry:
comfy node registry-install comfyui-openvino
Alternatively, this node can be installed via ComfyUI-Manager in the UI, or via the CLI:
comfy node install comfyui-openvino
This node can also be installed manually by cloning the repository into your custom_nodes
folder and then installing its dependencies:
cd ComfyUI/custom_nodes
git clone https://github.com/openvino-dev-samples/comfyui_openvino
cd comfyui_openvino
pip install -r requirements.txt
To use the OpenVINO node in ComfyUI, follow these steps:
- Start a ComfyUI server.
  - Launch from source:
    cd ComfyUI
    python3 main.py --cpu --use-pytorch-cross-attention
  - Launch from comfy-cli:
    comfy launch -- --cpu --use-pytorch-cross-attention
- Prepare a standard workflow in ComfyUI.
- Add the OpenVINO node.
- Connect the OpenVINO node to the Model/LoRA Loader.
- Run the workflow. Note that an additional warm-up inference may be needed after switching to a new model.
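Before running the workflow, it can help to confirm the ComfyUI server is actually reachable. The sketch below assumes ComfyUI's default port (8188) and its `/system_stats` HTTP endpoint; adjust `port` if you launched the server with a different `--port`.

```python
import json
from urllib import request, error

def comfyui_server_up(host="127.0.0.1", port=8188, timeout=2.0):
    """Return True if a ComfyUI server answers on /system_stats, else False.

    Assumption: default ComfyUI port 8188; pass port=... if you changed it.
    """
    url = f"http://{host}:{port}/system_stats"
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            json.load(resp)  # the endpoint returns a JSON stats object
            return True
    except (error.URLError, ValueError, OSError):
        return False
```

If this returns False, check that the server started without errors and that the port is not blocked before queuing the workflow.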