ComfyUI Error Report
Error Details
- Node ID: 2
- Node Type: STATIC_TRT_MODEL_CONVERSION
- Exception Type: TypeError
- Exception Message: a bytes-like object is required, not 'NoneType'
Stack Trace
File "/app/execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/app/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/custom_nodes/ComfyUI_TensorRT/tensorrt_convert.py", line 627, in convert
return super()._convert(
^^^^^^^^^^^^^^^^^
File "/app/custom_nodes/ComfyUI_TensorRT/tensorrt_convert.py", line 384, in _convert
f.write(serialized_engine)
System Information
- ComfyUI Version: 0.3.27
- Arguments: main.py --listen 0.0.0.0
- OS: posix
- Python Version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0]
- Embedded Python: false
- PyTorch Version: 2.6.0+cu124
Devices
- Name: cuda:0 Tesla V100-PCIE-16GB : cudaMallocAsync
- Type: cuda
- VRAM Total: 16935682048
- VRAM Free: 16350838784
- Torch VRAM Total: 33554432
- Torch VRAM Free: 25034752
Logs
2025-03-29T22:08:09.119165 - Checkpoint files will always be loaded safely.
2025-03-29T22:08:10.825141 - Total VRAM 16151 MB, total RAM 112686 MB
2025-03-29T22:08:10.825226 - pytorch version: 2.6.0+cu124
2025-03-29T22:08:10.825612 - Set vram state to: NORMAL_VRAM
2025-03-29T22:08:10.825901 - Device: cuda:0 Tesla V100-PCIE-16GB : cudaMallocAsync
2025-03-29T22:08:12.208600 - Using pytorch attention
2025-03-29T22:08:14.280069 - ComfyUI version: 0.3.27
2025-03-29T22:08:14.283359 - ****** User settings have been changed to be stored on the server instead of browser storage. ******
2025-03-29T22:08:14.283417 - ****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
2025-03-29T22:08:14.284077 - ComfyUI frontend version: 1.14.6
2025-03-29T22:08:14.284937 - [Prompt Server] web root: /usr/local/lib/python3.12/dist-packages/comfyui_frontend_package/static
2025-03-29T22:08:14.891447 -
Import times for custom nodes:
2025-03-29T22:08:14.891542 - 0.0 seconds: /app/custom_nodes/websocket_image_save.py
2025-03-29T22:08:14.891595 - 0.1 seconds: /app/custom_nodes/ComfyUI_TensorRT
2025-03-29T22:08:14.891640 -
2025-03-29T22:08:14.898721 - Starting server
2025-03-29T22:08:14.899020 - To see the GUI go to: http://0.0.0.0:8188
2025-03-29T22:08:14.899083 - Instrumentation key provided, emitting queue depth metric to azure
2025-03-29T22:08:15.214280 - Queue monitoring started
2025-03-29T22:09:13.334229 - got prompt
2025-03-29T22:09:13.446817 - model weight dtype torch.float16, manual cast: None
2025-03-29T22:09:13.447635 - model_type EPS
2025-03-29T22:09:13.754403 - Using pytorch attention in VAE
2025-03-29T22:09:13.755997 - Using pytorch attention in VAE
2025-03-29T22:09:13.943397 - VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
2025-03-29T22:09:14.021409 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-03-29T22:09:14.157284 - Requested to load BaseModel
2025-03-29T22:09:14.425388 - loaded completely 9.5367431640625e+25 1639.406135559082 True
2025-03-29T22:09:14.722331 - /app/comfy/ldm/modules/attention.py:447: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if SDP_BATCH_LIMIT >= b:
2025-03-29T22:09:14.780642 - /app/comfy/ldm/modules/diffusionmodules/openaimodel.py:132: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert x.shape[1] == self.channels
2025-03-29T22:09:15.107188 - /app/comfy/ldm/modules/diffusionmodules/openaimodel.py:90: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert x.shape[1] == self.channels
2025-03-29T22:09:29.260691 - No timing cache found; Initializing a new one.
2025-03-29T22:09:29.262557 - !!! Exception during processing !!! a bytes-like object is required, not 'NoneType'
2025-03-29T22:09:29.264426 - Traceback (most recent call last):
File "/app/execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/app/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/custom_nodes/ComfyUI_TensorRT/tensorrt_convert.py", line 627, in convert
return super()._convert(
^^^^^^^^^^^^^^^^^
File "/app/custom_nodes/ComfyUI_TensorRT/tensorrt_convert.py", line 384, in _convert
f.write(serialized_engine)
TypeError: a bytes-like object is required, not 'NoneType'
2025-03-29T22:09:29.265531 - Prompt executed in 15.93 seconds
2025-03-29T22:09:38.229828 - got prompt
2025-03-29T22:09:38.232402 - Requested to load BaseModel
2025-03-29T22:09:38.511050 - loaded completely 9.5367431640625e+25 1639.406135559082 True
2025-03-29T22:09:49.916097 - No timing cache found; Initializing a new one.
2025-03-29T22:09:49.917462 - !!! Exception during processing !!! a bytes-like object is required, not 'NoneType'
2025-03-29T22:09:49.918632 - Traceback (most recent call last):
File "/app/execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/app/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/custom_nodes/ComfyUI_TensorRT/tensorrt_convert.py", line 627, in convert
return super()._convert(
^^^^^^^^^^^^^^^^^
File "/app/custom_nodes/ComfyUI_TensorRT/tensorrt_convert.py", line 384, in _convert
f.write(serialized_engine)
TypeError: a bytes-like object is required, not 'NoneType'
2025-03-29T22:09:49.919951 - Prompt executed in 11.69 seconds
Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
{"id":"f61e5314-16b8-419f-b32a-4d4b18ba0b52","revision":0,"last_node_id":2,"last_link_id":1,"nodes":[{"id":1,"type":"CheckpointLoaderSimple","pos":[681.5532836914062,263.5812683105469],"size":[315,98],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"localized_name":"MODEL","name":"MODEL","type":"MODEL","links":[1]},{"localized_name":"CLIP","name":"CLIP","type":"CLIP","links":null},{"localized_name":"VAE","name":"VAE","type":"VAE","links":null}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["realisticVisionV51_v51VAE.safetensors"]},{"id":2,"type":"STATIC_TRT_MODEL_CONVERSION","pos":[1108.68310546875,270.84124755859375],"size":[340.20001220703125,178],"flags":{},"order":1,"mode":0,"inputs":[{"localized_name":"model","name":"model","type":"MODEL","link":1}],"outputs":[],"properties":{"Node name for S&R":"STATIC_TRT_MODEL_CONVERSION"},"widgets_values":["tensorrt/realism",1,768,512,1,14]}],"links":[[1,1,0,2,0,"MODEL"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.683013455365071,"offset":[-6.929903795422533,22.632130376695077]}},"version":0.4}
Additional Context
I get this error when converting a static TensorRT model on an NVIDIA V100. The same workflow runs successfully in an identical Docker container on an NVIDIA T4.
I am using nvcr.io/nvidia/cuda:12.8.1-cudnn-runtime-ubuntu24.04 as my base image.