Hi, thank you for your work. I'm trying to use the CodeT5+ model types plus-16B and plus-6B. However, when running, I get this error:
```
ValueError: CodeT5pEncoderDecoderModel does not support `device_map='auto'`. To implement support, the model class needs to implement the `_no_split_modules` attribute.
```
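For context, `_no_split_modules` is a class attribute that transformers/accelerate consult when sharding a model with `device_map='auto'`: it lists the submodule class names that must stay whole on a single device. A minimal illustration of what the error is asking for (the class and block names below are hypothetical, not taken from the CodeT5+ code):

```python
from transformers import PreTrainedModel

class MyEncoderDecoderModel(PreTrainedModel):
    # Hypothetical sketch: declaring _no_split_modules tells accelerate
    # which blocks must be placed whole on one device when
    # device_map='auto' shards the model across GPUs/CPU.
    _no_split_modules = ["MyTransformerBlock"]
```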
The code I'm using is the same as provided in the examples:
```python
from codetf.models import load_model_pipeline

code_generation_model = load_model_pipeline(
    model_name="codet5",
    task="pretrained",
    model_type="plus-6B",
    is_eval=True,
    load_in_8bit=True,
    load_in_4bit=False,
    weight_sharding=False,
)

result = code_generation_model.predict(["def print_hello_world():"])
print(result)
```
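As a possible workaround until the pipeline supports this, the checkpoint could be loaded directly with transformers without `device_map='auto'`, so the missing `_no_split_modules` attribute is never consulted. A sketch, assuming the Hugging Face checkpoint `Salesforce/codet5p-6b` and a GPU with enough memory for the fp16 weights:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "Salesforce/codet5p-6b"  # assumed checkpoint for model_type="plus-6B"
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# trust_remote_code is needed because CodeT5pEncoderDecoderModel lives in
# the checkpoint's own modeling code, not in the transformers library.
model = AutoModelForSeq2SeqLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to(device)

encoding = tokenizer("def print_hello_world():", return_tensors="pt").to(device)
# The CodeT5+ encoder-decoder remote code expects decoder_input_ids to be set.
encoding["decoder_input_ids"] = encoding["input_ids"].clone()
outputs = model.generate(**encoding, max_length=15)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This avoids 8-bit loading and multi-device sharding entirely, so it only helps if the model fits on one device.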