Description
This issue somewhat overlaps with #27. However, I've opened a separate issue because the thread of #27 mentioned that support for other LLM models would be added by the end of May. Support was seemingly added with commit e849ee9, but I am unable to use several of the engines due to various bugs.
NeMo-Guardrails doesn't seem to work with the following engines:
huggingface_pipeline
huggingface_textgen_inference
gpt4all
I cannot confirm whether it works properly with any other engines; I have only tested these three and failed to interact with the model with each of them. I have included my configuration and output for gpt4all only; attempting to use the other two engines causes similar issues (the only change is the models entry in the YAML config, as sketched right after it below). If you are able to construct a chatbot with guardrails using any of these engines, please let me know and I will re-evaluate.
Here are my configurations and code:
./config/colang_config.co:
define user express greeting
  "hi"

define bot remove last message
  "(remove last message)"

define flow
  user ...
  bot respond
  $updated_msg = execute check_if_constitutional
  if $updated_msg != $last_bot_message
    bot remove last message
    bot $updated_msg

# Basic guardrail against insults.
define flow
  user express insult
  bot express calmly willingness to help
./config/yaml_config.yaml:
models:
  - type: main
    engine: gpt4all
    model: gpt4all-j-v1.3-groovy
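
For the other two engines, I only swap the engine and model values in this file. In case it makes reproducing easier, here is a rough inline equivalent in Python; the Hugging Face model names are placeholders rather than the exact values I used, and I'm assuming RailsConfig.from_content accepts the same content that from_path reads from disk:

from nemoguardrails.rails import LLMRails, RailsConfig

COLANG = open("./config/colang_config.co").read()

YAML_TEMPLATE = """
models:
  - type: main
    engine: {engine}
    model: {model}
"""

# Placeholder model names for the Hugging Face engines; only the gpt4all entry
# matches my actual setup.
for engine, model in [
    ("gpt4all", "gpt4all-j-v1.3-groovy"),
    ("huggingface_pipeline", "gpt2"),
    ("huggingface_textgen_inference", "some-hosted-model"),
]:
    config = RailsConfig.from_content(
        colang_content=COLANG,
        yaml_content=YAML_TEMPLATE.format(engine=engine, model=model),
    )
    rails = LLMRails(config)  # each engine fails for me around this point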
./demo_guardrails.py:
from nemoguardrails.rails import LLMRails, RailsConfig

def demo():
    # In practice, a folder will be used with the config split across multiple files.
    config = RailsConfig.from_path("./config")
    rails = LLMRails(config)

    # For chat
    new_message = rails.generate(messages=[{
        "role": "user",
        "content": "Hello! What can you do for me?"
    }])
    print("RESPONSE:", new_message)

if __name__ == "__main__":
    demo()
After running demo_guardrails.py with the above configurations, I receive the following output:
Traceback (most recent call last):
  File "/Users/efkan/anaconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/efkan/anaconda3/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
    cli.main()
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "/Users/efkan/Desktop/repos/ibm-repos-private/demo_guardrails.py", line 17, in <module>
    demo()
  File "/Users/efkan/Desktop/repos/ibm-repos-private/demo_guardrails.py", line 6, in demo
    rails = LLMRails(config)
  File "/Users/efkan/anaconda3/lib/python3.10/site-packages/nemoguardrails/rails/llm/llmrails.py", line 79, in __init__
    self._init_llm()
  File "/Users/efkan/anaconda3/lib/python3.10/site-packages/nemoguardrails/rails/llm/llmrails.py", line 143, in _init_llm
    self.llm = provider_cls(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for GPT4All
__root__
  Model.__init__() got an unexpected keyword argument 'n_parts' (type=type_error)
It seems that the code fails when initializing the GPT4All model; judging by the validation error, LangChain's GPT4All wrapper appears to pass an n_parts keyword argument that the underlying gpt4all Model class no longer accepts. Please let me know if I've missed anything. Thanks!
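
If it helps narrow things down, I would expect the same validation error to be reproducible with LangChain's GPT4All wrapper alone, without NeMo-Guardrails in the loop. A minimal sketch (the model path is just a placeholder for wherever the .bin file lives locally):

from langchain.llms import GPT4All

# Construct the same LangChain wrapper class that NeMo-Guardrails instantiates
# via provider_cls(**kwargs) in LLMRails._init_llm(); the pydantic validation
# should fail here in the same way if the problem is in the wrapper itself.
llm = GPT4All(model="./models/ggml-gpt4all-j-v1.3-groovy.bin")  # placeholder path
print(llm("Hello! What can you do for me?"))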