I get the error "'text': 'dict' object cannot be converted to 'PyString'" while composing an LCEL pipeline #25539
Replies: 6 comments 5 replies
-
To avoid the "'text': 'dict' object cannot be converted to 'PyString'" error, make sure the chat history is injected by `RunnableWithMessageHistory` rather than passed into the prompt by hand, so each prompt variable receives a string (or a list of messages) instead of a nested dict. A working setup:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a pirate. Answer the following questions as best you can."),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
    ]
)

history = InMemoryChatMessageHistory()

def get_history():
    return history

chain = prompt | ChatOpenAI() | StrOutputParser()

wrapped_chain = RunnableWithMessageHistory(
    chain,
    get_history,
    history_messages_key="chat_history",
)

wrapped_chain.invoke({"input": "how are you?"})
```

In this setup, the wrapper fetches the stored messages via `get_history` and fills the `{chat_history}` placeholder itself, so you never append the history to the invoke input manually. You can also refer to the LangChain documentation on message history for more detailed guidance on managing chat history in conversational applications.
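To make the wiring concrete, here is a minimal pure-Python stand-in (no LangChain, all names hypothetical) for what `RunnableWithMessageHistory` does with `history_messages_key`: before the chain runs, it merges the stored messages into the input dict under that key.

```python
# Stand-in for the message store; RunnableWithMessageHistory would use
# the object returned by get_history() instead.
history: list[tuple[str, str]] = []

def wrap_with_history(inputs: dict, history_messages_key: str = "chat_history") -> dict:
    # Merge the stored messages into the input under the configured key,
    # mirroring what the wrapper does before invoking the inner chain.
    merged = dict(inputs)
    merged[history_messages_key] = list(history)
    return merged

first = wrap_with_history({"input": "how are you?"})   # chat_history is []
history.append(("human", "how are you?"))
second = wrap_with_history({"input": "and now?"})      # chat_history has one turn
```

Because the wrapper owns this merge step, the caller only ever passes `{"input": ...}`, which is why appending the history yourself tends to produce malformed inputs.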
-
I am having this same issue. It seems to appear only when using a retriever with multiple input parameters as runnable passthroughs. The chain does not work with the retriever in place, but it does work if I remove the retriever. langchain = "0.3.0"
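A common cause in this multi-input shape is the retriever receiving the whole input dict instead of a single query string. Below is a hedged, pure-Python sketch of the usual fix, routing one field into the retriever with `itemgetter` (as `itemgetter("question") | retriever` would in LCEL); `fake_retriever` is a hypothetical stand-in, not a real LangChain API.

```python
from operator import itemgetter

def fake_retriever(query: str) -> list[str]:
    # Stand-in retriever: fails loudly if it receives anything but a string,
    # which is exactly the failure mode described in this thread.
    if not isinstance(query, str):
        raise TypeError(
            f"argument 'text': {type(query).__name__!r} object cannot be converted to 'PyString'"
        )
    return [f"doc about {query}"]

inputs = {"question": "What is RAG?", "lang": "en"}

# Route only the "question" field into the retriever:
docs = fake_retriever(itemgetter("question")(inputs))
```

If the retriever were called with `inputs` directly, it would raise the same "cannot be converted to 'PyString'" error shown in the title.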
-
@dosu I still get the same error
-
I'm experiencing the same error. Does anyone know how to fix it?
-
This should work:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, chain

model = None      # replace with your chat model, e.g. ChatOpenAI()
retriever = None  # replace with your retriever

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a chatbot. Answer the following questions based on the context."),
        ("human", "Based on this context:{context}\nRespond to this question: {question}"),
    ]
)

def format_docs(docs):
    """Format the retrieved documents into a single string."""
    return "\n".join(doc.page_content for doc in docs)

@chain
def run_retriever(input_):
    """Run the retriever to get the relevant documents."""
    docs = retriever.invoke(input_["context_question"])
    return format_docs(docs)

rag_chain = RunnablePassthrough.assign(context=run_retriever) | prompt | model | StrOutputParser()

output = rag_chain.invoke({"context_question": "What is RAG?", "question": "What is the context about?"})
```
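As a quick sanity check of the `format_docs` helper above, here is a self-contained version with a tiny stand-in `Doc` class (hypothetical, since `page_content` is the only attribute the helper touches):

```python
from dataclasses import dataclass

@dataclass
class Doc:
    page_content: str  # the only attribute format_docs reads

def format_docs(docs):
    """Format the retrieved documents into a single string."""
    return "\n".join(doc.page_content for doc in docs)

joined = format_docs([
    Doc("RAG combines retrieval with generation."),
    Doc("Context improves answers."),
])
```

The key point is that `run_retriever` returns this joined string, so the `{context}` prompt variable is filled with a `str` rather than a list of `Document` objects or a dict.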
-
TypeError: argument 'text': 'dict' object cannot be converted to 'PyString'

Solution: pass the question string directly instead of a dict:

```python
result = chain.invoke(user_question)
```
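To see why passing the string fixes it, here is a minimal pure-Python reproduction (assumption: the real error originates in a tokenizer-like native function that only accepts `str`; `encode` here is a hypothetical stand-in, not the real tokenizer):

```python
def encode(text: str) -> list[int]:
    # Stand-in for a str-only native function such as a tokenizer's encode().
    if not isinstance(text, str):
        raise TypeError(
            f"argument 'text': {type(text).__name__!r} object cannot be converted to 'PyString'"
        )
    return [ord(c) for c in text]

ok = encode("What is RAG?")               # a plain string works
try:
    encode({"question": "What is RAG?"})  # a dict reproduces the error
except TypeError as err:
    message = str(err)
```

So `chain.invoke(user_question)` works where `chain.invoke({"question": user_question})` fails whenever the chain forwards its raw input to a string-only consumer.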
-
Description
I get the error in the title when trying to append the chat_history while calling the invoke method.
Am I composing the pipeline incorrectly? Where should I put the chat_history?