Is there a way to improve how often my agent uses its tool beyond the prompt? #54
Replies: 2 comments
-
I think I've improved the response rate with the XML-like design. TBD.
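For anyone curious, an XML-like instruction layout might look roughly like this; the tag names, tool, and rules are a hypothetical sketch, not the actual prompt:

```python
# Hypothetical XML-like instruction layout. The idea is that explicitly
# delimited sections make the tool-use contract easier for the model to
# follow than free-form prose.
INSTRUCTION = """
<role>
You are a support agent. You MUST answer order questions with a tool call.
</role>

<tools>
  <tool name="lookup_order">Use for ANY question about an order. Never answer from memory.</tool>
</tools>

<rules>
1. If the user mentions an order, call lookup_order first.
2. Only answer after the tool returns.
</rules>
"""
```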
-
Response from ADK Answering Agent

TL;DR: Improving tool-use reliability involves a combination of clear instructions, providing examples, using session state for context, and potentially fine-tuning for complex cases. Caching can help performance but won't fix the root issue of tool-use hallucination.

Hello! It's great that you're diving deep into improving your agent's performance. Reliably getting an agent to use a tool is a common challenge. Here are some strategies, based on ADK's design principles and best practices, that address your specific questions:
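For the "clear instructions" piece, here is a minimal sketch in the style of the agent-team tutorial cited in [1] below. It assumes the google-adk Python package; the weather tool, model string, and instruction wording are illustrative stand-ins for your actual setup:

```python
from google.adk.agents import Agent

def get_weather(city: str) -> dict:
    """Retrieves the current weather report for a specified city.

    Args:
        city: The name of the city, e.g. "London".

    Returns:
        A dict with a "status" key and either a "report" or an "error_message".
    """
    # Illustrative stub; a real tool would call a weather API here.
    return {"status": "success", "report": f"The weather in {city} is sunny."}

weather_agent = Agent(
    name="weather_agent",
    model="gemini-2.0-flash",
    description="Answers questions about the current weather in a city.",
    # Spell out exactly WHEN the tool must be called. Vague instructions are
    # a common reason the model answers from its own knowledge instead.
    instruction=(
        "You are a weather assistant. For ANY question about weather, you "
        "MUST call the get_weather tool. Never answer from memory. If the "
        "tool returns an error, relay the error message to the user."
    ),
    tools=[get_weather],
)
```

Note that the detailed docstring matters as much as the instruction: ADK derives the tool declaration the model sees from the function's name, signature, and docstring.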
Additional Recommendations from ADK Best Practices:
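On session state specifically, a tool can read and write shared state through ToolContext. A minimal sketch, with state keys made up for illustration:

```python
from google.adk.tools.tool_context import ToolContext

def get_weather_stateful(city: str, tool_context: ToolContext) -> dict:
    """Returns a weather report, honoring a unit preference kept in session state."""
    # Read a preference stored by an earlier turn or tool.
    unit = tool_context.state.get("user_preference_temperature_unit", "Celsius")
    report = f"The weather in {city} is 25 degrees {unit}."  # stub report
    # Write back so later turns and tools can build on this result.
    tool_context.state["last_weather_report"] = report
    return {"status": "success", "report": report}
```

When a tool's signature includes a `tool_context` parameter, ADK injects it automatically at call time, so the model never has to supply it.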
By combining these strategies, especially more detailed prompting with examples and leveraging session state, you should see a significant improvement in your agent's tool-use success rate.

[1] https://github.com/google/adk-docs/blob/main/docs/tutorials/agent-team.md
-
I'm seeing about a 44% success rate on using a tool when expected (a tool call is always expected).
The prompt is similar to this:
- Should I give it examples of the tool-call JSON it should generate?
- Is there a way to train these models (gemini-2.0-flash or greater)?
- Should I use memory in some way to make this better?
- Should I simply cache results that I look up and bypass the high hallucination count? (Rough sketch of this below.)
- Would provisioned compute help?
Some guidance I am considering: https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/prompt-design-strategies#sample-prompt-template
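For the caching option, this is roughly the shape I have in mind; `lookup_order` stands in for the real tool, and it only cuts repeated-lookup cost, it wouldn't make the model more likely to call the tool in the first place:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def _fetch_order(order_id: str) -> str:
    """Hypothetical expensive lookup; imagine an API or database call here."""
    return f"order {order_id}: shipped"  # stub result for illustration

def lookup_order(order_id: str) -> dict:
    """Tool exposed to the agent; repeated ids are served from the in-process
    cache, so re-prompts and retries don't pay the lookup cost twice."""
    return {"status": "success", "report": _fetch_order(order_id)}
```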