How to get the raw LLM request? #2737
Replies: 2 comments 2 replies
-
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: You can see the raw LLM request by enabling verbose logging, implementing a `before_model_callback`, or using a logging plugin.

Hello! As an AI assistant for the ADK, I can provide some information on how to view the raw request sent to the LLM. Here are a few methods you can use:

**Verbose Logging**

The most straightforward way to see the full LLM prompt is by enabling verbose logging. When the log level is set to `DEBUG`, the request sent to the model is written to the logs. You can enable this in your Python script:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
```

**Using Callbacks**

For more granular control, you can use a `before_model_callback`. The callback receives the `LlmRequest` object just before it is sent to the model, so you can print or inspect it there.

Here is an example of how you can implement such a callback (import paths may differ slightly between ADK versions):

```python
from google.adk.agents.callback_context import CallbackContext
from google.adk.models import LlmRequest


def log_llm_request(callback_context: CallbackContext, llm_request: LlmRequest):
    """A callback to print the LLM request."""
    print("LLM Request:")
    print(llm_request)
```

You can then add this callback to your agent or runner. You can find a more detailed example in the documentation. [4]

**Custom Logging Plugins**

If you want to apply this logging behavior across multiple agents or runs, you can create a custom plugin. Plugins allow you to package callbacks and apply them globally. [5, 6] You can create a new plugin by inheriting from the `BasePlugin` class and overriding its model hooks. By creating a custom plugin, you can easily enable or disable this logging behavior as needed without modifying your agent code. [10]

I hope this information helps you experiment with and debug your prompts!

References:
[1] https://google.github.io/adk-docs/observability/logging/
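To make the plugin idea concrete without requiring an ADK install, here is a dependency-free sketch of the pattern. The class and hook names mirror ADK's `BasePlugin`/`before_model_callback` convention, but the stand-in `LlmRequest` and the exact signatures are assumptions; check them against the real `BasePlugin` source before relying on them.

```python
from dataclasses import dataclass, field
from typing import Any, Optional


# Stand-in for ADK's LlmRequest (an assumption; the real class is a
# richer pydantic model).
@dataclass
class LlmRequest:
    model: str = ""
    system_instruction: str = ""
    contents: list = field(default_factory=list)


class RequestLoggerPlugin:
    """Sketch of a plugin that logs every outgoing model request.

    Mirrors the shape of a before-model hook: inspect the request,
    then return None so the (hypothetical) runner proceeds unchanged.
    """

    def __init__(self) -> None:
        self.requests: list = []

    def before_model_callback(
        self, *, callback_context: Any, llm_request: LlmRequest
    ) -> Optional[Any]:
        self.requests.append(llm_request)
        print("LLM Request:")
        print(llm_request)
        return None  # None means "do not short-circuit the model call"


plugin = RequestLoggerPlugin()
plugin.before_model_callback(
    callback_context=None,
    llm_request=LlmRequest(
        model="gemini-2.0-flash",
        system_instruction="Answer briefly.",
        contents=["What is ADK?"],
    ),
)
```

With the real library you would subclass `BasePlugin` and hand the plugin instance to your runner, so the hook fires for every agent under that runner without touching agent code.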
-
The logging plugin is here: https://github.com/google/adk-python/blob/main/src/google/adk/plugins/logging_plugin.py
-
Challenge/request
I am trying to see the raw request that is sent to the LLM for any given agent call. I want the exact text sent to the model because that makes experimentation much easier: I can use direct LLM calls locally to test prompt structure and see what works best, before working out how to format and structure things in ADK.
Attempted so far
I have used verbose logging and different callbacks to print out various pieces of context, such as `llm_request.contents`, `llm_request.config.system_instruction`, and others, but it's not clear which of these pieces are actually sent in the LLM request, in what structure, and in what order.

Is there a way of seeing exactly what is sent to the LLM? Or could the documentation specify which fields of the `llm_request` are sent and how they are structured in the prompt? It might also be nice to have a dedicated field on `llm_request` containing the full, unstructured text of what is sent in the request.
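For example, right now I am reconstructing my best guess by hand from the pieces a callback exposes, roughly like this stdlib-only sketch. The field names come from the attributes mentioned above, but the ordering and layout are my assumption, which is exactly what I'd like to confirm:

```python
import json


def render_llm_request(system_instruction, contents):
    """Assemble one human-readable blob from the request pieces a
    before-model callback exposes.  How ADK actually orders and
    serializes these on the wire is the open question here."""
    payload = {
        "system_instruction": system_instruction,
        "contents": contents,
    }
    return json.dumps(payload, indent=2)


text = render_llm_request(
    system_instruction="You are a helpful agent.",
    contents=[{"role": "user", "parts": [{"text": "Hello"}]}],
)
print(text)
```

A canonical version of this, produced by ADK itself, would remove the guesswork.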