feat(cache): add caching support for topic safety and content safety output checks #1457
Merged
Conversation
Codecov Report: ✅ All modified and coverable lines are covered by tests.
tgasser-nv approved these changes Oct 17, 2025
Looks good, the implementation is clean and easy to understand. I have a couple of higher-level concerns about concurrent use of the cache (cc @hazai):
- Is communicating using global ContextVars thread-safe if we have multiple get() and set() operations running concurrently?
- Do we need to lock the cache (preventing reads) when making get() calls to the cache?
- How did we prevent race conditions by design (since these are impossible to test out)?
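One way to address the locking question is sketched below. This is an illustration, not the PR's implementation: the `ThreadSafeLRUCache` name and its internals are hypothetical. Note that `ContextVar` values are per-thread and per-asyncio-task (each task runs in a copy of the context), so reading them concurrently is safe; it is the shared cache object itself that needs synchronization.

```python
# A minimal sketch (hypothetical, not the PR's code): serialize get()/set()
# on the shared cache with a threading.Lock so concurrent callers cannot
# interleave reads with evictions or writes.
import threading
from collections import OrderedDict
from typing import Any, Optional


class ThreadSafeLRUCache:
    """LRU cache whose get/set are safe to call from multiple threads."""

    def __init__(self, maxsize: int = 1024) -> None:
        self._data: "OrderedDict[str, Any]" = OrderedDict()
        self._maxsize = maxsize
        self._lock = threading.Lock()

    def get(self, key: str) -> Optional[Any]:
        with self._lock:
            if key not in self._data:
                return None
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]

    def set(self, key: str, value: Any) -> None:
        with self._lock:
            self._data[key] = value
            self._data.move_to_end(key)
            if len(self._data) > self._maxsize:
                self._data.popitem(last=False)  # evict least recently used
```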
tgasser-nv reviewed Oct 17, 2025
Extends the cache system to store and restore LLM metadata (model name and provider name) alongside cache entries. This allows cached results to maintain provenance information about which model and provider generated the original response.

- Added LLMMetadataDict and LLMCacheData TypedDict definitions for type safety
- Extended CacheEntry to include an optional llm_metadata field
- Implemented extract_llm_metadata_for_cache() to capture model and provider info from context
- Implemented restore_llm_metadata_from_cache() to restore metadata when retrieving cached results
- Updated get_from_cache_and_restore_stats() to handle metadata extraction and restoration
- Added comprehensive test coverage for metadata caching functionality
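For concreteness, here is a hedged sketch of the shapes this commit message describes. `LLMMetadataDict`, `LLMCacheData`, `CacheEntry`, `extract_llm_metadata_for_cache()`, and `restore_llm_metadata_from_cache()` are named in the commit; every field name and the ContextVar plumbing below are assumptions, and the real definitions in the PR may differ.

```python
from contextvars import ContextVar
from typing import Any, Dict, Optional, TypedDict

# Hypothetical context variables holding the active model/provider names;
# the PR captures equivalent information "from context" per the commit message.
llm_model_name: ContextVar[Optional[str]] = ContextVar("llm_model_name", default=None)
llm_provider_name: ContextVar[Optional[str]] = ContextVar("llm_provider_name", default=None)


class LLMMetadataDict(TypedDict):
    """Provenance for a cached result (field names are assumptions)."""
    model_name: Optional[str]
    provider_name: Optional[str]


class LLMCacheData(TypedDict):
    """Hypothetical payload grouping stats and metadata for an entry."""
    llm_stats: Optional[Dict[str, Any]]
    llm_metadata: Optional[LLMMetadataDict]


class CacheEntry(TypedDict, total=False):
    """Cache entry with the optional llm_metadata field the commit adds."""
    result: str
    llm_stats: Dict[str, Any]
    llm_metadata: LLMMetadataDict


def extract_llm_metadata_for_cache() -> LLMMetadataDict:
    """Capture model/provider info from context for storage in the cache."""
    return {
        "model_name": llm_model_name.get(),
        "provider_name": llm_provider_name.get(),
    }


def restore_llm_metadata_from_cache(entry: CacheEntry) -> None:
    """Push cached provenance back into context on a cache hit."""
    metadata = entry.get("llm_metadata")
    if metadata is None:
        return
    llm_model_name.set(metadata["model_name"])
    llm_provider_name.set(metadata["provider_name"])
```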
feat(cache): add caching support for topic safety and content safety output checks

Extends the LLM caching system to support topic safety input checks and content safety output checks. Both actions now cache their results along with LLM stats and metadata to improve performance on repeated queries.

Changes
- Added caching support to topic_safety_check_input() with cache hit/miss logic
- Added caching support to content_safety_check_output() with cache hit/miss logic
- Both actions now extract and store LLM metadata alongside stats in cache entries
- Added model_caches parameter to both actions for optional cache injection
- Comprehensive test coverage for both new caching implementations
- Tests verify cache hits, stats restoration, and metadata handling
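The last two bullets describe test behavior. A minimal, self-contained sketch of such a cache-hit test (all names here are hypothetical stand-ins, not the PR's test code) could look like:

```python
import asyncio


class SimpleCache:
    """Tiny stand-in for an injectable per-model cache (hypothetical)."""

    def __init__(self) -> None:
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value


def test_second_call_is_served_from_cache():
    llm_calls = {"count": 0}

    async def fake_llm(prompt: str) -> str:
        # Counts invocations so the test can prove the cache was used.
        llm_calls["count"] += 1
        return "safe"

    async def check(prompt: str, cache: SimpleCache) -> str:
        cached = cache.get(prompt)
        if cached is not None:
            return cached  # cache hit: no LLM call
        result = await fake_llm(prompt)
        cache.set(prompt, result)
        return result

    cache = SimpleCache()
    assert asyncio.run(check("hello", cache)) == "safe"
    assert asyncio.run(check("hello", cache)) == "safe"
    assert llm_calls["count"] == 1  # the second call hit the cache
```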
Extends the LLM caching system to support topic safety input checks and content safety output checks. Both actions now cache their results along with LLM stats and metadata to improve performance on repeated queries.
Changes
- Added caching support to topic_safety_check_input() with cache hit/miss logic
- Added caching support to content_safety_check_output() with cache hit/miss logic
- Added model_caches parameter to both actions for optional cache injection

Dependencies

Part of Stack
This is PR 3/5 in the NeMoGuards caching feature stack.
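The hit/miss flow both actions now follow might look like the sketch below. `content_safety_check_output()` and `model_caches` come from the PR text (on a hit, the real code restores stats and metadata via `get_from_cache_and_restore_stats()`); the key derivation, the `SimpleCache` helper, the `model_name` default, and the stubbed LLM result are assumptions.

```python
import asyncio
import hashlib
from typing import Any, Dict, Optional


class SimpleCache:
    """Tiny stand-in for an injected per-model cache (hypothetical)."""

    def __init__(self) -> None:
        self._data: Dict[str, Any] = {}

    def get(self, key: str) -> Optional[Any]:
        return self._data.get(key)

    def set(self, key: str, value: Any) -> None:
        self._data[key] = value


def make_cache_key(prompt: str) -> str:
    # Hash the rendered prompt so keys are stable and bounded in size
    # (an assumption; the PR may derive keys differently).
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()


async def content_safety_check_output(
    prompt: str,
    model_caches: Optional[Dict[str, SimpleCache]] = None,
    model_name: str = "content_safety",  # hypothetical default
) -> Dict[str, Any]:
    cache = (model_caches or {}).get(model_name)
    key = make_cache_key(prompt)

    # Cache hit: return the stored result. In the PR this path also
    # restores LLM stats and metadata for the current context.
    if cache is not None:
        cached = cache.get(key)
        if cached is not None:
            return cached

    # Cache miss: call the safety LLM (stubbed here) and store the result.
    result = {"allowed": True, "violated_policies": []}
    if cache is not None:
        cache.set(key, result)
    return result


if __name__ == "__main__":
    caches = {"content_safety": SimpleCache()}
    # First call misses and populates the cache; second call hits it.
    print(asyncio.run(content_safety_check_output("some bot response", caches)))
    print(asyncio.run(content_safety_check_output("some bot response", caches)))
```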