@Pouyanpi Pouyanpi commented Oct 17, 2025

Extends the LLM caching system to support topic safety input checks and content safety output checks. Both actions now cache their results along with LLM stats and metadata to improve performance on repeated queries.

Changes

  • Added caching support to topic_safety_check_input() with cache hit/miss logic
  • Added caching support to content_safety_check_output() with cache hit/miss logic (see the sketch after this list)
  • Both actions now extract and store LLM metadata alongside stats in cache entries
  • Added model_caches parameter to both actions for optional cache injection
  • Comprehensive test coverage for both new caching implementations
  • Tests verify cache hits, stats restoration, and metadata handling
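For illustration, here is a minimal, self-contained sketch of the hit/miss flow described above. The action name `content_safety_check_output()` and the `model_caches` parameter come from the PR description; the cache key scheme, entry layout, and the `llm_call` stand-in are assumptions, not the actual implementation.

```python
import hashlib
from typing import Any, Dict, Optional


def _cache_key(text: str) -> str:
    # Assumed keying scheme: hash of the text being checked.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


async def content_safety_check_output(
    bot_response: str,
    llm_call: Any,                                    # stand-in for the actual LLM invocation
    model_caches: Optional[Dict[str, dict]] = None,   # new optional cache injection
) -> dict:
    cache = (model_caches or {}).get("content_safety")
    key = _cache_key(bot_response)

    if cache is not None and key in cache:
        # Cache hit: reuse the stored result and restore the LLM stats and
        # metadata captured when the entry was created, skipping the LLM call.
        entry = cache[key]
        return {"result": entry["result"],
                "llm_stats": entry["llm_stats"],
                "llm_metadata": entry["llm_metadata"]}

    # Cache miss: call the model, then store the result alongside stats/metadata.
    result, stats, metadata = await llm_call(bot_response)
    if cache is not None:
        cache[key] = {"result": result, "llm_stats": stats, "llm_metadata": metadata}
    return {"result": result, "llm_stats": stats, "llm_metadata": metadata}
```

The same pattern applies to topic_safety_check_input(), with the user input rather than the bot response as the keyed text.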

Dependencies

Part of Stack

This is PR 3/5 in the NeMoGuards caching feature stack.

@codecov-commenter

Codecov Report

✅ All modified and coverable lines are covered by tests.



@tgasser-nv tgasser-nv left a comment


Looks good, the implementation is clean and easy to understand. I have a couple of higher-level concerns about concurrent use of the cache (cc @hazai):

  1. Is communicating using global ContextVars thread-safe if we have multiple get() and set() operations running concurrently? (see the sketch after these questions)
  2. Do we need to lock the cache (preventing reads) when making get() calls to the cache?
  3. How did we prevent race conditions by design (since these are impossible to test for)?
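For reference, here is a small, self-contained illustration (editorial, not taken from the PR) of the ContextVar semantics behind question 1: each thread and each asyncio task operates on its own context, so concurrent set()/get() calls on the variable itself do not clobber each other across threads or tasks. The variable name below is made up for the example.

```python
import asyncio
import contextvars
import threading

llm_stats_var = contextvars.ContextVar("llm_stats_var", default=None)


def thread_worker(name: str) -> None:
    # A new thread gets its own context: it does not see the main thread's
    # value, and its set() below is invisible to every other thread.
    assert llm_stats_var.get() is None
    llm_stats_var.set({"owner": name})
    assert llm_stats_var.get() == {"owner": name}


async def task_worker(name: str) -> None:
    # Each asyncio task runs in a copy of the creating context, so set()
    # here does not leak into sibling tasks.
    llm_stats_var.set({"owner": name})
    await asyncio.sleep(0)
    assert llm_stats_var.get() == {"owner": name}


async def main() -> None:
    await asyncio.gather(task_worker("task-a"), task_worker("task-b"))


llm_stats_var.set({"owner": "main"})
threads = [threading.Thread(target=thread_worker, args=(f"thread-{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
asyncio.run(main())
assert llm_stats_var.get() == {"owner": "main"}  # main thread's value was never clobbered
```

Note that this only covers the ContextVar itself; a cache object shared between threads still needs its own synchronization around get()/put(), which is the substance of questions 2 and 3.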

@Pouyanpi Pouyanpi force-pushed the feat/cache-llm-metadata branch from e725d77 to fd873b7 on October 19, 2025 at 10:00
Base automatically changed from feat/cache-llm-metadata to develop on October 19, 2025 at 10:08
Extends the cache system to store and restore LLM metadata (model name and provider name) alongside cache entries. This allows cached results to maintain provenance information about which model and provider generated the original response.

  - Added LLMMetadataDict and LLMCacheData TypedDict definitions for type safety (see the sketch below)
  - Extended CacheEntry to include optional llm_metadata field
  - Implemented extract_llm_metadata_for_cache() to capture model and provider info from context
  - Implemented restore_llm_metadata_from_cache() to restore metadata when retrieving cached results
  - Updated get_from_cache_and_restore_stats() to handle metadata extraction and restoration
  - Added comprehensive test coverage for metadata caching functionality
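For illustration, a hedged, self-contained sketch of the pieces named in this commit message. The type and function names (LLMMetadataDict, LLMCacheData, CacheEntry, extract_llm_metadata_for_cache(), restore_llm_metadata_from_cache()) come from the message above; the field names, the stand-in context variable, and the function bodies are assumptions rather than the actual implementation.

```python
from contextvars import ContextVar
from typing import Optional, TypedDict


class LLMMetadataDict(TypedDict):
    model_name: str
    provider_name: str


class LLMCacheData(TypedDict, total=False):
    result: str
    llm_stats: dict
    llm_metadata: Optional[LLMMetadataDict]


class CacheEntry(TypedDict, total=False):
    value: LLMCacheData
    llm_metadata: Optional[LLMMetadataDict]  # new optional field


# Stand-in for the context that carries model/provider info during an LLM call.
llm_metadata_var: ContextVar[Optional[LLMMetadataDict]] = ContextVar(
    "llm_metadata_var", default=None
)


def extract_llm_metadata_for_cache() -> Optional[LLMMetadataDict]:
    """Capture model and provider info from the current context."""
    return llm_metadata_var.get()


def restore_llm_metadata_from_cache(entry: CacheEntry) -> None:
    """Put the cached model/provider info back into the context on a cache hit."""
    metadata = entry.get("llm_metadata")
    if metadata is not None:
        llm_metadata_var.set(metadata)
```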
…output checks

pre-commits
@Pouyanpi Pouyanpi force-pushed the feat/cache-safety-checks branch from fab35c0 to ef6444d on October 19, 2025 at 10:10
@Pouyanpi Pouyanpi merged commit b458254 into develop on Oct 19, 2025
7 checks passed
@Pouyanpi Pouyanpi deleted the feat/cache-safety-checks branch on October 19, 2025 at 10:18