⚡️ Speed up function `_get_mode_intro_message` by 31% (#395)
📄 31% (0.31x) speedup for `_get_mode_intro_message` in `marimo/_server/ai/prompts.py`

⏱️ Runtime: 167 microseconds → 127 microseconds (best of 84 runs)

📝 Explanation and details
The optimization replaces runtime string concatenation with pre-constructed module-level constants. In the original code, each function call performed string formatting (`f"{base_intro}"`) and concatenated multiple string literals at runtime. The optimized version eliminates this overhead by pre-building the complete intro messages as the module constants `_MANUAL_INTRO` and `_ASK_INTRO`.

Key changes: the `base_intro` variable and the f-string formatting operations were removed.

Why this is faster:
String concatenation and formatting in Python involves memory allocation and copying operations at runtime. By moving this work to module import time (which happens once), each function call now only performs a simple constant lookup and return, eliminating the repeated string operations.
Performance characteristics:
The optimization shows consistent 30-80% speedup across all test cases, with particularly strong gains on repeated calls (up to 79% faster). This makes it especially beneficial for high-frequency usage patterns where the same mode is requested multiple times, as evidenced by the large batch tests showing 30-31% improvements even across 500 calls.
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-_get_mode_intro_message-mh5k75a1` and push.