Issue #699 - Metrics Enhancements

Overview

This document outlines the changes made to address a subset of the requirements in issue #699 (enhancing the metrics functionality in MCP Gateway) and provides instructions for testing those changes.

Changes Implemented

1. Enhanced Metrics Calculation Functions

Updated the following functions in mcpgateway/utils/metrics_common.py (a sketch of both follows the list):

  • calculate_success_rate(): Improved to handle edge cases such as:

    • Division by zero (when total is 0)
    • None inputs
    • Type conversion errors
  • format_response_time(): Enhanced to format response times with:

    • Consistent 3 decimal places (x.xxx)
    • Proper handling of None values
    • Consistent display of trailing zeros
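
A minimal sketch of what these helpers might look like, assuming the behavior described above (the exact signatures and return conventions in metrics_common.py may differ):

    def calculate_success_rate(successful, total):
        """Return the success rate as a percentage, or None when it cannot be computed."""
        if successful is None or total is None:
            return None
        try:
            successful = float(successful)
            total = float(total)
        except (TypeError, ValueError):
            return None  # type conversion error
        if total == 0:
            return None  # avoid division by zero; callers render this as "N/A"
        return (successful / total) * 100.0

    def format_response_time(value):
        """Format a response time with exactly 3 decimal places, or "N/A" for None."""
        if value is None:
            return "N/A"
        try:
            return f"{float(value):.3f}"  # trailing zeros kept, e.g. 1.5 -> "1.500"
        except (TypeError, ValueError):
            return "N/A"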

2. Admin API Enhancements

Modified mcpgateway/admin.py (see the sketch after this list) to:

  • Export ALL rows in metrics CSV exports (previously limited to top 5)
  • Format response times consistently with 3 decimal places
  • Properly handle empty states (display "N/A" when no data exists)
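
As a rough illustration of those three behaviors (the function and field names here are assumptions, not the actual admin.py code):

    import csv
    import io

    from mcpgateway.utils.metrics_common import calculate_success_rate, format_response_time

    def metrics_to_csv(rows):
        """Serialize every metrics row (no top-5 cap) using the shared formatters."""
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["name", "success_rate", "avg_response_time"])
        for row in rows:  # all rows, not a slice of the first five
            rate = calculate_success_rate(row.get("successful"), row.get("total"))
            writer.writerow([
                row["name"],
                "N/A" if rate is None else f"{rate:.1f}%",
                format_response_time(row.get("avg_response_time")),  # always x.xxx or "N/A"
            ])
        return buf.getvalue()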

3. Services Improvements

Updated metrics retrieval in the service modules (see the sketch after this list) to support:

  • Optional limit parameter to retrieve any number of results
  • Unlimited results when exporting metrics data
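
The pattern in plain-Python form (the real services query the database, and the function and field names below are stand-ins, but the optional-limit idea is the same):

    from typing import List, Optional

    def get_tool_metrics(all_rows: List[dict], limit: Optional[int] = 5) -> List[dict]:
        """Return the slowest tools first; limit=None (the export path) returns everything."""
        rows = sorted(all_rows, key=lambda r: r.get("avg_response_time") or 0.0, reverse=True)
        return rows if limit is None else rows[:limit]

The export code path then passes limit=None, while the UI keeps its default.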

How to Test

Follow these steps to verify the metrics enhancements:

Prerequisites

  1. Create and activate a Python virtual environment
  2. Install dependencies: pip install -e .
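
For example, on Linux or macOS:

    python -m venv .venv
    source .venv/bin/activate    # PowerShell: .venv\Scripts\Activate.ps1
    pip install -e .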

Testing Core Metrics Functions

  1. Run the unit tests for the metrics functions:

    python -m pytest tests/test_metrics_functions.py -v
    

    This will verify (example cases are sketched after this list):

    • Success rate calculation handles edge cases (0/0, None inputs)
    • Response time formatting is consistent with 3 decimal places
    • Empty states are handled correctly
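
Illustrative examples of these cases (the actual assertions in tests/test_metrics_functions.py may differ):

    from mcpgateway.utils.metrics_common import calculate_success_rate, format_response_time

    def test_success_rate_handles_zero_total():
        # 0/0 must not raise; the exact sentinel (None vs. 0) depends on the contract
        assert calculate_success_rate(0, 0) in (None, 0, 0.0)

    def test_format_response_time_keeps_trailing_zeros():
        assert format_response_time(1.5) == "1.500"

    def test_format_response_time_handles_none():
        assert format_response_time(None) == "N/A"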

Testing with Manual Data

  1. Start the MCP Gateway server with admin features enabled (PowerShell syntax shown; on Linux/macOS use export NAME=value instead):

    $env:MCPGATEWAY_ADMIN_API_ENABLED="true"
    $env:MCPGATEWAY_UI_ENABLED="true"
    python -m uvicorn mcpgateway.main:app --host 0.0.0.0 --port 8008 --reload
    
  2. Create test data using the provided script:

    python tests/setup_test_data.py
    

    This creates test tools with different metrics patterns (sketched conceptually below):

    • Tool 1: 80% success rate (8 successes, 2 failures)
    • Tool 2: 100% success rate (5 successes, 0 failures)
    • Tool 3: 0% success rate (0 successes, 3 failures)
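
    Conceptually, the script seeds data like this (record_metric below is a runnable stand-in for whatever call the script actually makes to record an invocation):

    PATTERNS = {
        "test-tool-1": (8, 2),   # 80% success rate
        "test-tool-2": (5, 0),   # 100% success rate
        "test-tool-3": (0, 3),   # 0% success rate
    }

    def record_metric(tool_name, success):
        # Stand-in: the real script records the invocation against the gateway
        print(f"{tool_name}: {'ok' if success else 'fail'}")

    for name, (successes, failures) in PATTERNS.items():
        for _ in range(successes):
            record_metric(name, success=True)
        for _ in range(failures):
            record_metric(name, success=False)
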
  3. Test the admin UI:

    • Access http://localhost:8008/admin (username: admin, password: changeme)
    • Navigate to the Metrics tab
    • Verify all test tools appear with correct success rates
    • Verify response times are shown with 3 decimal places
  4. Test CSV export:

    • Trigger a metrics CSV export from the admin interface
    • Verify the file contains a row for every tool, not just the top 5
    • Verify response times in the file use the same 3-decimal formatting as the UI
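
    From the command line, something along these lines should work (the export path and auth scheme are assumptions; check the admin API routes for the real ones):

    curl -u admin:changeme -o metrics.csv "http://localhost:8008/admin/metrics/export"
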
  5. Test empty state:
    • Using SQLite, delete all metrics for one tool
    • Verify the UI and export handle empty state by showing "N/A"
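
    For example, with the default SQLite setup (the database file and table names here are assumptions; adjust to your deployment):

    sqlite3 mcp.db "DELETE FROM tool_metrics WHERE tool_id = 1;"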

Verification Criteria

The issue is considered resolved when:

  1. ✅ Success rate calculation handles all edge cases
  2. ✅ Response times are consistently formatted with 3 decimal places
  3. ✅ CSV exports include ALL rows, not just top 5
  4. ✅ Empty states are handled gracefully with "N/A"

Additional Notes

  • The metrics calculations happen in the metrics_common.py utility file
  • The admin API handles converting None values to "N/A" for display
  • The CSV export uses the same formatting logic as the UI display

Test Files Location

  • Core tests: tests/test_metrics_functions.py
  • Test data generation: tests/setup_test_data.py

Troubleshooting

If the server fails to start with database errors:

  1. Check for unique constraint errors in the database
  2. Update the test data generation script to use unique tool names
  3. Try clearing the existing database or using a new one
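
For example, assuming the default SQLite file (the name is an assumption) and that the gateway recreates its schema on startup:

    rm mcp.db
    python -m uvicorn mcpgateway.main:app --host 0.0.0.0 --port 8008 --reload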
