🧭 Epic
Title: Dynamic Virtual Server API (/dynamic)
Goal: Allow users to define and retrieve virtual MCP servers dynamically, using rules and optionally LLMs, instead of fixed entity IDs.
Why now: MCP Gateway supports static servers composed of explicitly associated tools, resources, and prompts. But real-world use cases often demand flexible, dynamic groupings based on tags, names, metadata, or higher-order logic (embedding search, graphs, or LLMs). A new /dynamic API unlocks rule-driven and AI-driven server construction.
🧭 Type of Feature
- Enhancement to existing functionality
- New feature or capability
- Security Related (requires review)
🙋‍♂️ User Story 1
As a: gateway operator
I want: to define a virtual server that includes tools tagged "analytics"
So that: new analytics tools are automatically included
✅ Acceptance Criteria
Scenario: Rule matches tools by metadata
Given a dynamic rule: type = "tool", filter = "$.tags[?(@ == 'analytics')]"
When a tool is registered with tag = "analytics"
Then it appears under GET /dynamic/<server_id>/tools
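The tag rule above can be illustrated without a JSONPath engine; this pure-Python matcher is only a sketch of the same membership check that `$.tags[?(@ == 'analytics')]` expresses (function and tool names are illustrative assumptions, not the gateway's API):

```python
def tool_matches_tag(tool: dict, tag: str) -> bool:
    """Return True if the tool carries the given tag.

    Mirrors the JSONPath filter $.tags[?(@ == '<tag>')] without a
    JSONPath dependency: it simply checks membership in the tags list.
    """
    return tag in tool.get("tags", [])

tools = [
    {"name": "churn_report", "tags": ["analytics"]},
    {"name": "pdf_export", "tags": ["documents"]},
]

matched = [t["name"] for t in tools if tool_matches_tag(t, "analytics")]
print(matched)  # ['churn_report']
```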
🙋‍♂️ User Story 2
As a: admin
I want: to group all prompts whose names start with legal_ and that are not marked deprecated
So that: my dynamic server reflects compliance-related prompts only
✅ Acceptance Criteria
Scenario: Prompt filter with regex and status
Given a rule: type = "prompt", filter = "name =~ '^legal_' && !deprecated"
When matching prompts exist
Then they are included in GET /dynamic/<server_id>/prompts
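The regex-plus-status filter `name =~ '^legal_' && !deprecated` could behave like this hand-rolled sketch (the real gateway would parse the expression itself; function and field names here are assumptions):

```python
import re

def prompt_matches(prompt: dict) -> bool:
    """Hand-rolled equivalent of: name =~ '^legal_' && !deprecated."""
    return bool(re.match(r"^legal_", prompt["name"])) and not prompt.get("deprecated", False)

prompts = [
    {"name": "legal_nda_review", "deprecated": False},
    {"name": "legal_old_policy", "deprecated": True},
    {"name": "sales_pitch", "deprecated": False},
]
print([p["name"] for p in prompts if prompt_matches(p)])  # ['legal_nda_review']
```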
🙋‍♂️ User Story 3
As a: power user
I want: to preview which tools a dynamic server would expose
So that: I can validate rule correctness before activation
✅ Acceptance Criteria
Scenario: Preview mode
Given I POST to /dynamic/preview with a list of rules
Then I get a simulated catalog response with matching tools/prompts/resources
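A hypothetical request/response pair for the preview flow (payload field names are assumptions, not a confirmed contract). Request body for POST /dynamic/preview:

```json
{
  "rules": [
    {"type": "tool", "mode": "rule", "filter": "$.tags[?(@ == 'analytics')]"}
  ]
}
```

Simulated catalog response:

```json
{
  "tools": [{"name": "churn_report", "tags": ["analytics"]}],
  "prompts": [],
  "resources": []
}
```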
🙋‍♂️ User Story 4
As a: LLM builder
I want: to defer rule resolution to an LLM function
So that: server content can reflect dynamic business logic
✅ Acceptance Criteria
Scenario: LLM-powered filter
Given a dynamic rule: type = "tool", mode = "llm", prompt = "Select tools related to finance or risk"
When the LLM returns a set of names
Then those tools appear in the dynamic server's catalog
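LLM mode might resolve as in this sketch, with a stub standing in for the model call (all names are illustrative assumptions, not the gateway's implementation):

```python
from typing import Callable

def resolve_llm_rule(
    prompt: str,
    tools: list[dict],
    llm: Callable[[str], list[str]],
) -> list[dict]:
    """Sketch of mode = "llm" resolution: the LLM returns tool *names*,
    and only catalog entries with those names are kept."""
    selected = set(llm(prompt))
    return [t for t in tools if t["name"] in selected]

# A stub standing in for a real LLM call:
def fake_llm(prompt: str) -> list[str]:
    return ["var_calculator", "credit_risk_score"]

tools = [
    {"name": "var_calculator"},
    {"name": "credit_risk_score"},
    {"name": "image_resizer"},
]
picked = resolve_llm_rule("Select tools related to finance or risk", tools, fake_llm)
print([t["name"] for t in picked])  # ['var_calculator', 'credit_risk_score']
```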
🙋‍♂️ User Story 5
As a: API consumer
I want: to fetch /dynamic/<id>/tools, /dynamic/<id>/resources, or /dynamic/<id>/prompts
So that: I can treat a dynamic server like any other catalog surface
🙋‍♂️ User Story 6: Taggable tools (Dependency – Required for Dynamic Servers)
As a: gateway admin
I want: to assign and query tags on tools (e.g. ["finance", "internal"])
So that: dynamic server rules can filter based on tag values
✅ Acceptance Criteria
Scenario: Tagging a tool
Given I PATCH /tools/{id} with {"tags": ["finance", "internal"]}
Then the tool includes a `tags` field
And GET /tools supports ?tag=... filter
Scenario: Filtering in dynamic server rule
Given a rule: type = "tool", filter = "$.tags[?(@ == 'finance')]"
Then only tools with tag "finance" are matched
📐 Design Sketch
📁 New API group: /dynamic

| Endpoint | Description |
|---|---|
| GET /dynamic | List defined dynamic servers |
| POST /dynamic | Create a new dynamic server |
| GET /dynamic/{id} | Inspect rule definitions |
| POST /dynamic/{id}/preview | Dry-run the rule logic |
| GET /dynamic/{id}/tools | Live-evaluated tool catalog |
| GET /dynamic/{id}/prompts | Live-evaluated prompt catalog |
| GET /dynamic/{id}/resources | Live-evaluated resource catalog |
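The CRUD surface above can be sketched as an in-memory store; the issue proposes FastAPI for the real endpoints, so all function and variable names here are illustrative assumptions:

```python
import uuid

# In-memory stand-in for the proposed dynamic_servers table.
_servers: dict[str, dict] = {}

def create_dynamic_server(payload: dict) -> str:
    """POST /dynamic — store the rule definition, return its id."""
    server_id = str(uuid.uuid4())
    _servers[server_id] = payload
    return server_id

def list_dynamic_servers() -> list[str]:
    """GET /dynamic — list defined dynamic server ids."""
    return list(_servers)

def get_dynamic_server(server_id: str) -> dict:
    """GET /dynamic/{id} — inspect the stored rule definitions."""
    return _servers[server_id]

sid = create_dynamic_server({"name": "RiskOps Dynamic Server", "rules": []})
print(get_dynamic_server(sid)["name"])  # RiskOps Dynamic Server
```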
🧩 Rule Definition Schema
{
"rules": [
{
"type": "tool" | "resource" | "prompt",
"mode": "rule" | "llm",
"filter": "jsonpath or jq or python-expression", // if mode = "rule"
"llm_prompt": "string", // if mode = "llm"
"llm_provider": "openai" | "anthropic" | "local", // optional
"include_federated": true // optional
}
],
"name": "RiskOps Dynamic Server",
"description": "Everything related to legal, risk, compliance"
}
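The schema could map onto the proposed DynamicRule model roughly as follows. This sketch uses stdlib dataclasses as a stand-in for the Pydantic models the Tasks section calls for, and the validation rules are assumptions:

```python
from dataclasses import dataclass, field
from typing import Literal, Optional

@dataclass
class DynamicRule:
    """Dataclass stand-in for the proposed DynamicRule Pydantic model."""
    type: Literal["tool", "resource", "prompt"]
    mode: Literal["rule", "llm"] = "rule"
    filter: Optional[str] = None        # required when mode == "rule"
    llm_prompt: Optional[str] = None    # required when mode == "llm"
    llm_provider: Optional[str] = None  # e.g. "openai", "anthropic", "local"
    include_federated: bool = False

    def __post_init__(self) -> None:
        if self.mode == "rule" and not self.filter:
            raise ValueError("mode='rule' requires a filter expression")
        if self.mode == "llm" and not self.llm_prompt:
            raise ValueError("mode='llm' requires llm_prompt")

@dataclass
class DynamicServerCreate:
    """Stand-in for the proposed DynamicServerCreate schema."""
    name: str
    rules: list[DynamicRule] = field(default_factory=list)
    description: str = ""

rule = DynamicRule(type="tool", filter="$.tags[?(@ == 'finance')]")
print(rule.mode)  # rule
```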
🔄 Evaluation Modes
- `mode: "rule"`: uses JMESPath/JSONPath or simplified JQ expressions
- `mode: "llm"`: invokes a system-wide LLM to resolve selection logic
- `include_federated`: allows federated tools/prompts/resources to be matched
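A minimal evaluator dispatching on mode might look like the following sketch. The simplified `tag:` filter syntax and all names are illustrative assumptions; a real evaluator would hand the filter to a sandboxed JSONPath/JQ parser as described above:

```python
def evaluate_rule(rule: dict, catalog: list[dict], llm=None) -> list[dict]:
    """Dispatch on the rule's mode. Sketch only: a real evaluator would
    parse rule["filter"] with jsonpath_ng/jq instead of the simplified
    "tag:<name>" syntax handled here."""
    items = [i for i in catalog if i["type"] == rule["type"]]
    if not rule.get("include_federated", False):
        items = [i for i in items if not i.get("federated", False)]
    if rule["mode"] == "llm":
        selected = set(llm(rule["llm_prompt"]))
        return [i for i in items if i["name"] in selected]
    if rule["filter"].startswith("tag:"):
        tag = rule["filter"].removeprefix("tag:")
        return [i for i in items if tag in i.get("tags", [])]
    raise ValueError("unsupported filter expression in this sketch")

catalog = [
    {"type": "tool", "name": "ledger", "tags": ["finance"]},
    {"type": "tool", "name": "resize", "tags": ["images"]},
    {"type": "tool", "name": "fx_rates", "tags": ["finance"], "federated": True},
]
rule = {"type": "tool", "mode": "rule", "filter": "tag:finance"}
print([i["name"] for i in evaluate_rule(rule, catalog)])  # ['ledger']
```

Note how `include_federated` defaults to excluding federated entries, matching the optional flag in the schema.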
📦 Runtime
- Catalogs are evaluated at request time (lazy), or pre-cached with a TTL if configured
- LLM results can be cached per `llm_prompt` to avoid repeated calls
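The per-`llm_prompt` caching idea can be sketched with a small TTL cache; this is illustrative only, not the gateway's actual caching layer:

```python
import time
from typing import Callable

class TTLCache:
    """Minimal per-prompt cache with a time-to-live."""
    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, list[str]]] = {}

    def get_or_compute(self, prompt: str, compute: Callable[[str], list[str]]) -> list[str]:
        now = time.monotonic()
        hit = self._store.get(prompt)
        if hit and now - hit[0] < self.ttl:
            return hit[1]  # fresh entry: skip the LLM call
        value = compute(prompt)
        self._store[prompt] = (now, value)
        return value

calls = []
def fake_llm(prompt: str) -> list[str]:
    calls.append(prompt)
    return ["var_calculator"]

cache = TTLCache(ttl_seconds=60)
cache.get_or_compute("finance tools", fake_llm)
cache.get_or_compute("finance tools", fake_llm)  # served from cache
print(len(calls))  # 1
```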
🔄 Alternatives Considered
- Requiring manual updates to static `server.associated_*` lists → burdensome and error-prone
- Storing dynamic results as static data → risks stale catalogs
- External logic layer → less discoverable and harder to reason about
📓 Additional Context
- LLM integration should reuse existing `ClientSession` auth and timeouts
- Dynamic server rules should be stored in the DB (new `dynamic_servers` table)
- Dry-run support (preview) is essential for safe usage
- Must sandbox any `filter` expressions (e.g. via `jq` or `jsonpath_ng`)
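Sandboxing could start with a conservative pre-parse guard before the expression ever reaches a JSONPath/jq engine. This regex allow-list is only a sketch, deliberately strict and not a complete defense; production code would rely on the parser's own grammar:

```python
import re

# Allow only characters plausibly needed by JSONPath-style filters.
_SAFE_FILTER = re.compile(r"^[\w\s\$\.\[\]\(\)@=!<>'&|?~^*-]+$")

def validate_filter(expression: str, max_len: int = 256) -> str:
    """Reject oversized or suspicious filter expressions up front."""
    if len(expression) > max_len:
        raise ValueError("filter expression too long")
    if not _SAFE_FILTER.match(expression):
        raise ValueError("filter contains disallowed characters")
    return expression

print(validate_filter("$.tags[?(@ == 'finance')]"))
```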
🧭 Tasks

| Area | Task | Notes |
|---|---|---|
| Schema | [ ] Add DynamicRule Pydantic model | Support rule and LLM modes |
| | [ ] Add DynamicServerCreate, DynamicServerRead, etc. | |
| API | [ ] Add CRUD endpoints under /dynamic | Use FastAPI |
| | [ ] Add preview and catalog expansion endpoints | |
| Logic | [ ] Implement rule evaluator using jsonpath_ng, fallback jq | Configurable |
| | [ ] Add LLM integration via tool call or streaming | |
| Security | [ ] Validate expression parsing, reject unsafe evals | |
| UI | [ ] Extend Admin UI to support rule-based dynamic servers | Preview + builder interface |
| Tests | [ ] Unit: rule evaluation, LLM mock evaluation | |
| | [ ] Integration: dynamic server behaves like static catalog | |
| Docs | [ ] Add dynamic-servers.md with examples | |
| Metrics | [ ] Emit Prometheus events for evals, LLM hits | |
🔗 MCP Standards Check
- Does not affect wire-level MCP spec
- No backward-incompatibility for existing APIs
- Optional—defaults to static-only mode if unused