This repository contains demos and use cases for Maestro, a tool for managing and running AI agents and workflows. It was originally part of the demos directory in the main Maestro project and has been extracted into a standalone repository for easier access and contribution.
This repository contains:
- Various workflow demonstrations showcasing Maestro's capabilities
- Agent configurations and examples
- Complete working examples for different use cases
- Documentation and setup guides for each demo
To use these demos, you'll need to install Maestro first:
```bash
pip install git+https://github.com/AI4quantum/[email protected]
```

Note: If using scoring or crewai agents, install:

```bash
pip install "maestro[crewai] @ git+https://github.com/AI4quantum/[email protected]"
```

- Clone this repository:

```bash
git clone https://github.com/your-username/maestro-demos.git
cd maestro-demos
```

- Navigate to any demo directory and follow its specific README for setup instructions.
- Run a workflow:

```bash
maestro run <workflow_path>
```

- Create an agent:

```bash
maestro create <agent_path>
```

- Validate a workflow or agent:
```bash
maestro validate <path>
```

Browse the workflows/ and agents/ directories to explore various examples:
- Workflows: Complete workflow demonstrations
- Agents: Individual agent examples and configurations
- Use Cases: Real-world application examples
Each demo includes its own README with specific setup and usage instructions.
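For example, a typical session validates a demo before running it (the path below is illustrative; substitute any workflow from the workflows/ directory):

```bash
# Illustrative path only: pick a real demo workflow from this repository.
maestro validate workflows/<demo_name>/workflow.yaml
maestro run workflows/<demo_name>/workflow.yaml
```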
For a detailed guide on using Podman Desktop, docker-compose, and bee-stack, see:
- Clone the repository:

```bash
git clone https://github.com/your-username/maestro-demos.git
cd maestro-demos
```

- Install development dependencies:

```bash
uv sync --all-extras
```

- Run tests:

```bash
uv run pytest
```

- Run the formatter:

```bash
uv run ruff format
```

- Run the linter:

```bash
uv run ruff check --fix
```

By default, the .env file and the API run on Llama 3.1. Download Ollama from https://ollama.com/ and pull the llama3.1 model: https://ollama.com/library/llama3.1.
To use a different model, run `ollama pull` and choose from the official models. If you use a different model, make sure it is defined correctly in the agents.yaml file, as sketched below.
For MCP tools, some models support tool calling while others do not. Tested models with tool support include llama3.1:8b, llama3.3-70b-instruct, and qwen3:8b.
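A minimal sketch of the relevant fragment, assuming the agent schema used by the demos in this repository (the field layout here is illustrative; check an existing agents.yaml for the exact structure):

```yaml
# Illustrative agents.yaml fragment: spec.model must name a model that
# has been pulled locally, e.g. via `ollama pull qwen3:8b`.
spec:
  model: qwen3:8b   # one of the tested tool-capable models
```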
The .env file should look like this:
```
OPENAI_API_BASE=http://localhost:11434/v1
OPENAI_API_KEY=ollama
```

Please set SLACK_BOT_TOKEN and SLACK_TEAM_ID as environment variables; see ./tests/yamls/agents/slack_agent.yaml and ./tests/yamls/workflow_agent.yaml for details. The output of the Slack message will be whatever is passed into the prompt.
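For example, these can be exported in your shell before running the workflow (the values below are placeholders, not real credentials):

```bash
# Placeholder values: substitute your own Slack bot token and team ID.
export SLACK_BOT_TOKEN=xoxb-your-bot-token
export SLACK_TEAM_ID=T0123456789
```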
The Metrics Agent integrates Opik's LLM-as-a-judge metrics into our workflows. Set spec.model in the agent definition and add the agent to a workflow to automatically evaluate outputs using AnswerRelevance and Hallucination scores. You can also provide additional context if you know the correct response, or the format a response should take.
See ./tests/yamls/agents/metrics_agent.py and ./tests/yamls/workflows/metrics_agents.py for more details.
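A hypothetical fragment, assuming the same agent schema as the other demos (the real definitions live in the files referenced above):

```yaml
# Hypothetical metrics-agent fragment; see the referenced test files for
# the actual definitions. spec.model names the judge model that scores
# outputs for AnswerRelevance and Hallucination.
spec:
  model: llama3.1
```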
If using the metrics agent, you need to add an Opik evaluation key, which can be obtained from your Comet account. You can then use the Opik dashboard to track metrics generated inside a workflow.
```
COMET_API_KEY=placeholder
```

Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.
This project is licensed under the Apache License - see the LICENSE file for details.