johnayoung/llm-orchestrator

🚀 LLM Orchestrator

A powerful Go CLI application that sends prompts to multiple Large Language Models (LLMs) in parallel and uses Claude 3.5 Sonnet as a Master Agent to analyze and synthesize the responses.

✨ Features

  • 🔄 Parallel Execution: Simultaneous queries to 4 leading LLM providers
  • 🤖 Master Agent Analysis: Claude 3.5 Sonnet analyzes all responses with intelligent ranking and synthesis
  • 🎯 Multiple Providers: Support for Anthropic Claude, OpenAI GPT-4, Google Gemini, and Groq Llama
  • 📊 Rich Output Formats: Beautiful text with emojis or structured JSON
  • 💾 File Operations: Read prompts from files, save responses individually
  • 🛡️ Robust Error Handling: Comprehensive error messages with helpful guidance
  • 📈 Progress Tracking: Real-time progress indicators and verbose logging
  • 🔧 Professional Build System: Cross-platform compilation with Makefile

🎯 Supported LLM Providers

Provider    Model               Purpose
---------   -----------------   --------------------------------
Anthropic   Claude 3.5 Sonnet   Master Agent + Regular Response
OpenAI      GPT-4               High-quality reasoning
Google      Gemini Pro          Google's flagship model
Groq        Llama 3.1 70B       Fast inference with Meta's model

🚀 Quick Start

Prerequisites

# Ensure you have Go 1.21+ installed
go version

Installation

# Clone and build
git clone <repository-url>
cd llm-orchestrator
make build

# Or build for all platforms
make build-all

# Or install to system PATH
make install

Environment Setup

Create a .env file or export environment variables:

export ANTHROPIC_API_KEY="your-claude-api-key"
export OPENAI_API_KEY="your-openai-api-key" 
export GOOGLE_API_KEY="your-gemini-api-key"
export GROQ_API_KEY="your-groq-api-key"

Note: You need at least 2 API keys for the orchestrator to work.

💡 Usage

Basic Usage

# Simple prompt
./bin/llm-orchestrator -prompt "What are the key principles of effective nutrition?"

# From file
echo "Explain the Mediterranean diet" > prompt.txt
./bin/llm-orchestrator -file prompt.txt

# JSON output
./bin/llm-orchestrator -prompt "Compare different protein sources" -format json

# Save individual responses
./bin/llm-orchestrator -prompt "Nutrition for athletes" -save-responses

Advanced Usage

# Verbose logging (see provider interactions)
./bin/llm-orchestrator -prompt "Your question" -verbose

# Debug mode (detailed error traces)  
./bin/llm-orchestrator -prompt "Your question" -debug

# Save to file
./bin/llm-orchestrator -prompt "Your question" -output results.txt

Development Commands

make run PROMPT="Your question here"   # Build and run
make debug PROMPT="Your question"      # Debug mode
make clean                             # Clean build artifacts
make help                              # Show all targets

📊 Example Output

Text Format (Default)

🤖 LLM Orchestrator Results
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🎯 Master Agent Analysis (Claude 3.5 Sonnet):
✨ The responses show strong consensus on core nutrition principles...

📊 Provider Rankings:
🥇 OpenAI: Comprehensive and well-structured
🥈 Claude: Evidence-based with practical tips  
🥉 Gemini: Good balance of theory and application
🔴 Groq: Unavailable (API error)

🎨 Key Themes:
• Balanced macronutrient intake
• Whole foods emphasis
• Individual customization needs

JSON Format

{
  "master_analysis": {
    "provider": "Claude",
    "analysis": "Comprehensive analysis...",
    "rankings": [...],
    "themes": [...]
  },
  "responses": {
    "OpenAI": { "success": true, "content": "..." },
    "Claude": { "success": true, "content": "..." }
  },
  "execution_summary": {
    "successful_providers": 2,
    "total_duration_ms": 1247
  }
}

🛠️ Architecture

cmd/main.go                 # CLI entry point with error handling
internal/
├── config/                 # Environment configuration
├── types/                  # Shared data structures  
├── providers/              # LLM provider implementations
├── orchestrator/           # Parallel execution engine
└── output/                 # Response formatting system

Key Components

  • 🎛️ Config Management: Environment-based API key validation
  • ⚡ Provider Interface: Unified API for all LLM providers
  • 🚀 Orchestrator: Parallel execution with goroutines and channels
  • 🎨 Output Formatter: Professional text and JSON formatting
  • 🛡️ Error Handling: Comprehensive error messages with guidance
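The goroutine-and-channel fan-out described above can be sketched as follows. This is a minimal illustration of the pattern, not the repository's actual implementation: the `Provider` interface, the `orchestrate` function, and the fake provider are all assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// Provider is a hypothetical unified interface of the kind described above.
type Provider interface {
	Name() string
	Query(prompt string) (string, error)
}

// Response carries one provider's result (or error) back to the collector.
type Response struct {
	Provider string
	Content  string
	Err      error
}

// orchestrate queries every provider in its own goroutine and collects
// the results over a buffered channel.
func orchestrate(prompt string, providers []Provider) []Response {
	results := make(chan Response, len(providers))
	var wg sync.WaitGroup
	for _, p := range providers {
		wg.Add(1)
		go func(p Provider) {
			defer wg.Done()
			content, err := p.Query(prompt)
			results <- Response{Provider: p.Name(), Content: content, Err: err}
		}(p)
	}
	wg.Wait()
	close(results)
	var out []Response
	for r := range results {
		out = append(out, r)
	}
	return out
}

// fakeProvider lets the sketch run without real API keys.
type fakeProvider struct{ name string }

func (f fakeProvider) Name() string { return f.name }
func (f fakeProvider) Query(prompt string) (string, error) {
	return "echo: " + prompt, nil
}

func main() {
	providers := []Provider{fakeProvider{"OpenAI"}, fakeProvider{"Claude"}}
	for _, r := range orchestrate("hello", providers) {
		fmt.Printf("%s -> %s\n", r.Provider, r.Content)
	}
}
```

Buffering the channel to `len(providers)` lets every goroutine send without blocking, so a slow provider never stalls the others.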

🔧 Build System

The project includes a comprehensive Makefile with targets for:

Target      Description
---------   ----------------------------------------
build       Build for current platform
build-all   Cross-compile for Linux, macOS, Windows
run         Build and run with prompt
debug       Run in debug mode
clean       Remove build artifacts
install     Install to system PATH
test        Run unit tests
dev-setup   Install development dependencies

🚨 Error Handling

The application provides comprehensive error handling:

  • ❌ Missing API Keys: Clear guidance on required environment variables
  • 🔑 Invalid API Keys: Specific error messages per provider
  • 📁 File Errors: Helpful hints for file operations
  • 🌐 Network Issues: Timeout handling and retry suggestions
  • ⚠️ Insufficient Responses: Minimum 2 providers required

🧪 Testing

# Run all tests
make test

# Test error handling scenarios
./bin/llm-orchestrator -prompt "test" -debug

# Test with verbose logging
./bin/llm-orchestrator -prompt "test" -verbose

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Anthropic for Claude 3.5 Sonnet (Master Agent)
  • OpenAI for GPT-4
  • Google for Gemini Pro
  • Groq for fast Llama inference
  • Go Community for excellent tooling and libraries

Built with ❤️ and Go
