A powerful Go CLI application that sends prompts to multiple Large Language Models (LLMs) in parallel and uses Claude 3.5 Sonnet as a Master Agent to analyze and synthesize the responses.
- 🔄 Parallel Execution: Simultaneous queries to 4 leading LLM providers
- 🤖 Master Agent Analysis: Claude 3.5 Sonnet analyzes all responses with intelligent ranking and synthesis
- 🎯 Multiple Providers: Support for Anthropic Claude, OpenAI GPT-4, Google Gemini, and Groq Llama
- 📊 Rich Output Formats: Beautiful text with emojis or structured JSON
- 💾 File Operations: Read prompts from files, save responses individually
- 🛡️ Robust Error Handling: Comprehensive error messages with helpful guidance
- 📈 Progress Tracking: Real-time progress indicators and verbose logging
- 🔧 Professional Build System: Cross-platform compilation with Makefile
Provider | Model | Purpose |
---|---|---|
Anthropic | Claude 3.5 Sonnet | Master Agent + Regular Response |
OpenAI | GPT-4 | High-quality reasoning |
Google | Gemini Pro | Google's flagship model |
Groq | Llama 3.1 70B | Fast inference with Meta's model |
# Ensure you have Go 1.21+ installed
go version
# Clone and build
git clone <repository-url>
cd go-nutrition-advice
make build
# Or build for all platforms
make build-all
# Or install to system PATH
make install
Create a `.env` file or export environment variables:
export ANTHROPIC_API_KEY="your-claude-api-key"
export OPENAI_API_KEY="your-openai-api-key"
export GOOGLE_API_KEY="your-gemini-api-key"
export GROQ_API_KEY="your-groq-api-key"
Note: You need at least 2 API keys for the orchestrator to work.
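A minimal sketch of how environment-based key loading and the "at least 2 keys" check might look. The `Config` struct and `LoadConfig` function names are illustrative, not the project's actual `internal/config` API; only the environment variable names come from the section above.

```go
package config

import (
	"errors"
	"os"
)

// Config holds the provider API keys read from the environment.
// Names here are illustrative only.
type Config struct {
	AnthropicKey string
	OpenAIKey    string
	GoogleKey    string
	GroqKey      string
}

// LoadConfig reads the keys and enforces the minimum of 2 providers
// mentioned in the note above.
func LoadConfig() (*Config, error) {
	cfg := &Config{
		AnthropicKey: os.Getenv("ANTHROPIC_API_KEY"),
		OpenAIKey:    os.Getenv("OPENAI_API_KEY"),
		GoogleKey:    os.Getenv("GOOGLE_API_KEY"),
		GroqKey:      os.Getenv("GROQ_API_KEY"),
	}

	count := 0
	for _, k := range []string{cfg.AnthropicKey, cfg.OpenAIKey, cfg.GoogleKey, cfg.GroqKey} {
		if k != "" {
			count++
		}
	}
	if count < 2 {
		return nil, errors.New("at least 2 API keys are required: set two of ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY, GROQ_API_KEY")
	}
	return cfg, nil
}
```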
# Simple prompt
./bin/llm-orchestrator -prompt "What are the key principles of effective nutrition?"
# From file
echo "Explain the Mediterranean diet" > prompt.txt
./bin/llm-orchestrator -file prompt.txt
# JSON output
./bin/llm-orchestrator -prompt "Compare different protein sources" -format json
# Save individual responses
./bin/llm-orchestrator -prompt "Nutrition for athletes" -save-responses
# Verbose logging (see provider interactions)
./bin/llm-orchestrator -prompt "Your question" -verbose
# Debug mode (detailed error traces)
./bin/llm-orchestrator -prompt "Your question" -debug
# Save to file
./bin/llm-orchestrator -prompt "Your question" -output results.txt
make run PROMPT="Your question here" # Build and run
make debug PROMPT="Your question" # Debug mode
make clean # Clean build artifacts
make help # Show all targets
🤖 LLM Orchestrator Results
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 Master Agent Analysis (Claude 3.5 Sonnet):
✨ The responses show strong consensus on core nutrition principles...
📊 Provider Rankings:
🥇 OpenAI: Comprehensive and well-structured
🥈 Claude: Evidence-based with practical tips
🥉 Gemini: Good balance of theory and application
🔴 Groq: Unavailable (API error)
🎨 Key Themes:
• Balanced macronutrient intake
• Whole foods emphasis
• Individual customization needs
{
"master_analysis": {
"provider": "Claude",
"analysis": "Comprehensive analysis...",
"rankings": [...],
"themes": [...]
},
"responses": {
"OpenAI": { "success": true, "content": "..." },
"Claude": { "success": true, "content": "..." }
},
"execution_summary": {
"successful_providers": 2,
"total_duration_ms": 1247
}
}
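For programmatic consumers, the JSON above could be unmarshalled into Go structs along these lines. The field names are inferred from the example output, not copied from the project's `internal/types` package, and the element shape of `rankings` and `themes` is not shown in the example, so they are left as raw JSON here.

```go
package types

import "encoding/json"

// Result mirrors the JSON output format shown above (inferred, illustrative).
type Result struct {
	MasterAnalysis   MasterAnalysis              `json:"master_analysis"`
	Responses        map[string]ProviderResponse `json:"responses"`
	ExecutionSummary ExecutionSummary            `json:"execution_summary"`
}

type MasterAnalysis struct {
	Provider string            `json:"provider"`
	Analysis string            `json:"analysis"`
	Rankings []json.RawMessage `json:"rankings"` // element shape not shown in the example
	Themes   []json.RawMessage `json:"themes"`   // element shape not shown in the example
}

type ProviderResponse struct {
	Success bool   `json:"success"`
	Content string `json:"content"`
}

type ExecutionSummary struct {
	SuccessfulProviders int   `json:"successful_providers"`
	TotalDurationMS     int64 `json:"total_duration_ms"`
}
```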
cmd/main.go # CLI entry point with error handling
internal/
├── config/ # Environment configuration
├── types/ # Shared data structures
├── providers/ # LLM provider implementations
├── orchestrator/ # Parallel execution engine
└── output/ # Response formatting system
- 🎛️ Config Management: Environment-based API key validation
- ⚡ Provider Interface: Unified API for all LLM providers
- 🚀 Orchestrator: Parallel execution with goroutines and channels (see the sketch after this list)
- 🎨 Output Formatter: Professional text and JSON formatting
- 🛡️ Error Handling: Comprehensive error messages with guidance
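The ⚡ Provider Interface and 🚀 Orchestrator components described above might look roughly like this: a unified interface per provider, fanned out concurrently with goroutines and a channel. This is a simplified sketch; the real `providers` and `orchestrator` packages will differ in detail.

```go
package orchestrator

import (
	"context"
	"sync"
)

// Provider is the unified interface each LLM backend implements (sketch).
type Provider interface {
	Name() string
	Query(ctx context.Context, prompt string) (string, error)
}

// Response pairs a provider's name with its result or error.
type Response struct {
	Provider string
	Content  string
	Err      error
}

// FanOut queries every provider concurrently and collects the results.
func FanOut(ctx context.Context, providers []Provider, prompt string) []Response {
	results := make(chan Response, len(providers))
	var wg sync.WaitGroup

	for _, p := range providers {
		wg.Add(1)
		go func(p Provider) {
			defer wg.Done()
			content, err := p.Query(ctx, prompt)
			results <- Response{Provider: p.Name(), Content: content, Err: err}
		}(p)
	}

	wg.Wait()
	close(results)

	var all []Response
	for r := range results {
		all = append(all, r)
	}
	return all
}
```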
The project includes a comprehensive Makefile with the following targets:
Target | Description |
---|---|
`build` | Build for current platform |
`build-all` | Cross-compile for Linux, macOS, Windows |
`run` | Build and run with prompt |
`debug` | Run in debug mode |
`clean` | Remove build artifacts |
`install` | Install to system PATH |
`test` | Run unit tests |
`dev-setup` | Install development dependencies |
The application provides comprehensive error handling:
- ❌ Missing API Keys: Clear guidance on required environment variables
- 🔑 Invalid API Keys: Specific error messages per provider
- 📁 File Errors: Helpful hints for file operations
- 🌐 Network Issues: Timeout handling and retry suggestions
- ⚠️ Insufficient Responses: Minimum 2 providers required
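The "insufficient responses" guard could be implemented along these lines, continuing the illustrative `orchestrator` sketch above (it reuses the sketched `Response` type; the error text is illustrative, not the tool's actual output):

```go
package orchestrator

import (
	"errors"
	"fmt"
)

// ErrInsufficientResponses signals that fewer than two providers answered.
var ErrInsufficientResponses = errors.New("at least 2 successful provider responses are required")

// checkResponses counts successes and returns a helpful error otherwise.
func checkResponses(responses []Response) error {
	ok := 0
	for _, r := range responses {
		if r.Err == nil {
			ok++
		}
	}
	if ok < 2 {
		return fmt.Errorf("%w: got %d successful response(s); check your API keys and network connectivity",
			ErrInsufficientResponses, ok)
	}
	return nil
}
```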
# Run all tests
make test
# Test error handling scenarios
./bin/llm-orchestrator -prompt "test" -debug
# Test with verbose logging
./bin/llm-orchestrator -prompt "test" -verbose
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Anthropic for Claude 3.5 Sonnet (Master Agent)
- OpenAI for GPT-4
- Google for Gemini Pro
- Groq for fast Llama inference
- Go Community for excellent tooling and libraries
Built with ❤️ and Go