Automatically summarize and categorize conversations from email and Slack using OpenAI's GPT-4. This FastAPI service processes message batches and provides intelligent insights by extracting decisions, issues, and reminders.
- Intelligent Message Summarization - Powered by OpenAI GPT-4
- Automatic Categorization - Extracts decisions, issues, and reminders
- PostgreSQL Database Storage - Integrated with Supabase for persistence
- LangSmith Integration - Comprehensive LLM monitoring and tracing
- Rate Limiting & Validation - Built-in protection against abuse
- Comprehensive Logging - Request tracking and error monitoring
- Connection Pooling - Reliable database connections
- Production Ready - Error handling, validation, and monitoring
This API follows enterprise-grade architectural patterns used by major tech companies like Netflix, Uber, and Stripe:
```
┌──────────────────┐
│    Routes/API    │  ←  FastAPI endpoints (app/routers/)
├──────────────────┤
│     Services     │  ←  Business logic (app/services/)
├──────────────────┤
│      Models      │  ←  Data validation (app/models/)
├──────────────────┤
│    Middleware    │  ←  Cross-cutting concerns (app/middleware/)
└──────────────────┘
```
- Dependency Injection - Clean service management and testing (see the sketch after this list)
- Service Layer Pattern - Business logic abstraction
- Repository Pattern - Database operations separation
- Middleware Architecture - Security, logging, and CORS
- Configuration Management - Environment-based settings
- Error Handling - Custom exceptions with structured responses
- API Versioning - /api/v1 prefix for future compatibility
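The dependency-injection and service-layer patterns translate into FastAPI roughly as follows. This is a minimal sketch; the names used here (`SummaryService`, `get_summary_service`, the `/summarize` route) are illustrative assumptions, not the project's actual code.

```python
from typing import List

from fastapi import APIRouter, Depends
from pydantic import BaseModel


class SummaryResult(BaseModel):
    summary: str
    decisions: List[str]
    issues: List[str]
    reminders: List[str]


class SummaryService:
    """Business logic lives in the service layer, not in the route handler."""

    async def summarize(self, texts: List[str]) -> SummaryResult:
        # A real implementation would call the LLM chain and the repository layer.
        return SummaryResult(summary=" ".join(texts)[:200], decisions=[], issues=[], reminders=[])


def get_summary_service() -> SummaryService:
    # FastAPI's dependency injection builds the service per request.
    return SummaryService()


router = APIRouter(prefix="/api/v1")


@router.post("/summarize", response_model=SummaryResult)
async def summarize(texts: List[str], service: SummaryService = Depends(get_summary_service)) -> SummaryResult:
    return await service.summarize(texts)
```

Because the route only depends on the `get_summary_service` provider, tests can swap in a stub via `app.dependency_overrides`, which is what keeps each layer independently testable.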
- Maintainable - Easy to find and modify specific functionality
- Testable - Each component can be unit tested independently
- Scalable - Services can be distributed or extracted to microservices
- Team-Friendly - Multiple developers can work on different layers
- Production-Ready - Follows 12-Factor App principles
- Backend: FastAPI + Python 3.8+
- AI/ML: OpenAI GPT-4 via LangChain
- Database: PostgreSQL (Supabase)
- Monitoring: LangSmith for LLM operations
- Deployment: Docker-ready with environment configuration
- Team Meeting Summaries - Automatically extract key points from discussions
- Customer Support Analysis - Identify common issues and decisions
- Project Status Tracking - Monitor progress through conversation analysis
- Automated Report Generation - Create structured summaries from chat logs
- Slack Channel Insights - Understand team communication patterns
- Python 3.8+
- OpenAI API key
- Supabase PostgreSQL database
- LangSmith API key (optional)
- Clone the repository

  ```bash
  git clone https://github.com/yourusername/email-slack-ai-automation-api.git
  cd email-slack-ai-automation-api
  ```

- Create a virtual environment

  ```bash
  python -m venv venv
  source venv/bin/activate   # On Windows: venv\Scripts\activate
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Set up environment variables

  ```bash
  cp .env.example .env
  # Edit .env with your actual values
  ```

- Run the API

  ```bash
  uvicorn main:app --reload
  ```
Create a .env file with the following variables:
# OpenAI API Key
OPENAI_API_KEY=your_openai_api_key_here
# Supabase Database URL
SUPABASE_DB_URL=your_supabase_database_url_here
# LangSmith Configuration (Optional)
LANGCHAIN_API_KEY=your_langsmith_api_key_here
LANGCHAIN_PROJECT=email-slack-automation
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
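A minimal sketch of how these variables might be loaded at startup, assuming python-dotenv is installed; the `Settings` class and the validation at the end are illustrative, not the project's actual configuration module.

```python
# config.py (illustrative) - load .env values into a simple settings object
import os
from dataclasses import dataclass

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads .env from the project root, if present


@dataclass(frozen=True)
class Settings:
    openai_api_key: str = os.getenv("OPENAI_API_KEY", "")
    supabase_db_url: str = os.getenv("SUPABASE_DB_URL", "")
    langchain_api_key: str = os.getenv("LANGCHAIN_API_KEY", "")  # optional
    langchain_project: str = os.getenv("LANGCHAIN_PROJECT", "email-slack-automation")


settings = Settings()

# Fail fast if the required keys are missing.
if not settings.openai_api_key or not settings.supabase_db_url:
    raise RuntimeError("OPENAI_API_KEY and SUPABASE_DB_URL must be set")
```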
Summarize a batch of messages and extract insights using AI.
Request Body (Single Object):
{
"source": "email",
"messages": [
{
"sender": "[email protected]",
"text": "Let's schedule a meeting for next week."
},
{
"sender": "[email protected]",
"text": "Great idea! How about Tuesday?"
}
]
}
Request Body (Array Format):
[
{
"source": "email",
"messages": [
{
"sender": "[email protected]",
"text": "Let's schedule a meeting for next week."
},
{
"sender": "[email protected]",
"text": "Great idea! How about Tuesday?"
}
]
}
]
Response:
{
"success": true,
"data": {
"summary": "Team discussed scheduling a meeting for next week, with Tuesday suggested as a potential date.",
"decisions": ["Schedule meeting for next week"],
"issues": [],
"reminders": ["Confirm meeting date"]
},
"meta": {
"saved": true,
"request_id": "uuid-here",
"processing_time": 2.34,
"message_count": 2
}
}
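A quick client-side example of calling the endpoint with the single-object payload shown above, using the requests library. The exact route (`/api/v1/summarize` here) and local port are assumptions based on the `/api/v1` prefix and the default uvicorn setup.

```python
import requests

payload = {
    "source": "email",
    "messages": [
        {"sender": "[email protected]", "text": "Let's schedule a meeting for next week."},
        {"sender": "[email protected]", "text": "Great idea! How about Tuesday?"},
    ],
}

# Assumes the API is running locally via `uvicorn main:app --reload`
resp = requests.post("http://localhost:8000/api/v1/summarize", json=payload, timeout=60)
resp.raise_for_status()

result = resp.json()
print(result["data"]["summary"])
print("Decisions:", result["data"]["decisions"])
```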
Comprehensive health check for all services.
Response:
{
"status": "healthy",
"timestamp": "2024-01-15T10:30:00Z",
"services": {
"api": true,
"llm": true,
"database": true
},
"error": null,
"database_error": null
}
Debug endpoint to check environment configuration.
Response:
{
"environment": {
"OPENAI_API_KEY": "set",
"SUPABASE_URL": "set",
"SUPABASE_ANON_KEY": "set"
},
"config": {
"max_requests_per_minute": 60,
"max_message_count": 100,
"max_text_length": 10000
}
}
Root endpoint for basic API status.
Response:
{
"status": "alive",
"service": "Email-Slack Automation API",
"version": "1.0.0",
"timestamp": "2024-01-15T10:30:00Z"
}
The API expects the following tables in your Supabase PostgreSQL database:
-- Messages table
CREATE TABLE messages (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
source VARCHAR(255) NOT NULL,
sender VARCHAR(255) NOT NULL,
content TEXT NOT NULL,
created_at TIMESTAMP DEFAULT NOW()
);
-- Summaries table
CREATE TABLE summaries (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
message_ids UUID[] NOT NULL,
summary TEXT NOT NULL,
categories JSONB NOT NULL,
created_at TIMESTAMP DEFAULT NOW()
);
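A hedged example of reading stored results back from these tables, joining each summary to its source messages through the `message_ids` array. psycopg2 is used here as an assumption; any PostgreSQL client works with the same SQL.

```python
import os

import psycopg2  # assumes psycopg2-binary is installed

conn = psycopg2.connect(os.environ["SUPABASE_DB_URL"])
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT s.summary, m.sender, m.content
        FROM summaries AS s
        JOIN messages  AS m ON m.id = ANY(s.message_ids)
        ORDER BY s.created_at DESC
        LIMIT 20
        """
    )
    for summary, sender, content in cur.fetchall():
        print(f"{summary[:60]!r} <- {sender}: {content[:60]!r}")
conn.close()
```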
- Rate Limiting: 60 requests per minute per client
- Input Validation: Maximum 100 messages per request, 10,000 characters per message
- Database Security: Connection pooling with automatic cleanup
- Error Handling: Comprehensive logging without exposing sensitive information
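The input-validation limits above map directly onto request models. A sketch assuming Pydantic v2; the actual model names in `app/models/` may differ, and the 255-character caps simply mirror the `VARCHAR(255)` columns in the schema.

```python
from typing import List

from pydantic import BaseModel, Field


class Message(BaseModel):
    sender: str = Field(..., min_length=1, max_length=255)
    text: str = Field(..., min_length=1, max_length=10_000)   # 10,000-character limit


class SummarizeRequest(BaseModel):
    source: str = Field(..., min_length=1, max_length=255)
    messages: List[Message] = Field(..., min_length=1, max_length=100)  # 100-message limit
```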
- Request Tracking: Unique request IDs for all operations
- Performance Metrics: LLM processing time tracking
- LangSmith Integration: Full LLM operation tracing and feedback
- Structured Logging: JSON-formatted logs with request context
- Health Checks: Database connectivity monitoring
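Request IDs and structured JSON logs are typically attached in a single middleware. A minimal sketch; the `X-Request-ID` header name and logger setup are assumptions, not the project's actual middleware.

```python
import json
import logging
import time
import uuid

from fastapi import FastAPI, Request

logger = logging.getLogger("api")
app = FastAPI()


@app.middleware("http")
async def request_context(request: Request, call_next):
    request_id = str(uuid.uuid4())
    start = time.perf_counter()
    response = await call_next(request)
    response.headers["X-Request-ID"] = request_id
    # One JSON log line per request, with timing and status for monitoring.
    logger.info(json.dumps({
        "request_id": request_id,
        "method": request.method,
        "path": request.url.path,
        "status": response.status_code,
        "duration_s": round(time.perf_counter() - start, 3),
    }))
    return response
```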
- Database: Connection pooling for optimal performance
- LLM: Async processing with timeout handling
- Caching: Built-in rate limiting with memory storage
- Monitoring: Real-time performance tracking
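Async LLM calls are usually guarded by a timeout so a slow upstream response cannot stall a request indefinitely. A sketch assuming a LangChain runnable with an async `ainvoke` method; the 30-second default and the 504 response are illustrative choices.

```python
import asyncio

from fastapi import HTTPException


async def summarize_with_timeout(chain, payload: dict, timeout_s: float = 30.0):
    """Run the LLM chain, but fail fast if it exceeds timeout_s seconds."""
    try:
        return await asyncio.wait_for(chain.ainvoke(payload), timeout=timeout_s)
    except asyncio.TimeoutError:
        raise HTTPException(status_code=504, detail="LLM request timed out")
```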
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenAI for GPT-4 API access
- LangChain for LLM orchestration
- FastAPI for the web framework
- Supabase for database hosting