# PolyLingua

A production-ready translation service built with **OPEA (Open Platform for Enterprise AI)** components, featuring a modern Next.js UI and a microservices architecture.

## 🏗️ Architecture

This service implements a **5-layer microservices architecture**:
```
┌──────────────────────────────────────────────────────────────┐
│                    Nginx Reverse Proxy                       │
│                        (Port 80)                             │
└────────────────┬─────────────────────────────────────────────┘
                 │
        ┌────────┴─────────┐
        │                  │
┌───────▼────────┐  ┌──────▼───────────────────┐
│   Next.js UI   │  │  Translation Megaservice │
│   (Port 5173)  │  │     (Port 8888)          │
└────────────────┘  └──────┬───────────────────┘
                           │
                  ┌────────▼────────────┐
                  │  LLM Microservice   │
                  │    (Port 9000)      │
                  └────────┬────────────┘
                           │
                  ┌────────▼────────────┐
                  │   TGI Model Server  │
                  │    (Port 8008)      │
                  └─────────────────────┘
```

### Components

1. **TGI Service** - HuggingFace Text Generation Inference for model serving
2. **LLM Microservice** - OPEA wrapper providing a standardized API
3. **Translation Megaservice** - Orchestrator that formats prompts and routes requests
4. **UI Service** - Next.js 14 frontend with React and TypeScript
5. **Nginx** - Reverse proxy for unified access

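The prompt formatting done by the megaservice (component 3) can be sketched as follows. This is an illustrative approximation in the style of ALMA translation prompts, not the actual template in `translation.py`:

```python
def format_translation_prompt(language_from: str, language_to: str, text: str) -> str:
    """Build an ALMA-style translation prompt for the LLM microservice.

    Hypothetical template -- the real one lives in translation.py.
    """
    return (
        f"Translate this from {language_from} to {language_to}:\n"
        f"{language_from}: {text}\n"
        f"{language_to}:"
    )
```

The model's completion after the final `{language_to}:` marker is taken as the translated text.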
## 🚀 Quick Start

### Prerequisites

- Docker and Docker Compose
- Git
- HuggingFace account (for model access)
- 8GB+ RAM recommended
- ~10GB disk space for models

### 1. Clone and Setup

```bash
cd PolyLingua

# Configure environment variables
./set_env.sh
```

You'll be prompted for:
- **HuggingFace API Token** - get one from https://huggingface.co/settings/tokens
- **Model ID** - default: `haoranxu/ALMA-13B` (a translation-optimized model)
- **Host IP** - your server's IP address
- **Ports and proxy settings**

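The script writes your answers to `.env`. A representative result looks roughly like the following (all values here are placeholders; `.env.example` is the authoritative template):

```shell
# .env -- illustrative placeholder values only
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx   # from https://huggingface.co/settings/tokens
LLM_MODEL_ID=haoranxu/ALMA-13B
MODEL_CACHE=./data
host_ip=192.168.1.100
NGINX_PORT=80
```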
### 2. Build Images

```bash
./deploy/build.sh
```

This builds:
- Translation backend service
- Next.js UI service

### 3. Start Services

```bash
./deploy/start.sh
```

Wait for the services to initialize (~2-5 minutes on the first run while models download).

### 4. Access the Application

- **Web UI**: http://localhost:80
- **API Endpoint**: http://localhost:8888/v1/translation

### 5. Test the Service

```bash
./deploy/test.sh
```

Or test manually:

```bash
curl -X POST http://localhost:8888/v1/translation \
  -H "Content-Type: application/json" \
  -d '{
    "language_from": "English",
    "language_to": "Spanish",
    "source_language": "Hello, how are you today?"
  }'
```

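The same call can be made from Python with only the standard library. The helper below just assembles the request object (URL, headers, JSON body); sending it is left to the caller, so nothing here assumes a running service:

```python
import json
import urllib.request

def build_translation_request(host: str, language_from: str,
                              language_to: str, text: str) -> urllib.request.Request:
    """Assemble the POST request for the translation endpoint.

    Note the field name: the text to translate goes in "source_language".
    """
    body = json.dumps({
        "language_from": language_from,
        "language_to": language_to,
        "source_language": text,
    }).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:8888/v1/translation",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To send:
# urllib.request.urlopen(build_translation_request(
#     "localhost", "English", "Spanish", "Hello, how are you today?"))
```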
## 📋 Configuration

### Environment Variables

Key variables in `.env`:

| Variable | Description | Default |
|----------|-------------|---------|
| `HF_TOKEN` | HuggingFace API token | Required |
| `LLM_MODEL_ID` | Model to use for translation | `haoranxu/ALMA-13B` |
| `MODEL_CACHE` | Directory for model storage | `./data` |
| `host_ip` | Server IP address | `localhost` |
| `NGINX_PORT` | External port for web access | `80` |

See `.env.example` for the full set of configuration options.

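One way to sanity-check the table above before starting the stack is a small loader that enforces the one required variable and fills in the documented defaults. This is a sketch for local scripting; the services themselves read `.env` via Docker Compose:

```python
import os

REQUIRED = ("HF_TOKEN",)
DEFAULTS = {
    "LLM_MODEL_ID": "haoranxu/ALMA-13B",
    "MODEL_CACHE": "./data",
    "host_ip": "localhost",
    "NGINX_PORT": "80",
}

def resolve_config(env: dict) -> dict:
    """Return the effective configuration, raising if a required variable is unset."""
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise RuntimeError(f"missing required variables: {', '.join(missing)}")
    config = {name: env.get(name, default) for name, default in DEFAULTS.items()}
    config.update({name: env[name] for name in REQUIRED})
    return config

# e.g. resolve_config(dict(os.environ))
```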
### Supported Models

The service works with any HuggingFace text generation model. Recommended models:

- **haoranxu/ALMA-13B** - Specialized translation model (default)
- **haoranxu/ALMA-7B** - Smaller ALMA variant
- **swiss-ai/Apertus-8B-Instruct-2509** - Multilingual instruction-tuned model

## 🛠️ Development

### Project Structure

```
PolyLingua/
├── translation.py          # Backend translation service
├── requirements.txt        # Python dependencies
├── Dockerfile              # Backend container definition
├── docker-compose.yaml     # Multi-service orchestration
├── set_env.sh              # Environment setup script
├── .env.example            # Environment template
├── ui/                     # Next.js frontend
│   ├── app/                # Next.js app directory
│   ├── components/         # React components
│   ├── Dockerfile          # UI container definition
│   └── package.json        # Node dependencies
└── deploy/                 # Deployment scripts
    ├── nginx.conf          # Nginx configuration
    ├── build.sh            # Image build script
    ├── start.sh            # Service startup script
    ├── stop.sh             # Service shutdown script
    └── test.sh             # API testing script
```

### Running Locally (Development)

**Backend:**
```bash
# Install dependencies
pip install -r requirements.txt

# Set environment variables
export LLM_SERVICE_HOST_IP=localhost
export LLM_SERVICE_PORT=9000
export MEGA_SERVICE_PORT=8888

# Run the service
python translation.py
```

**Frontend:**
```bash
cd ui
npm install
npm run dev
```

### API Reference

#### POST /v1/translation

Translate text between languages.

**Request:**
```json
{
  "language_from": "English",
  "language_to": "Spanish",
  "source_language": "Your text to translate"
}
```

**Response:**
```json
{
  "model": "translation",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Translated text here"
    },
    "finish_reason": "stop"
  }],
  "usage": {}
}
```

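Because the response follows the OpenAI chat-completion shape, extracting the translated text is a single nested lookup:

```python
def extract_translation(response: dict) -> str:
    """Pull the translated text out of a /v1/translation response body."""
    return response["choices"][0]["message"]["content"]

# Sample response in the documented shape:
sample = {
    "model": "translation",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Hola, ¿cómo estás hoy?"},
        "finish_reason": "stop",
    }],
    "usage": {},
}
# extract_translation(sample) -> "Hola, ¿cómo estás hoy?"
```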
## 🔧 Operations

### View Logs

```bash
# All services
docker compose logs -f

# A specific service
docker compose logs -f translation-backend-server
docker compose logs -f translation-ui-server
```

### Stop Services

```bash
./deploy/stop.sh
```

### Update Services

```bash
# Rebuild images
./deploy/build.sh

# Restart services
docker compose down
./deploy/start.sh
```

### Clean Up

```bash
# Stop and remove containers
docker compose down

# Also remove volumes (including the model cache)
docker compose down -v
```

## 🐛 Troubleshooting

### Service won't start

1. Check whether the ports are available:
   ```bash
   sudo lsof -i :80,8888,9000,8008,5173
   ```

2. Verify environment variables:
   ```bash
   cat .env
   ```

3. Check service health:
   ```bash
   docker compose ps
   docker compose logs
   ```

### Model download fails

- Ensure `HF_TOKEN` is set correctly
- Check your internet connection
- Verify the model ID exists on HuggingFace
- Check disk space in the `MODEL_CACHE` directory

### Translation errors

- Wait for the TGI service to fully initialize (check the logs)
- Verify the LLM service is healthy: `curl http://localhost:9000/v1/health`
- Check the TGI service: `curl http://localhost:8008/health`

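The "wait for TGI to initialize" step can be scripted. The sketch below polls a health probe until it succeeds or a retry budget runs out; the probe is passed in as a callable so the same loop works for both health endpoints above (the `urllib` example in the comment is one assumed way to probe):

```python
import time
import urllib.request

def wait_until_healthy(probe, attempts: int = 30, delay: float = 2.0) -> bool:
    """Call probe() until it returns True; give up after `attempts` tries."""
    for _ in range(attempts):
        try:
            if probe():
                return True
        except Exception:
            pass  # service not up yet; keep polling
        time.sleep(delay)
    return False

# e.g. wait_until_healthy(
#     lambda: urllib.request.urlopen("http://localhost:8008/health").status == 200)
```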
### UI can't connect to backend

- Verify `BACKEND_SERVICE_ENDPOINT` in `.env`
- Check that the backend is running: `docker compose ps`
- Test the API directly with the `curl` example from the Quick Start (the endpoint expects a POST request, so a bare `curl` GET may be rejected)

## 🔗 Resources

- [OPEA Project](https://github.com/opea-project)
- [GenAIComps](https://github.com/opea-project/GenAIComps)
- [GenAIExamples](https://github.com/opea-project/GenAIExamples)
- [HuggingFace Text Generation Inference](https://github.com/huggingface/text-generation-inference)

## 📧 Support

For issues and questions:
- Open an issue on GitHub
- Check existing issues for solutions
- Review the OPEA documentation

---

**Built with OPEA - Open Platform for Enterprise AI** 🚀