The AI Prompt Optimizer is a powerful application designed to enhance and optimize textual prompts for AI image generation models. This tool helps users create more effective prompts that yield better results with models like SDXL, Stable Diffusion 1.5, Flux 1.0 dev, and HiDream.
- Natural Language Input: Describe your scene in simple terms in any language
- Automatic Translation: Enter in any language, receive optimized prompts in English (required by image models)
- Support for Multiple Image Models: Optimization for SDXL, Stable Diffusion 1.5, Flux 1.0 dev, or HiDream
- Flexible LLM Backend Options:
  - Local processing with LM Studio
  - Cloud processing with the Google Gemini Flash 2.0 API
- Advanced Controls:
  - Adjust level of detail
  - Control prompt length
  - Visual style slider (raw to professional)
- Cascade Reasoning: Model-specific logical optimization
- Batch Processing: Import TXT or CSV files containing multiple descriptions
- Automatic Prompt Evaluation: Quality scoring system for generated prompts
- User Feedback System: Rate and comment on generated prompts
- Multilingual Interface: Available in French and English
- Easy Export Options: Save results in TXT, JSON, or CSV
- Windows 10 64-bit or newer
- Python 3.8 or newer
- Internet connection (for initial setup and when using the Google Gemini API)
- LM Studio (optional, for local LLM processing)
- Download the Application:
  - Download the latest version from my page
  - Extract the ZIP file to your preferred location
- Run the Installation Script:
  - Open the extracted folder
  - Double-click on `install.bat`
  - Wait for the installation to complete (this will create a virtual environment and install all dependencies)
- Launch the Application:
  - Once the installation is complete, double-click on `launcher.bat`
  - The application will open in your default web browser
If you prefer manual installation:
- Create a virtual environment: `python -m venv venv`
- Activate the virtual environment: `venv\Scripts\activate`
- Install the dependencies: `pip install -r requirements.txt`
- Launch the application: `python app.py`
For local LLM processing without relying on external APIs, you can use LM Studio:
- Download and Install LM Studio:
  - Download from lmstudio.ai
  - Follow the installation instructions
- Download a Compatible Model:
  - Open LM Studio
  - Go to the "Models" tab
  - Download a model suitable for text generation (recommended: Mistral 7B, Llama 2, or similar)
- Start the Local Server:
  - In LM Studio, select your downloaded model
  - Click on "Local Server" in the left sidebar
  - Click "Start Server"
  - The server will run by default at http://127.0.0.1:1234/v1
- Configure the AI Prompt Optimizer:
  - In the AI Prompt Optimizer application, select "LM Studio (local)" as the LLM backend
  - The application will automatically connect to the local server
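If you want to confirm that the local server responds before launching the application, you can query LM Studio's OpenAI-compatible chat endpoint directly. The snippet below is a minimal sketch for testing only, not the application's own code; the system/user messages and the model identifier are illustrative placeholders.

```python
# Standalone test of the LM Studio local server (assumes the `requests` package
# is available and the server is running at the default address).
import requests

LM_STUDIO_URL = "http://127.0.0.1:1234/v1/chat/completions"

payload = {
    # LM Studio generally serves whichever model is currently loaded;
    # adjust this identifier if your version requires the exact model name.
    "model": "local-model",
    "messages": [
        {"role": "system", "content": "You rewrite short scene descriptions as detailed image prompts."},
        {"role": "user", "content": "A foggy harbor at sunrise, fishing boats, cinematic lighting"},
    ],
    "temperature": 0.7,
}

response = requests.post(LM_STUDIO_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```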
To use the Google Gemini API:
- Obtain an API Key:
  - Visit Google AI Studio
  - Create an account if you don't have one
  - Generate an API key
- Configure the Application:
  - In the AI Prompt Optimizer, select "Google Gemini Flash 2.0 (API)" as the LLM backend
  - Enter your API key in the designated field
  - The key will be saved for future sessions
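To verify your key outside the application, you can call the Gemini API directly with Google's `google-generativeai` package. This is an illustrative sketch only, not the application's internal code; the environment variable name and prompt text are placeholders.

```python
# Standalone check of a Gemini API key (assumes `pip install google-generativeai`
# and that GEMINI_API_KEY is set in the environment).
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash")

description = "A foggy harbor at sunrise, fishing boats, cinematic lighting"
response = model.generate_content(
    f"Rewrite this scene description as a detailed English prompt for SDXL: {description}"
)
print(response.text)
```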
- Select your target image model (SDXL, Stable Diffusion 1.5, etc.)
- Choose your preferred LLM backend
- Enter a description of the desired image in the text field
- Adjust the sliders for detail level, prompt length, and visual style as needed
- Click "Optimize Prompt"
- View the optimized prompt and its evaluation score
- Use the "Copy to Clipboard" button to copy the result
- Prepare a TXT file (one description per line) or a CSV file
- Go to the "Batch Processing" tab
- Upload your file
- Click "Process Batch"
- View the results table
- Export the results in your preferred format
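As a reference, a TXT input file is simply one description per line; the example below is purely illustrative:

```
A foggy harbor at sunrise with fishing boats
A cyberpunk street market at night, lit by neon signs
Portrait of an elderly violinist in a sunlit room
```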
- Access your prompt history in the "History" tab to review past optimizations
- Use the export options to save your prompts in TXT, JSON, or CSV format
- Rate and provide feedback on prompts to improve future generations
- LM Studio Connection Error: Ensure the LM Studio server is running and accessible at http://127.0.0.1:1234/v1
- Google Gemini API Error: Verify your API key is correct and not expired
- Installation Failure: Make sure Python 3.8+ is installed and available in your PATH
- Application Crash: Check that all dependencies are properly installed
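For the LM Studio connection error in particular, you can check whether the local server is reachable with a short, generic test against its OpenAI-compatible `/v1/models` endpoint (independent of this application):

```python
# Quick connectivity check for the LM Studio local server (assumes the default
# address and the `requests` package). A successful response means the server is up.
import requests

try:
    r = requests.get("http://127.0.0.1:1234/v1/models", timeout=5)
    r.raise_for_status()
    print("LM Studio server is reachable. Models reported:", r.json())
except requests.RequestException as exc:
    print("Could not reach LM Studio at 127.0.0.1:1234:", exc)
```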
If you encounter issues not covered here:
- Visit the GitHub Issues page
- Submit a new issue with details about your problem
- Contact the maintainer at [email protected]
Contributions are welcome! Feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add an amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License – see the LICENSE file for details.
- Special thanks to the developers of Gradio, which powers my interface
- Recognition to the creators of the image generation models supported by this tool
If the AI Prompt Optimizer helps in your creative workflow, please consider supporting its development. Your contributions help maintain the project and add new features. If you wish, and especially if you can, please support this project and my future work!
Visit my Patreon link: https://www.patreon.com/preview/campaign?u=172098706&fan_landing=true&view_as=public