Error Messages List
This is a work-in-progress list, so some instructions still need extra detail added (e.g. the exact requirements file locations).
🟦 ModuleNotFoundError: No module named 'XXXXXXX'
Explanation and Fix
This error occurs when Python cannot find a module that your code is trying to import. It typically means that either:
- The required package is not installed in your environment.
- The package is installed but not in the Python path.
To fix this:
1. Ensure you're using the correct Python environment:
   - On Windows: run `start_environment.bat`
   - On Linux: run `./start_environment.sh`
2. Ensure all requirements are installed correctly: `pip install -r requirements.txt`
3. If the error persists, try uninstalling and reinstalling the specific package with the force and upgrade options: `pip uninstall XXXXXXX` followed by `pip install XXXXXXX --upgrade --force-reinstall`
4. Clear the pip cache to ensure you're not using any corrupted downloads: `pip cache purge`
5. Check that the package name in your code exactly matches the installed package name.
If you're still experiencing issues, ensure you're using the correct version of Python for your project.
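If you want to confirm which interpreter and packages Python is actually picking up, a quick check along these lines can help. This is a minimal sketch; the module name used is a placeholder for whichever package your ModuleNotFoundError names:

```python
# Minimal sketch: show which interpreter is active and where a module
# resolves from. Replace "some_module" with the module from your error.
import importlib.util
import sys

print("Interpreter:", sys.executable)  # should point inside your AllTalk environment

spec = importlib.util.find_spec("some_module")  # placeholder module name
if spec is None:
    print("Module not found in this environment - install it with pip")
else:
    print("Module found at:", spec.origin)
```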
🟦 ModuleNotFoundError: No module named 'ffmpeg.asyncio'
Explanation and Resolution
You may encounter the following error when starting AllTalk:
```
Traceback (most recent call last):
  File "C:\AI\alltalk_tts\tts_server.py", line 148, in <module>
    from ffmpeg.asyncio import FFmpeg
ModuleNotFoundError: No module named 'ffmpeg.asyncio'
[AllTalk TTS] Warning TTS Engine has NOT started up yet. Will keep trying for 240 seconds maximum. Please wait.
[AllTalk TTS] Warning Mechanical hard drives and a slow PCI BUS are examples of things that can affect load times.
[AllTalk TTS] Warning Some TTS engines index their AI TTS models on loading, which can be slow on CPU or old systems.
[AllTalk TTS] Warning Using one of the other TTS engines on slower systems can help ease this issue.
```
If you are on Windows, check the Windows sections below; if you are on Linux, see the Linux sections further down under "Manual Installation of ffmpeg".
Windows: This error typically occurs when the necessary build tools are not properly installed or configured on your Windows system. Specifically, it's likely related to missing or incorrectly installed:
- "MSVC v143 - VS 2022 C++ x64/x86 build tools"
- "Windows 10 SDK" or "Windows 11 SDK" (depending on your version of Windows)
These tools are required for compiling certain Python packages, including the FFmpeg module that AllTalk depends on.
To resolve this issue, follow these steps:
1. Delete the `\alltalk_tts\alltalk_environment\` folder to ensure a clean slate.
2. Reinstall the necessary build tools:
   a. Download Visual Studio 2022 Community Edition:
      - Go to the Visual Studio downloads page
      - Look for "Visual Studio 2022" under the "Community" edition
      - Click the "Free download" button under Visual Studio Community 2022
   b. Run the installer and select the following components:
      - "MSVC v143 - VS 2022 C++ x64/x86 build tools"
      - "Windows 10 SDK" or "Windows 11 SDK" (depending on your Windows version)
3. After installing the build tools, reinstall AllTalk following the standard installation procedure.
For detailed instructions on installing these requirements, refer to the AllTalk wiki page: Install ‐ WINDOWS ‐ Python C & SDK Requirements
- Ensure you have administrator rights when installing these tools and AllTalk.
- The installation of Visual Studio build tools might take some time, so be patient during the process.
- If you're using an older version of Windows, make sure to install the appropriate SDK version.
Always check that you have the correct build tools installed before setting up AllTalk or any Python environment that requires compiled modules. This can save time troubleshooting compilation-related errors later.
If the above steps don't resolve the issue, you can try manually installing ffmpeg using conda in your Python environment. Follow these instructions based on your operating system and the environment you're using.
If you do not understand what a Python environment is, or how to tell when you are working inside one, please read the quick explainer here: Understanding Python Environments Simplified
Windows (standalone AllTalk):
1. Open a command prompt in the AllTalk folder `alltalk_tts`
2. Activate AllTalk's custom Python environment: `start_environment.bat`
3. Navigate to the Conda scripts folder: `cd alltalk_environment\conda\Scripts`
4. Install ffmpeg: `conda install -y conda-forge::ffmpeg`
Linux (standalone AllTalk):
1. Open a terminal in the AllTalk folder `alltalk_tts`
2. Activate AllTalk's custom Python environment: `./start_environment.sh`
3. Navigate to the Conda scripts folder: `cd alltalk_environment/conda/Scripts`
4. Install ffmpeg: `conda install -y conda-forge::ffmpeg`
Windows (AllTalk as part of Text Generation Web UI):
1. Open a command prompt in the Text Generation Web UI folder `text-generation-webui`
2. Activate Text Generation Web UI's custom Python environment: `cmd_windows.bat`
3. Navigate to the Conda scripts folder: `cd installer_files\conda\Scripts`
4. Install ffmpeg: `conda install -y conda-forge::ffmpeg`
Linux (AllTalk as part of Text Generation Web UI):
1. Open a terminal in the Text Generation Web UI folder `text-generation-webui`
2. Activate Text Generation Web UI's custom Python environment: `./cmd_linux.sh`
3. Navigate into the Text Generation Web UI's conda scripts folder: `cd installer_files/conda/Scripts`
4. Install ffmpeg: `conda install -y conda-forge::ffmpeg`
- Make sure you have internet connectivity when running these commands.
- If you encounter permission issues, you may need to run the command prompt or terminal as an administrator.
- After installation, restart your application or script to ensure the changes take effect.
- If you still encounter the "No module named 'ffmpeg.asyncio'" error after installing ffmpeg, you may need to reinstall or update the ffmpeg-python package, using the same pattern as earlier: `pip uninstall ffmpeg-python` followed by `pip install ffmpeg-python --upgrade --force-reinstall`
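After installation, you can verify that both the ffmpeg binary and the Python module are visible from inside the activated environment. This is a minimal check, not part of AllTalk itself:

```python
# Minimal check: confirm the ffmpeg binary and the ffmpeg.asyncio module
# are both visible from inside the activated environment.
import shutil

print("ffmpeg binary:", shutil.which("ffmpeg") or "NOT FOUND")

try:
    from ffmpeg.asyncio import FFmpeg  # the same import tts_server.py performs
    print("ffmpeg.asyncio imports OK")
except ModuleNotFoundError as err:
    print("Import failed:", err)
```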
🟦 ImportError: DLL load failed
Explanation and Fix
This error occurs when Python is unable to load a required DLL (Dynamic-Link Library) file. This can happen due to several reasons:
- The DLL file is missing or corrupted.
- There's a mismatch between the Python version and the package version.
- The system's PATH environment variable is not set correctly.
To fix this:
1. Ensure you're using the correct Python environment:
   - On Windows: run `start_environment.bat`
   - On Linux: run `./start_environment.sh`
2. Reinstall the package that's causing the error with force and upgrade options: `pip uninstall package_name` followed by `pip install package_name --upgrade --force-reinstall`
3. Clear the pip cache to ensure you're not using any corrupted downloads: `pip cache purge`
4. If the error persists, try installing a specific version of the package that's known to be compatible with your Python version: `pip install package_name==X.X.X`
5. Check that all required Visual C++ Redistributables are installed on your system.
6. Verify that your system's PATH environment variable includes the directory containing the DLL files.
If the problem continues, you may need to investigate which specific DLL is failing to load and ensure it's present in your system.
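One way to identify which DLL is failing is to try loading the suspect file directly with Python's ctypes module (Windows only). This is a minimal sketch and the path shown is purely illustrative:

```python
# Minimal sketch (Windows): load a DLL directly to surface the real error.
# The path is hypothetical - substitute the DLL you suspect is failing.
import ctypes

dll_path = r"C:\path\to\suspect.dll"
try:
    ctypes.WinDLL(dll_path)
    print("Loaded OK:", dll_path)
except OSError as err:
    # The OSError text often names the missing dependency or bad architecture.
    print("Failed to load:", dll_path)
    print("Reason:", err)
```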
🟦 error: command 'gcc' failed (Linux)
Explanation and Fix
This error occurs on Linux systems when trying to install Python packages that require compilation of C extensions. It happens because the GNU Compiler Collection (GCC) is not installed on the system.
Python often relies on C extensions for performance-critical parts of packages. When installing such packages, Python uses the system's C compiler (typically gcc) to build these extensions. If gcc is not available, the installation fails.
To fix this:
1. Install the necessary compilers using the following command: `sudo apt install gcc g++`
2. After installation, retry the Python package installation.
Note: You may need to install additional development libraries depending on the package you're trying to install. For example, Python development files might be required: `sudo apt install python3-dev`
After installing gcc and any necessary development libraries, remember to rerun your package installation command.
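To confirm the compilers are now visible on your PATH before retrying, a quick check like this can help (a minimal sketch, nothing package-specific):

```python
# Minimal sketch: check whether gcc and g++ are on PATH after installation.
import shutil

for tool in ("gcc", "g++"):
    path = shutil.which(tool)
    print(f"{tool}: {path or 'NOT FOUND - install it before retrying pip'}")
```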
🟦 RuntimeError: PyTorch version mismatch! (DeepSpeed)
Explanation and Fix
This error occurs when there's a version mismatch between PyTorch and DeepSpeed. The error message typically looks like this:
```
RuntimeError: PyTorch version mismatch! DeepSpeed ops were compiled and installed with a different version than what is being used at runtime. Please re-install DeepSpeed or switch torch versions. Install torch version=2.2, Runtime torch version=2.3
ERROR: Application startup failed. Exiting.
```
This error indicates that DeepSpeed was compiled and installed with a different version of PyTorch than the one currently being used at runtime.
To fix this issue:
1. Ensure that you have the correct versions of PyTorch and DeepSpeed installed for your project.
2. Re-install DeepSpeed with the correct PyTorch version.
3. If the problem persists, you may need to switch your PyTorch version to match the one DeepSpeed was compiled with.
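Before reinstalling anything, it can help to confirm which versions are actually active in the environment. A minimal check, using the standard version attributes of the torch and deepspeed packages:

```python
# Minimal check: print the runtime versions of torch and deepspeed so you
# can compare them against the versions named in the error message.
import torch
print("torch:", torch.__version__)

try:
    import deepspeed
    print("deepspeed:", deepspeed.__version__)
except ImportError:
    print("deepspeed is not installed in this environment")
```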
For detailed instructions on installing the correct versions and resolving this issue, please refer to one of the following guides:
- For Linux: Install - LINUX - Requirements and DeepSpeed
- For Windows: Windows - Requirements & DeepSpeed
These guides provide step-by-step instructions for setting up the correct environment and resolving version conflicts.
🟦 Warning: The text length exceeds the character limit of XXX for language 'XX'
Explanation and Potential Solutions
This warning appears when using Coqui TTS models and indicates that the input text exceeds the character limit for the specified language. The warning typically looks like this:
Warning: The text length exceeds the character limit of XXX for language 'XX', this might cause truncated audio.
- Coqui TTS models have predefined character limits for each supported language.
- These limits are set in the tokenizer and are based on the model's training data.
- Exceeding this limit may result in degraded audio quality or truncation of the generated speech.
- This is a warning, not an error. The system will still attempt to process the text, but the results may not be optimal.
- The actual impact on audio quality can vary depending on the specific model and the extent to which the limit is exceeded.
1. Reduce Input Text: The most straightforward solution is to shorten your input text to fall within the character limit for the chosen language.
2. Split Text: For longer passages, consider splitting the text into smaller segments that fall within the limit and process them separately (see the sketch after this list).
3. Adjust Tokenizer Limits: While it's possible to modify the character limits in the tokenizer, this won't actually improve the model's performance or remove the underlying limitation. It will only suppress the warning.
To adjust the limit (not recommended):

```python
tokenizer = TTSTokenizer.from_pretrained(model_name)
tokenizer.max_input_tokens = new_limit  # e.g., 1000
tokenizer.save_pretrained(output_dir)
```
Note: This modification only affects the warning threshold, not the model's actual capabilities.
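As a rough illustration of the "Split Text" option, here is one way to break long input into sentence-based chunks under a given character limit. This is a minimal sketch, not AllTalk's own splitting logic, and the 250-character limit is just an example value:

```python
# Minimal sketch: split text into sentence-based chunks below a character
# limit. Not AllTalk's own logic; the limit value is illustrative only.
import re

def split_text(text: str, char_limit: int = 250) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the limit.
        if current and len(current) + 1 + len(sentence) > char_limit:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

for chunk in split_text("This is a sentence. " * 30, char_limit=250):
    print(len(chunk), "chars:", chunk[:40], "...")
```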
For more detailed discussions on this topic, you can refer to the following section:
🟦 TypeError in Transformers Library with Coqui XTTS TTS Engine `_prepare_attention_mask_for_generation`
Explanation and Fix
This error occurs due to a version incompatibility between the Transformers library and the Coqui XTTS TTS engine. The error message typically looks like this:
```
alltalk_tts_v2beta\alltalk_environment\env\Lib\site-packages\transformers\generation\utils.py", line 498, in _prepare_attention_mask_for_generation
    torch.isin(elements=inputs, test_elements=pad_token_id).any()
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: isin() received an invalid combination of arguments - got (elements=Tensor, test_elements=int, ), but expected one of:
 * (Tensor elements, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
 * (Number element, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
 * (Tensor elements, Number test_element, *, bool assume_unique, bool invert, Tensor out)
```
This error is specific to the Coqui XTTS TTS engine and occurs when it's used with a newer version of the Transformers library that is incompatible with the current Coqui XTTS implementation.
To resolve this issue, you need to install a specific older version of the Transformers library that is compatible with the current Coqui XTTS TTS engine. Follow these steps:
1. Activate your Python environment:
   - On Windows: run `start_environment.bat`
   - On Linux: run `./start_environment.sh`
2. Install the compatible version of Transformers: `pip install transformers==4.40.2`
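To confirm the pin took effect, you can check the installed version from inside the same environment (a minimal check, nothing AllTalk-specific):

```python
# Minimal check: confirm the Transformers version active in this environment.
import transformers
print(transformers.__version__)  # expected: 4.40.2 after the downgrade
```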
The Coqui TTS team at Idiap is working on an update to make the XTTS engine compatible with newer versions of Transformers. However, this update is not yet ready for general release. You can track the progress of this update here: Idiap Coqui-AI TTS Update
Until this update is released, using the older version of Transformers as described above is the recommended solution.
🟦 ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
Explanation and Troubleshooting Steps
This error is a network-related issue that occurs when a connection between the client (your web browser running Gradio) and the server (AllTalk backend) is unexpectedly terminated.
```
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
```
This error indicates that the network request from Gradio in your web browser couldn't reach the AllTalk backend. It's a general network error that can have various causes.
- AllTalk is not running or has crashed.
- Firewall or antivirus software is intercepting the network request.
- Network configuration issues.
- Recent security updates or browser extensions blocking localhost or specific IP ranges.
1. Restart the System: Often, a simple system restart can resolve this issue.
2. Check AllTalk Status: Ensure that AllTalk is running and hasn't crashed (see the quick reachability check after this list).
3. Try a Different Browser: If the issue persists in one browser, try accessing AllTalk through a different web browser.
4. Use Local IP Address: Instead of using localhost or 127.0.0.1, try accessing AllTalk using your machine's local IP address. For example: `http://192.168.1.20:7852/?__theme=dark` (replace 192.168.1.20 with your actual local IP address)
5. Check Firewall and Antivirus: Temporarily disable your firewall and antivirus software to see if they're causing the issue. If this resolves the problem, add an exception for AllTalk.
6. Review Browser Extensions: Disable browser extensions, especially security-related ones, to see if they're interfering with the connection.
7. Network Reset: In some cases, resetting your network settings might help:
   - Open Command Prompt as Administrator
   - Run these commands:

     ```
     netsh winsock reset
     netsh int ip reset
     ipconfig /release
     ipconfig /renew
     ipconfig /flushdns
     ```

   - Restart your computer
8. Check for Recent Updates: If the issue started after a recent update, consider if any security patches might be affecting local connections. For example, there have been recent concerns about the "0-day flaw" in browsers: 18-year-old security flaw in Firefox and Chrome exploited in attacks
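As a quick way to test whether the AllTalk page is answering at all (steps 2 and 4 above), you can probe the URL from Python. This is a minimal sketch; the port 7852 is taken from the example URL above and may differ on your setup:

```python
# Minimal sketch: check whether the AllTalk web page answers at all.
# The URL is illustrative (port taken from the example above) - adjust it.
import urllib.error
import urllib.request

url = "http://127.0.0.1:7852/"
try:
    with urllib.request.urlopen(url, timeout=5) as response:
        print("Reachable, HTTP status:", response.status)
except urllib.error.URLError as err:
    print("Not reachable:", err.reason)
```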
If the issue persists after trying these steps, it may be worth checking for any system-wide network issues or consulting with a network administrator if you're on a managed network.
🟦 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed
Explanation and Resolution
During the installation of requirements, you may encounter the following warning:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
coqui-tts 0.24.1 requires transformers<4.41.0,>=4.33.0, but you have transformers 4.43.3 which is incompatible.
```
This warning is related to a version conflict between Coqui-TTS and the Transformers library. It's connected to the "TypeError in Transformers Library with Coqui XTTS TTS Engine" issue discussed earlier.
Key points to understand:
- Coqui-TTS and the XTTS engine currently require Transformers version 4.40.2 for generating streaming audio.
- Other TTS engines (like Parler-TTS) may attempt to install a later version of Transformers.
- This conflict primarily affects streaming audio generation with the Coqui XTTS engine.
- Non-streaming audio generation is not affected by this issue.
- Streaming audio with XTTS: Affected (requires Transformers 4.40.2)
- Non-streaming audio with XTTS: Not affected (works with any Transformers version)
- Other uses (e.g., Kobold): May be affected if using XTTS with streaming audio
To resolve this issue and enable streaming audio with XTTS:
1. Downgrade Transformers to version 4.40.2: `pip install transformers==4.40.2`
2. Be aware that this may generate a dependency warning for Parler-TTS, but it should still function correctly.
- This is a temporary solution until the Coqui-TTS engine is updated to support newer versions of Transformers for streaming audio.
- AllTalk is not responsible for the maintenance or upkeep of the Coqui-TTS engine.
- You can re-run the setup process and reapply requirements in the future if you need to change this configuration.
The Coqui-TTS team is working on updating their engine to support newer versions of Transformers. Once this update is released, it should resolve the conflict without requiring a specific version downgrade.
If you primarily use XTTS for streaming audio, it's recommended to downgrade Transformers to 4.40.2. If you don't use XTTS streaming or can work without it temporarily, you may keep the latest version of Transformers installed.
🟦 HTTP 404: Failed to fetch audio data (SillyTavern)
Explanation and Resolution
When using SillyTavern with AllTalk, you may encounter the following error:
HTTP 404: Failed to fetch audio data
This error typically occurs due to a version mismatch between the AllTalk extension in SillyTavern and the version of AllTalk you're running.
Key points to understand:
- SillyTavern currently comes bundled with the AllTalk version 1 extension by default.
- If you're running AllTalk version 2, the bundled extension will be incompatible, leading to this error.
- The extension needs to be manually updated to match your AllTalk version.
To resolve this issue, you need to update the AllTalk extension in SillyTavern to match your AllTalk version. Here's what to do:
- Determine which version of AllTalk you're running (likely version 2 if you're seeing this error).
- Follow the instructions provided in the AllTalk wiki to update the SillyTavern extension.
Detailed instructions for updating the extension can be found here: SillyTavern Extension Update Instructions
- Always ensure that your SillyTavern AllTalk extension version matches the version of AllTalk you're running.
- Keep an eye on both AllTalk and SillyTavern updates, as you may need to manually update the extension after upgrading either component.
In the future, the extension will be updated in SillyTavern. Until then, manual updates are necessary to ensure compatibility.
Before reporting issues with AllTalk integration in SillyTavern, always verify that you have the correct extension version installed. If you're unsure, refer to the AllTalk documentation or the SillyTavern Extension wiki page linked above.
🟦 UserWarning: 1Torch was not compiled with flash attention.
Explanation
You may encounter the following warning when running PyTorch-based applications:
UserWarning: 1Torch was not compiled with flash attention.
This warning is related to a feature called "Flash Attention" in PyTorch, which is an optimization for certain types of attention mechanisms in neural networks.
Key points to understand:
- This is a generic PyTorch issue and not specific to AllTalk or any particular application.
- Flash Attention is not currently supported on Windows systems.
- On Windows, this message is a notification rather than an error.
- For Windows users: This warning can be safely ignored. It does not affect the functionality of your PyTorch-based applications.
- For non-Windows users: If you're seeing this on a non-Windows system where Flash Attention should be supported, you might want to investigate further.
PyTorch checks if it was compiled with Flash Attention support. On Windows, where this feature is not available, the check always returns false, resulting in this warning.
For Windows users:
- No action is required. You can safely ignore this warning.
- If you want to suppress the warning, you can use Python's warning filters (see the sketch below), but this is generally unnecessary.
For non-Windows users seeing this unexpectedly:
- Ensure you have the latest version of PyTorch installed.
- Check if your CUDA installation (if applicable) is up to date and compatible with your PyTorch version.
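If you do want to silence the message, a standard Python warning filter works. A minimal sketch; as noted above, this is generally unnecessary:

```python
# Minimal sketch: silence the "not compiled with flash attention" UserWarning.
# Set the filter early in your script, before any model inference runs.
import warnings

warnings.filterwarnings(
    "ignore",
    message=".*not compiled with flash attention.*",
    category=UserWarning,
)
```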
This issue has been widely discussed in the PyTorch community. For more details, you can refer to the following GitHub issue: PyTorch Issue #108175
Unless you specifically need Flash Attention for your work (which is rare for most users), you can continue using PyTorch as normal. This warning does not indicate a problem with your setup or code on Windows systems.
🟦 IMPORTANT: You are using gradio version 4.32.2, however version 5.x.x is available, please upgrade.
Explanation and Troubleshooting Steps
Gradio's developers just like telling everyone to upgrade to their latest version. I can't stop the messages from appearing, and as I have not yet tested later versions of Gradio, the messages are safe to ignore.