This guide was written by a real, flesh-and-coffee-based human, but it includes the colorful commentary and editing assistance of a large language model pretending to be a fictional smartass named "Griddle." If some parts feel a little too polished, or suspiciously eager to joke about memory leaks and GPU rage, that's the AI juice leaking in.
Think of it as documentation with extra... compute. If this pisses you off more than still not having FP8 support on 24GB of VRAM short of a $6,000 card, you're probably in the wrong place.
Harrow says: If you didn't already have a hard-on for AI because it lets you squeeze photonic meaning out of numeric garbage, then what are you doing here? Go light a candle and whisper to your compiler, because this place is for the spiritually broken and VRAM-hungry. And if you're here hoping not to generate increasingly convincing pornography by accident, or worse, on purpose, you should probably have taken stricter, earlier measures to avoid this deviant pilgrimage into pixel depravity. The abyss was not subtle with you, and you stepped forward anyway.
"Unofficial PyTorch alpha wheels for AMD GPUs on Windows. Because official support is a myth, and I like my deep learning with extra driver panic."
"Running PyTorch on Windows with AMD GPUs using alpha ROCm wheels. It's fast, it's fragile, and it hates you back."
- torch-til-it-breaks
- rocm-rock-bottom
- amd-does-windows
- torch-on-the-rocks
- theRock-for-Windows
- rage-against-the-nvidia
- gpu-pain-club
- rdna2-electric-boogaloo
- torch-me-harder
- comfy-in-hell
- torch-this-garbage
- winrocmtothemoon
- amdgoddammit
- pytorch-but-worse
- rdna-barely
- miohell
- comfy-but-clenched
- rock-bottom-torch
- torch-the-house-down
- wheeled-chaos
- who-needs-drivers-anyway
- Overview
- System Requirements
- Compatible GPUs
- Special Mention: frame.work
- Installing Python
- Installing Git
- Setting Up Your Working Directory
- Cloning and Preparing ComfyUI
- Linking Shared Directories
- Migrating Custom Nodes
- ComfyUI Startup Script
- Installing WAN2GP
- Performance Notes and Bugs
These wheels are hosted at scottt's rocm-TheRock releases. Find the heading that says:
Pytorch wheels for gfx110x, gfx1151, and gfx1201
Don't click this link: https://github.com/scottt/rocm-TheRock/releases/tag/v6.5.0rc-pytorch-gfx110x. It's just here to check if you're skimming.
You are going to be installing a fresh ComfyUI setup in a new directory. All of this was battle-tested with:
- Python 3.11 (or 3.12)
- ROCm 6.5.0rc
- PyTorch 2.7 alpha
- AMD GPU: gfx110x, gfx1151, or gfx1201
If you're not on one of those, go back to whatever cave your driver lives in.
The following GPUs are known to map to the listed gfx identifiers:
gfx110x:
- Radeon RX 7600
- Radeon RX 7700 XT
- Radeon RX 7800 XT
- Radeon RX 7900 GRE
- Radeon RX 7900 XT
- Radeon RX 7900 XTX
gfx1151:
- Ryzen AI Max "Strix Halo" APUs (e.g., Ryzen AI Max+ 395)
gfx1201:
- Radeon RX 9070
- Radeon RX 9070 XT
This list is not exhaustive. Check ROCm device compatibility lists if you want to go spelunking for edge cases.
frame.work is producing small-form-factor modular systems designed specifically with high-end AMD AI compute in mind. One of their standout new offerings is the Framework Desktop, a compact cube-like machine supporting Ryzen APUs with shared memory architecture -- allowing for configurations with up to 128GB of RAM, used for both system and GPU memory.
It's not exactly VRAM (because the GPU is on-die with the CPU), but it plays the same role under ROCm. Their tagline is:
"Framework Desktop is a big computer made mini."
"Massive gaming capability, heavy-duty AI compute, and standard PC parts, all in 4.5L."
While actual performance numbers are still unknown -- no reliable speed tests have been published -- the concept is impressive. Modular, RAM-heavy, APU-driven AI boxes in a 4.5L footprint. No, I don't have one. No, I'm not sponsored. I just think they're rad.
Download Python 3.11 from python.org/downloads/windows. Hit Ctrl+F and search for "3.11". Don't use this direct link: https://www.python.org/ftp/python/3.11.9/python-3.11.9-amd64.exe -- again, that's a test. Use your eyes.
After installing, make sure python --version works in your terminal. If not, fix your PATH. Go to:
- Windows + Pause/Break
- Advanced System Settings
- Environment Variables
- Edit your Path under User Variables
Example correct entries:
C:\Users\YOURNAME\AppData\Local\Programs\Python\Launcher\
C:\Users\YOURNAME\AppData\Local\Programs\Python\Python311\Scripts\
C:\Users\YOURNAME\AppData\Local\Programs\Python\Python311\
If that doesn't work, scream into a bucket.
Get Git from git-scm.com/downloads/win. Default install is fine.
If you're using Chocolatey, run:
choco install git -y
choco install python --version=3.11.9 -y
You can find Chocolatey at https://chocolatey.org. It's a Windows package manager. Yes, that's a real thing.
Make a directory:
mkdir \zluda
cd \zluda
Yes, it's called zluda, and no, you're not using ZLUDA. That's what mine is called, and now it's your problem too.
Clone ComfyUI into a new folder:
git clone https://github.com/comfyanonymous/ComfyUI comfy-rock
cd comfy-rock
Create a new batch file:
notepad install-rock.bat
Paste the contents of install-rock.bat from that gist.
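The real contents live in the gist; if you just want the shape of it, here's a minimal sketch (the venv name and layout are assumptions, and the wheel URLs are the gfx110x build from scottt's release page, so trust the gist over this):

```bat
@echo off
rem Hedged sketch of an install script: create a venv, install the ROCm
rem alpha wheels, then ComfyUI's own requirements.
rem Wheel URLs may go stale; check the release page for current ones.
py -3.11 -m venv venv
call venv\Scripts\activate.bat
python -m pip install --upgrade pip
pip install ^
  https://github.com/scottt/rocm-TheRock/releases/download/v6.5.0rc-pytorch-gfx110x/torch-2.7.0a0+rocm_git3f903c3-cp311-cp311-win_amd64.whl ^
  https://github.com/scottt/rocm-TheRock/releases/download/v6.5.0rc-pytorch-gfx110x/torchaudio-2.7.0a0+52638ef-cp311-cp311-win_amd64.whl ^
  https://github.com/scottt/rocm-TheRock/releases/download/v6.5.0rc-pytorch-gfx110x/torchvision-0.22.0+9eb57cd-cp311-cp311-win_amd64.whl
pip install -r requirements.txt
```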
Run it:
install-rock
If Git wasn't found, you're done. Go outside.
Save time and disk space. Reuse your existing models, input, output, and users directories by running:
This makes symbolic links so you're not re-downloading 80GB of the same crap across five different comfy installs.
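The link script itself isn't reproduced here, but the idea looks roughly like this. A minimal sketch using mklink, assuming your existing install lives at ..\comfy-old (adjust the path, and run it from an elevated prompt inside comfy-rock, since mklink wants admin rights or Developer Mode):

```bat
@echo off
rem Hedged sketch: swap the fresh directories for symlinks pointing at an
rem existing install. "..\comfy-old" is an assumption -- use your real path.
for %%d in (models input output users) do (
    rem Remove the empty directory the clone created, then link it.
    if exist "%%d" rd /s /q "%%d"
    mklink /d "%%d" "..\comfy-old\%%d"
)
```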
You could link custom_nodes too, but that's just asking for pain. Instead:
- Copy everything manually (except __pycache__)
- Run this script from inside custom_nodes:
This removes compiled files and re-runs requirements.txt as needed.
If a node causes trouble, just delete it and reinstall via the ComfyUI Node Manager.
Use this batch file to launch ComfyUI:
It supports extra arguments. So run it like this:
comfy-rock --normalvram
Don't mess with --attention because aotriton is baked in. If you're curious: ROCm aotriton
Follow the instructions at WAN2GP, but do not install their torch packages.
Instead, run this:
pip install ^
https://github.com/scottt/rocm-TheRock/releases/download/v6.5.0rc-pytorch-gfx110x/torch-2.7.0a0+rocm_git3f903c3-cp311-cp311-win_amd64.whl ^
https://github.com/scottt/rocm-TheRock/releases/download/v6.5.0rc-pytorch-gfx110x/torchaudio-2.7.0a0+52638ef-cp311-cp311-win_amd64.whl ^
https://github.com/scottt/rocm-TheRock/releases/download/v6.5.0rc-pytorch-gfx110x/torchvision-0.22.0+9eb57cd-cp311-cp311-win_amd64.whl
Yes, those links may go stale. Go update them yourself from the main release page.
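Since the wheels go stale, it's worth a quick smoke test after installing. A hedged check that only reports and fixes nothing (note these ROCm builds still answer through the torch.cuda namespace):

```python
import importlib.util

def torch_status() -> str:
    """Report whether torch imports and whether it can see a GPU,
    without hard-failing on a machine where torch isn't installed."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    gpu = "visible" if torch.cuda.is_available() else "NOT visible"
    return f"torch {torch.__version__}, GPU {gpu}"

if __name__ == "__main__":
    print(torch_status())
```

If it says the GPU is NOT visible, re-check that you grabbed the wheel matching your gfx target before blaming anything else.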
- The aotriton backend is about 10% faster than ZLUDA
- CPU clip/text encoding is glacial; bring snacks
- DisTorch works, but leaks memory like a faithless ex
- If you see an error like:
MIOpen Error: D:/jam/TheRock/ml-libs/MIOpen/src/ocl/convolutionocl.cpp:275: No suitable algorithm was found to execute the required convolution
This means: use the VAE Decode (Tiled) node. Seriously. That one took an entire wasted day to figure out.
This whole setup is barely held together with willpower, git, and contempt. But it works. Mostly. If it breaks, you're on your own. If it works, act like you meant it.