Conversation

@nicolaschan
I tried running this model on an RTX 4090 with 24GB of VRAM, but the model does not fit. I ran into two issues with the current script, which this PR resolves:

  • There is an attempt to check for lower-VRAM GPUs, but the check happens after the `.to("cuda")` call has already loaded the weights into VRAM, so an OOM exception is raised before the low-VRAM check is ever reached.
  • Even with 24GB of VRAM, model offload runs out of memory, so we should use sequential offload instead. This slows down inference but brings peak memory usage down to ~2GB (see the sketch after this list).
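For reference, here is a minimal sketch of the intended change, assuming a diffusers-style pipeline. The pipeline class, model id, and VRAM threshold below are illustrative placeholders, not taken from the original script:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "some/model-id",           # placeholder model id
    torch_dtype=torch.float16,
)

# Check available VRAM *before* any .to("cuda") call, so we never attempt to
# load the full weights onto a GPU that cannot hold them.
total_vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3

if total_vram_gb < 32:  # hypothetical threshold; 24GB cards still OOM with model offload
    # Sequential offload moves one submodule at a time onto the GPU, trading
    # inference speed for a much lower peak memory footprint (~2GB observed).
    pipe.enable_sequential_cpu_offload()
else:
    pipe.to("cuda")
```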
