Description
🐛 Bug
[rank1]: ValueError: Attention bias and Query/Key/Value should be on the same device
[rank1]: query.device: cuda:1
[rank1]: attn_bias : cuda:0
Command
To Reproduce
Steps to reproduce the behavior:
out = xformers.ops.fmha.memory_efficient_attention(q, k, v, attn_bias=attn_bias)
The code works fine on versions 0.0.27.dev840 and 0.0.26.post1. After upgrading to the new version, the same call fails with the error above.
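The error suggests the new version added a stricter pre-flight device check between `attn_bias` and the query tensor. A minimal sketch of that check and of the usual workaround (moving the bias to the query's device before the call) is below; `FakeTensor` and `check_same_device` are hypothetical stand-ins for illustration, not xformers internals:

```python
# Hypothetical sketch of the device-consistency check that raises the
# reported ValueError, plus the workaround of moving attn_bias onto the
# query's device. FakeTensor mimics the minimal torch.Tensor surface used.
class FakeTensor:
    def __init__(self, device):
        self.device = device

    def to(self, device):
        # Like torch.Tensor.to: returns a copy on the target device.
        return FakeTensor(device)


def check_same_device(query, attn_bias):
    """Raise if attn_bias lives on a different device than the query."""
    if attn_bias is not None and attn_bias.device != query.device:
        raise ValueError(
            "Attention bias and Query/Key/Value should be on the same device\n"
            f"  query.device: {query.device}\n"
            f"  attn_bias   : {attn_bias.device}"
        )


q = FakeTensor("cuda:1")
attn_bias = FakeTensor("cuda:0")

# Workaround before calling memory_efficient_attention:
# move the bias to the query's device.
attn_bias = attn_bias.to(q.device)
check_same_device(q, attn_bias)  # no longer raises
```

With real tensors the workaround is simply `attn_bias = attn_bias.to(q.device)` before calling `memory_efficient_attention`; whether the stricter check in the new release is intentional or a regression is what this issue asks.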
Expected behavior
Environment
Please copy and paste the output from the
environment collection script from PyTorch
(or fill out the checklist below manually).
You can run the script with:
# For security purposes, please check the contents of collect_env.py before running it.
python -m torch.utils.collect_env
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (conda, pip, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
Additional context