
Conversation

@edwardzjl (Contributor) commented on Feb 18, 2025:

The attention bias in vLLM's xformers backend is currently initialized on the default device, rather than the device of the Q/K/V tensors:

attn_bias = BlockDiagonalMask.from_seqlens(
    attn_metadata.seq_lens, attn_metadata.encoder_seq_lens)

Here is how xformers decides which device to use:

https://github.com/facebookresearch/xformers/blob/8d91ce05a2f6a5ae059593922a631b9ff325b134/xformers/ops/fmha/attn_bias.py#L742:

class BlockDiagonalMask(AttentionBias):
    ...
    @classmethod
    def from_seqlens(
        cls,
        q_seqlen: Sequence[int],
        kv_seqlen: Optional[Sequence[int]] = None,
        *,
        device: Optional[torch.device] = None,
    ) -> "BlockDiagonalMask":
        ...
        device = _get_default_bias_device(device)

https://github.com/facebookresearch/xformers/blob/8d91ce05a2f6a5ae059593922a631b9ff325b134/xformers/ops/fmha/attn_bias.py#L90

def _get_default_bias_device(device: Optional[torch.device] = None) -> torch.device:
    if device is None:
        if torch.cuda.is_available():
            return torch.device("cuda")
        return torch.device("cpu")
    return device
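
Note that a bare torch.device("cuda") resolves to the current CUDA device, which is cuda:0 unless torch.cuda.set_device has been called in that process. A minimal sketch of how the mismatch arises (hypothetical tensor shapes; assumes a machine with at least two GPUs, and inspects the bias's internal seqstart tensor to show where it was placed):

import torch
from xformers.ops.fmha.attn_bias import BlockDiagonalMask

# Q/K/V live on a non-default GPU, e.g. when another library pins vLLM to it.
query = torch.randn(1, 8, 4, 16, device="cuda:1")

# No device is passed, so xformers falls back to torch.device("cuda"), i.e. cuda:0.
attn_bias = BlockDiagonalMask.from_seqlens([8])

print(query.device)                         # cuda:1
print(attn_bias.q_seqinfo.seqstart.device)  # cuda:0 -> mismatch when attention runs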

This becomes problematic when vLLM is used in conjunction with libraries like trl for GRPO training. In such cases, vLLM might be assigned to run on a specific GPU (e.g., the next available GPU after those used for training, which is the default behaviour of trl).

For example, if I have 8 GPUs and use cuda:0 to cuda:6 for GRPO training, vLLM will then be assigned to cuda:7. However, the current attention bias initialization will place the bias on cuda:0, leading to the following error:

[rank0]: ValueError: Attention bias and Query/Key/Value should be on the same device
[rank0]:   query.device: cuda:7
[rank0]:   attn_bias   : cuda:0

This PR should resolve the issue.
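
For reference, a minimal sketch of the kind of change involved, passing the device of the query tensor explicitly instead of relying on the xformers fallback (variable names follow the snippet above; the merged code may differ):

# Build the bias on the same device as the Q/K/V tensors rather than
# letting xformers fall back to torch.device("cuda").
attn_bias = BlockDiagonalMask.from_seqlens(
    attn_metadata.seq_lens,
    attn_metadata.encoder_seq_lens,
    device=query.device,
)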

github-actions (bot) commented:

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@edwardzjl force-pushed the attn_bias_device branch 3 times, most recently from f466344 to 87824ed, on February 18, 2025 at 08:46
@edwardzjl (Contributor, Author) commented:

The pre-commit CI passed once, but failed after I signed off and force-pushed. I'm not sure why.

@edwardzjl (Contributor, Author) commented:

This could also resolve issues like huggingface/open-r1#278 and facebookresearch/xformers#1064 (comment).

@Isotr0py enabled auto-merge (squash) on February 24, 2025 at 04:58
@github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Feb 24, 2025
@simon-mo merged commit 75e9d49 into vllm-project:main on Feb 25, 2025 (57 of 60 checks passed)
@dipta007 commented:
Using vllm==0.7.3, I'm still having this issue. I think the fix is not released yet.

@Roxanne527 commented:
Same question, how do I solve it?

@edwardzjl (Contributor, Author) commented:

> Same question, how do I solve it?

You need to either install vLLM from the main branch or wait for the next release.
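
For example, one way to install from the latest main, assuming your environment can compile vLLM from source:

pip install git+https://github.com/vllm-project/vllm.git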
