
tlparse should report symint specialization in tensor subclass torch_dispatch #117

@zou3519

Description


Today, if a specialization for a symint happens inside a tensor subclass's torch_dispatch, we never actually report the specialization site in the call stack.

Instead, we see the user call stack, which can be confusing. Repro: run tlparse on https://gist.github.com/zou3519/476a049ffd8070973dd31a4eba9f2ca7 ; this produces the following tlparse output:

[Screenshot: tlparse output attributing the specialization to the user's torch.cos call]

Now, every time I see this, I think the stack trace is wrong: why is torch.cos causing a specialization? Then I realize that Tensor subclasses are a thing. I wish it would tell me what the Tensor subclass is (TwoTensor in this case!) or the line of code in the Tensor subclass that does the specialization.

Not sure if this is possible, but this would really help with debugging vLLM (where Tensor subclasses are now abundant).
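For readers unfamiliar with the pattern, here is a minimal sketch of the kind of subclass that triggers this. It is not the gist's code: `WrapperTensor` and the even-size branch are hypothetical, and the sketch assumes the current `torch.Tensor._make_wrapper_subclass` / `__torch_dispatch__` APIs. The point is that the size check lives inside `__torch_dispatch__`, so under torch.compile with dynamic shapes the specialization happens there, not at the user's `torch.cos` call that the reported stack points to.

```python
import torch
from torch.utils._pytree import tree_map

class WrapperTensor(torch.Tensor):
    """Hypothetical minimal wrapper subclass (stand-in for TwoTensor)."""

    @staticmethod
    def __new__(cls, elem):
        return torch.Tensor._make_wrapper_subclass(cls, elem.shape, dtype=elem.dtype)

    def __init__(self, elem):
        self.elem = elem

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        unwrap = lambda t: t.elem if isinstance(t, WrapperTensor) else t
        out = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))
        # Branching on a (possibly symbolic) size here is what would
        # specialize the symint under torch.compile -- yet the reported
        # stack shows only the user's op (e.g. torch.cos).
        if out.numel() % 2 == 0:
            out = out + 0
        return tree_map(
            lambda t: WrapperTensor(t) if isinstance(t, torch.Tensor) else t, out
        )
```

In eager mode the branch is harmless; the confusion only arises once dynamic shapes make `out.numel()` symbolic and the comparison installs a guard.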

cc @chauhang @penguinwu @ezyang @bobrenjc93
