System Info
Irrelevant
How would you like to use TensorRT-LLM
I would like to use the XQA kernel in a personal project. However, the file carries an unexpected license header:
https://github.com/NVIDIA/TensorRT-LLM/blob/main/cpp/kernels/xqa/mla_sm120.cu#L3
I have located this supposed license here:
https://github.com/NVIDIA/TensorRT-LLM/blob/main/cpp/kernels/fmha_v2/NVIDIA%20TensorRT%20Source%20Code%20License%20Agreement.pdf
That license was clearly never intended for a public repository; it appears to be an older license for specific partners from before the code was open-sourced, especially since the agreement goes far beyond anything that could be accepted via browsewrap:
(and much more such wording)
This also conflicts with the stated Apache license of the project:
https://github.com/NVIDIA/TensorRT-LLM/blob/main/LICENSE
NVIDIA has also recently integrated XQA into another open-source project, under the same Apache License:
flashinfer-ai/flashinfer#1503
Am I correct in assuming that XQA, as posted publicly on GitHub, is licensed under the Apache License rather than the older internal license?
Before submitting a new issue...
- Make sure you already searched for relevant issues, and checked the documentation and examples for answers to frequently asked questions.