[Feature]: Support speculative decoding #3945
base: main
Conversation
Force-pushed from 28ddcc1 to 2c18726.
lmdeploy/pytorch/engine/engine.py (Outdated)

    self.speculative_config = speculative_config

    if speculative_config is not None:
        engine_config.prefill_interval = 16
Review: Why?
Reply: This change was used for debugging and can be removed.
    device_type=engine_config.device_type,
    distributed_executor_backend=engine_config.distributed_executor_backend,
    dtype=engine_config.dtype,
    speculative_config=speculative_config,
Review: speculative_config is defined outside the PyTorch engine. It is not a good design to use it in any module besides Engine.
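A minimal sketch of the direction this comment points at, assuming (not taken from the PR) that Engine translates the externally defined SpeculativeConfig into engine-internal fields, so no inner module ever receives the outer config object:

```python
# Hedged sketch: the class names and fields below are assumptions for illustration,
# not the PR's actual definitions.
from dataclasses import dataclass


@dataclass
class SpeculativeConfig:
    """User-facing config defined outside the PyTorch engine (assumed shape)."""
    method: str = 'deepseek_mtp'
    model: str = ''
    num_speculative_tokens: int = 3


@dataclass
class InternalSpecOptions:
    """Hypothetical engine-internal options derived from SpeculativeConfig."""
    num_spec_tokens: int = 0
    spec_method: str = ''
    draft_model_path: str = ''


class Engine:
    def __init__(self, speculative_config: SpeculativeConfig = None):
        # Engine is the only component that reads SpeculativeConfig; everything
        # downstream (executor, model agent, scheduler) sees only the internal options.
        opts = InternalSpecOptions()
        if speculative_config is not None:
            opts.num_spec_tokens = speculative_config.num_speculative_tokens
            opts.spec_method = speculative_config.method
            opts.draft_model_path = speculative_config.model
        self.spec_options = opts
```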
lmdeploy/pytorch/engine/engine.py (Outdated)

    # input ids
    token_ids = [msg.token_ids for msg in messages]

    # spec decode
Review: Remove the comment.
lmdeploy/pytorch/engine/engine.py (Outdated)

    def _debug_spec_stats(self, batched_outputs: BatchedOutputs, is_decoding: bool = False):
        """Debugging spec stats."""
        is_debugging = True
Review: Do we still need this after the feature is released?
Reply: It's just for debugging.
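If the stats are worth keeping after release, one option (a sketch under an assumed environment-variable name, not the PR's code) is to gate them behind a flag so the hard-coded `is_debugging = True` path never runs in normal serving:

```python
import os

# Assumed flag name, for illustration only.
SPEC_DEBUG_ENABLED = os.getenv('LMDEPLOY_SPEC_DEBUG', '0') == '1'


def debug_spec_stats(batched_outputs, is_decoding: bool = False):
    """Collect speculative-decoding stats only when explicitly enabled."""
    if not SPEC_DEBUG_ENABLED:
        return
    # accumulate / log acceptance-rate statistics here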
    logger.warning(f'Overriding HF config with {hf_overrides}')
    override_hf_config(model_config.hf_config, hf_overrides)

    # for serialization of transformers modules
Review: This might not work.
Reply: It works in the TP case on one node, but has not been tested in the DP case on multiple nodes.
    inputs: ModelInputs,
    cache_engine: CacheEngine,
    stream: torch.cuda.Stream = None,
    output_position_ids: bool = False,
Review: Outputting position_ids is cheap; we can always output it.
    f'batch_size={inputs.seq_length.size(0)} '
    f'num_tokens={inputs.input_ids.size(-1)} '
    f'is_decoding={inputs.is_decoding}')
    logger.info(f'<ForwardTask> rank[{rank}]: '
Review: Logging this at info level would be too verbose.
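A sketch of the change the comment implies (not necessarily the final code), reusing the names from the snippet above: demote the per-step forward-task log to debug level so it only appears when debug logging is enabled.

```python
# Same message, lower level; only emitted when the logger is configured for DEBUG.
logger.debug(f'<ForwardTask> rank[{rank}]: '
             f'batch_size={inputs.seq_length.size(0)} '
             f'num_tokens={inputs.input_ids.size(-1)} '
             f'is_decoding={inputs.is_decoding}')
```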
    def __init__(self,
                 model_path: str,
                 engine_config: PytorchEngineConfig = None,
                 speculative_config: SpeculativeConfig = None,
Review: Does RayEngine support speculative_config?
Reply: Yes.
    return k_states, v_states


    @triton.testing.perf_report(
Review: Remove the benchmark.
    input_buffers['position_ids'] = torch.zeros((1, max_tokens), dtype=torch.int64, device=device)
    if getattr(self.config, 'use_flash_mla', False) is True:
        import flash_mla
    seqlens_dtype = torch.int64
Review: When would we need int64?
Reply: The default is int64, while MLA and FA3 need int32.
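A minimal sketch of the dtype choice described in the reply (the helper name and flags are illustrative, not the PR's code): int64 as the engine default, int32 where the flash-MLA / FlashAttention-3 kernels require it.

```python
import torch


def select_seqlens_dtype(use_flash_mla: bool = False, use_fa3: bool = False) -> torch.dtype:
    """Pick the dtype for the sequence-length buffers per attention backend."""
    if use_flash_mla or use_fa3:
        # flash-MLA and FlashAttention-3 kernels expect 32-bit sequence lengths.
        return torch.int32
    # Engine default everywhere else.
    return torch.int64


q_seqlens = torch.zeros(8, dtype=select_seqlens_dtype(use_flash_mla=True))
```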
    """Get max tokens."""
    num_tokens = input_ids.size(1)
    orig_batch = q_seqlens.size(0)
    if num_tokens == orig_batch:
Review: I do not think passing a tensor here is a good idea.
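A sketch of the interface change the comment suggests (hypothetical signatures, illustrative logic only): the caller already knows these sizes, so plain ints can be passed instead of shipping tensors into the helper.

```python
import torch


def get_max_tokens_from_tensors(input_ids: torch.Tensor, q_seqlens: torch.Tensor) -> int:
    """Current style: sizes are re-derived from tensors inside the helper."""
    num_tokens = input_ids.size(1)
    orig_batch = q_seqlens.size(0)
    if num_tokens == orig_batch:
        ...  # decoding-only branch (real logic elided)
    return num_tokens


def get_max_tokens(num_tokens: int, batch_size: int) -> int:
    """Suggested style: the same decision, made from plain Python ints."""
    if num_tokens == batch_size:
        ...  # decoding-only branch (real logic elided)
    return num_tokens
```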
lmdeploy/pytorch/paging/scheduler.py (Outdated)

    self.scheduler_config = scheduler_config
    self.cache_config = cache_config

    self.num_spec_tokens = num_spec_tokens
Review: Is this value actually used anywhere?
    @@ -0,0 +1,9 @@
    # Copyright (c) OpenMMLab. All rights reserved.

    from .deepseek_mtp import DeepseekMTP  # noqa F401
Review: I don't want models without spec decoding to load these modules.
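A minimal sketch of one way to address this (the registry and names are assumptions, not the PR's layout): import the speculative draft-model classes lazily, so engines that never enable spec decoding do not pull them in at package-import time.

```python
import importlib

# Assumed mapping of spec-decoding methods to their (module, class) locations.
_SPEC_MODEL_REGISTRY = {
    'deepseek_mtp': ('.deepseek_mtp', 'DeepseekMTP'),
}


def get_spec_model_class(method: str, package: str = 'lmdeploy.pytorch.models'):
    """Resolve the draft-model class only when speculative decoding is enabled."""
    module_name, class_name = _SPEC_MODEL_REGISTRY[method]
    module = importlib.import_module(module_name, package=package)
    return getattr(module, class_name)
```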
    def get_logits(self, hidden_states: torch.Tensor):
        """Get logits of model output."""
        draft_model = self.model
        if not isinstance(draft_model, torch.nn.Module):
Review: graph_runner already exposes the model's get_logits.
Reply: Yes, but EAGLE does not have get_logits while EAGLE3 does. Based on graph_runner's get_logits method alone, we cannot tell the two apart; that is why we check here whether the original model defines get_logits.
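A sketch of the check described in the reply (hedged: the fallback path below is an assumption, not the PR's code): ask the unwrapped draft model itself whether it defines get_logits, since the graph runner exposes get_logits uniformly for both EAGLE and EAGLE3.

```python
import torch


def draft_get_logits(draft_model: torch.nn.Module, hidden_states: torch.Tensor,
                     target_lm_head: torch.nn.Module) -> torch.Tensor:
    """Route hidden states through the draft model's own logits head when it has one."""
    if hasattr(type(draft_model), 'get_logits'):
        # EAGLE3-style draft: it carries its own get_logits.
        return draft_model.get_logits(hidden_states)
    # EAGLE-style draft: no get_logits of its own; fall back to a provided LM head
    # (assumed here to be the target model's head).
    return target_lm_head(hidden_states)
```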
    expected_output_token_ids = torch.tensor([[0, 1, 2], [0, -1, -1], [1, -1, -1]], dtype=torch.long).cuda()

    draft_probs = None
    target_probs, draft_token_ids, bonus_token_ids, max_spec_len = torch.load('tmp.pt')
Review: This test requires tmp.pt; do not place it here. Could we add it as a unit test instead?
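A sketch of how this could move into a self-contained unit test (the rejection-sampling entry point is assumed, so its call is left commented out): build the inputs with torch instead of loading tmp.pt from disk.

```python
import pytest
import torch


@pytest.mark.parametrize('max_spec_len', [2])
def test_rejection_sampling_inputs(max_spec_len):
    batch_size, vocab_size = 3, 8
    torch.manual_seed(0)
    # Deterministic stand-ins for what tmp.pt used to provide.
    target_probs = torch.softmax(torch.randn(batch_size, max_spec_len + 1, vocab_size), dim=-1)
    draft_token_ids = torch.randint(0, vocab_size, (batch_size, max_spec_len))
    bonus_token_ids = torch.randint(0, vocab_size, (batch_size, 1))
    draft_probs = None

    # Sanity check on the constructed inputs.
    assert torch.allclose(target_probs.sum(-1), torch.ones(batch_size, max_spec_len + 1))

    # output_token_ids = rejection_sample(target_probs, draft_token_ids,
    #                                     bonus_token_ids, draft_probs)  # assumed API
    # assert output_token_ids.shape == (batch_size, max_spec_len + 1)
```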
Motivation
Add support for speculative decoding in the PyTorch engine (including DeepSeek MTP and EAGLE/EAGLE3 draft models).
Examples
pipeline
serving
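A hedged sketch of what pipeline usage might look like (the SpeculativeConfig import path, its fields, and the speculative_config keyword on pipeline are assumptions based on the engine signatures in this PR, not confirmed API):

```python
# All spec-decoding-specific names below are assumptions for illustration.
from lmdeploy import PytorchEngineConfig, pipeline
from lmdeploy.pytorch import SpeculativeConfig  # assumed import path

spec_cfg = SpeculativeConfig(
    method='deepseek_mtp',            # assumed field: draft strategy (e.g. MTP / EAGLE)
    model='/path/to/draft/model',     # assumed field: draft model weights
    num_speculative_tokens=3,         # assumed field: tokens proposed per step
)

pipe = pipeline('/path/to/target/model',
                backend_config=PytorchEngineConfig(tp=1),
                speculative_config=spec_cfg)   # assumed keyword added by this PR
print(pipe(['Hello, speculative decoding!']))
```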
BC-breaking (Optional)
Does the modification introduce changes that break backward compatibility of downstream repositories?
If so, please describe how it breaks compatibility and how downstream projects should modify their code to remain compatible with this PR.
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.
Checklist