[V1][Spec Decoding] Include bonus tokens in mean acceptance length #17908
Conversation
I'm told the acceptance length metric - as reported by V0, in all our V1 benchmarking to date, in other inference engines, and in relevant papers - includes bonus tokens, so we should add 1 to our current calculation.

See #17010 (comment) for context.

Also updated the v1 spec decoding metrics design doc with this detail.

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
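To make the change concrete, here is a minimal sketch of the two definitions; the counter names and numbers are illustrative, not vLLM's internals:

```python
# Illustrative counters (invented numbers, not vLLM internals).
num_drafts = 100           # verification steps run against the target model
num_accepted_tokens = 190  # draft tokens accepted across all steps

# Old definition: mean number of accepted *draft* tokens per step.
mean_accepted = num_accepted_tokens / num_drafts  # 1.90

# New definition: every verification step also emits one bonus token
# from the target model, even when all draft tokens are rejected.
mean_acceptance_length = 1 + mean_accepted  # 2.90
```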
```diff
@@ -118,8 +118,8 @@ def main():
         acceptance_counts[step] += count

     print("-" * 50)
-    print(f"mean acceptance length: \
-{sum(acceptance_counts) / acceptance_counts[0]:.2f}")
+    print(f"mean acceptance length (including bonus tokens): \
+{1 + (sum(acceptance_counts) / acceptance_counts[0]):.2f}")
```
I thought the bonus token was already accounted for in `acceptance_counts[0]`? Is that not the case? If so, why does `acceptance_counts` have length `num_spec_tokens + 1`?
Not as implemented here:

```python
num_accepted_tokens_per_pos=[0] * num_spec_tokens

for i in range(num_accepted_tokens):
    self.num_accepted_tokens_per_pos[i] += 1
```
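A toy trace of that counter (assuming `num_spec_tokens = 3`) shows that position 0 only counts drafts whose first speculative token was accepted; there is no slot for the bonus token:

```python
num_spec_tokens = 3
num_accepted_tokens_per_pos = [0] * num_spec_tokens  # one slot per draft position

# Three verification steps accepting 3, 1, and 0 draft tokens respectively.
for num_accepted_tokens in (3, 1, 0):
    for i in range(num_accepted_tokens):
        num_accepted_tokens_per_pos[i] += 1

print(num_accepted_tokens_per_pos)  # [2, 1, 1]: no bonus-token entry
```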
#16367 did encode `num_drafts` as a count for the 0th position AFAIR, that might be what you're thinking of.