Commit 10fd1d7

[Bugfix] fix error due to an uninitialized tokenizer when using skip_tokenizer_init with num_scheduler_steps (#9276)
Signed-off-by: changjun.lee <pord7457@gmail.com>
1 parent 52b4f4a commit 10fd1d7

File tree

1 file changed (+1, −1 lines)


vllm/engine/output_processor/multi_step.py (1 addition, 1 deletion)

@@ -178,7 +178,7 @@ def _process_seq_outputs(self, seq: Sequence,
         # generates a fixed number of tokens without evaluating stopping
         # conditions within the block. This can cause an eos token to be
         # unintentionally ignored.
-        if not sampling_params.ignore_eos:
+        if not sampling_params.ignore_eos and self.detokenizer:
             eos_token_id = self.get_tokenizer_for_seq(seq).eos_token_id
             # Avoiding .index calls as exception throwing in the happy path
             # is expensive.
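The failure mode the patch guards against can be sketched with a minimal standalone example (simplified stand-in classes, not vLLM's actual `OutputProcessor` API): when the engine runs with `skip_tokenizer_init=True`, the detokenizer is never created, so any path that fetches a tokenizer (such as the eos check) must first verify a detokenizer exists.

```python
class SamplingParams:
    """Simplified stand-in for vLLM's SamplingParams."""
    def __init__(self, ignore_eos: bool = False):
        self.ignore_eos = ignore_eos


class OutputProcessor:
    """Simplified stand-in for the multi-step output processor."""
    def __init__(self, detokenizer=None):
        # detokenizer is None when the engine was started with
        # skip_tokenizer_init=True.
        self.detokenizer = detokenizer

    def get_tokenizer_for_seq(self, seq):
        # Without the guard below, this would be reached with an
        # uninitialized tokenizer and raise.
        if self.detokenizer is None:
            raise RuntimeError("tokenizer was never initialized")
        return self.detokenizer

    def should_check_eos(self, sampling_params: SamplingParams) -> bool:
        # The fix: require a detokenizer in addition to ignore_eos=False
        # before attempting to look up the eos token id.
        return (not sampling_params.ignore_eos) and bool(self.detokenizer)


# skip_tokenizer_init=True case: the eos check is safely skipped.
proc = OutputProcessor(detokenizer=None)
print(proc.should_check_eos(SamplingParams(ignore_eos=False)))  # False
```

With a detokenizer present, `should_check_eos` still returns `True` for `ignore_eos=False`, so normal eos handling is unchanged; the extra condition only short-circuits the tokenizer lookup when nothing was initialized.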
