Commit ba1713a

luccafong authored and heyselbi committed
[model] make llama4 compatible with pure dense layers (vllm-project#17315)
Signed-off-by: Lucia Fang <fanglu@fb.com>
1 parent 1fe447d commit ba1713a

File tree

1 file changed: +2 −2 lines changed


vllm/model_executor/models/llama4.py

+2 −2
@@ -273,8 +273,8 @@ def __init__(
             cache_config=cache_config,
             prefix=f"{prefix}.self_attn",
         )
-        is_moe_layer = (self.layer_idx +
-                        1) % config.interleave_moe_layer_step == 0
+        is_moe_layer = config.interleave_moe_layer_step > 0 and (
+            self.layer_idx + 1) % config.interleave_moe_layer_step == 0
         if is_moe_layer:
             self.feed_forward = Llama4MoE(
                 config=config,
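
Why the guard matters: a Llama 4 config with interleave_moe_layer_step = 0 describes a pure dense model with no MoE layers at all. Under the old expression the modulo would raise ZeroDivisionError; with the patch the condition short-circuits to False and every layer takes the dense feed-forward path. The sketch below is a minimal, self-contained illustration of the patched condition only, using stand-in SimpleNamespace configs rather than vLLM's real config classes.

# Minimal sketch of the patched condition; the SimpleNamespace configs
# are illustrative stand-ins, not vLLM's actual config objects.
from types import SimpleNamespace


def is_moe_layer(layer_idx: int, config) -> bool:
    # Patched logic: check interleave_moe_layer_step > 0 first, so a
    # pure dense config (step == 0) never reaches the modulo.
    return (config.interleave_moe_layer_step > 0
            and (layer_idx + 1) % config.interleave_moe_layer_step == 0)


moe_cfg = SimpleNamespace(interleave_moe_layer_step=2)    # MoE on every 2nd layer
dense_cfg = SimpleNamespace(interleave_moe_layer_step=0)  # pure dense variant

print([is_moe_layer(i, moe_cfg) for i in range(4)])    # [False, True, False, True]
print([is_moe_layer(i, dense_cfg) for i in range(4)])  # [False, False, False, False]
# The old expression, (layer_idx + 1) % interleave_moe_layer_step == 0,
# would raise ZeroDivisionError for dense_cfg.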

Comments (0)