
Commit 2b432ac

a-r-r-o-w and Nerogar authored and committed
Fix hunyuan video attention mask dim (#10454)
* fix
* add coauthor

Co-authored-by: Nerogar <nerogar@arcor.de>
1 parent 263b973 commit 2b432ac

File tree

1 file changed: 1 addition, 0 deletions

src/diffusers/models/transformers/transformer_hunyuan_video.py

Lines changed: 1 addition & 0 deletions
@@ -721,6 +721,7 @@ def forward(
 
         for i in range(batch_size):
             attention_mask[i, : effective_sequence_length[i], : effective_sequence_length[i]] = True
+        attention_mask = attention_mask.unsqueeze(1)  # [B, 1, N, N], for broadcasting across attention heads
 
         # 4. Transformer blocks
         if torch.is_grad_enabled() and self.gradient_checkpointing:
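
The one-line change gives the boolean attention mask a singleton head dimension so that it broadcasts against per-head attention scores of shape [B, H, N, N]. Below is a minimal, runnable sketch of that broadcasting behaviour, not the actual HunyuanVideo forward pass: the tensor sizes and the effective_lengths name are invented for illustration.

import torch
import torch.nn.functional as F

# Hypothetical sizes for illustration only.
batch_size, num_heads, seq_len, head_dim = 2, 4, 8, 16
effective_lengths = torch.tensor([5, 8])  # stand-in for effective_sequence_length

# Boolean mask: each sample may only attend within its own effective length.
attention_mask = torch.zeros(batch_size, seq_len, seq_len, dtype=torch.bool)
for i in range(batch_size):
    attention_mask[i, : effective_lengths[i], : effective_lengths[i]] = True

query = torch.randn(batch_size, num_heads, seq_len, head_dim)
key = torch.randn(batch_size, num_heads, seq_len, head_dim)
value = torch.randn(batch_size, num_heads, seq_len, head_dim)

# A [B, N, N] mask is not broadcastable against [B, H, N, N] attention scores
# unless B happens to equal H, so passing it as-is would raise a broadcasting
# error here. Adding a singleton head dimension makes the intent explicit.
attention_mask = attention_mask.unsqueeze(1)  # [B, 1, N, N]

out = F.scaled_dot_product_attention(query, key, value, attn_mask=attention_mask)
print(out.shape)  # torch.Size([2, 4, 8, 16])
# Rows past a sample's effective length are fully masked; their outputs are
# padding and not meaningful.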
