
Commit 2ffbb88

a-r-r-o-w, 963658029, and SHYuanBest authored
[training] CogVideoX-I2V LoRA (#9482)
* update
* update
* update
* update
* update
* add coauthor
  Co-Authored-By: yuan-shenghai <963658029@qq.com>
* add coauthor
  Co-Authored-By: Shenghai Yuan <140951558+SHYuanBest@users.noreply.github.com>
* update
  Co-Authored-By: yuan-shenghai <963658029@qq.com>
* update

---------

Co-authored-by: yuan-shenghai <963658029@qq.com>
Co-authored-by: Shenghai Yuan <140951558+SHYuanBest@users.noreply.github.com>
1 parent d40da7b commit 2ffbb88

File tree

4 files changed: +1656 −10 lines


examples/cogvideo/README.md

Lines changed: 11 additions & 2 deletions
@@ -10,6 +10,11 @@ In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-de

At the moment, LoRA finetuning has only been tested for [CogVideoX-2b](https://huggingface.co/THUDM/CogVideoX-2b).

+> [!NOTE]
+> The scripts for CogVideoX come with limited support and may not be fully compatible with different training techniques. They are not feature-rich either and simply serve as minimal examples of finetuning to take inspiration from and improve upon.
+>
+> A repository containing memory-optimized finetuning scripts with support for multiple resolutions, dataset preparation, captioning, etc. is available [here](https://github.com/a-r-r-o-w/cogvideox-factory), which will be maintained jointly by the CogVideoX and Diffusers teams.
+
## Data Preparation

The training scripts accept data in two formats.
@@ -132,6 +137,8 @@ Assuming you are training on 50 videos of a similar concept, we have found 1500-
- 1500 steps on 50 videos would correspond to `30` training epochs
- 4000 steps on 100 videos would correspond to `40` training epochs

+The following bash script launches training for text-to-video LoRA.
+
```bash
#!/bin/bash

@@ -172,6 +179,8 @@ accelerate launch --gpu_ids $GPU_IDS examples/cogvideo/train_cogvideox_lora.py \
  --report_to wandb
```

+To launch image-to-video finetuning, run the `train_cogvideox_image_to_video_lora.py` file instead. Additionally, you will have to pass `--validation_images` with paths to initial images corresponding to `--validation_prompts` for I2V validation to work.
+
To better track our training experiments, we're using the following flags in the command above:
* `--report_to wandb` will ensure the training runs are tracked on Weights and Biases. To use it, be sure to install `wandb` with `pip install wandb`.
* `validation_prompt` and `validation_epochs` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected.
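The image-to-video launch mentioned in the added lines above is not spelled out in this diff; the following is only a minimal sketch of what such a command could look like. The script path and the `--validation_prompt`, `--validation_images`, `--validation_epochs`, and `--report_to wandb` flags come from the surrounding text, while the model ID, paths, prompt, and the assumption that the remaining flags mirror the text-to-video command are illustrative placeholders.

```bash
#!/bin/bash
# Minimal sketch of an image-to-video LoRA training launch.
# Model ID, prompt, and file paths below are placeholders; dataset, LoRA-rank,
# and optimizer flags are omitted and assumed to match the text-to-video command above.

GPU_IDS="0"

accelerate launch --gpu_ids $GPU_IDS examples/cogvideo/train_cogvideox_image_to_video_lora.py \
  --pretrained_model_name_or_path THUDM/CogVideoX-5b-I2V \
  --validation_prompt "A panda playing a guitar in a bamboo forest" \
  --validation_images "/path/to/initial_frame.png" \
  --validation_epochs 10 \
  --report_to wandb
```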
@@ -197,8 +206,6 @@ Note that setting the `<ID_TOKEN>` is not necessary. From some limited experimen
>
> Note that our testing is not exhaustive due to limited time for exploration. Our recommendation would be to play around with the different knobs and dials to find the best settings for your data.

-<!-- TODO: Test finetuning with CogVideoX-5b and CogVideoX-5b-I2V and update scripts accordingly -->
-
## Inference

Once you have trained a LoRA model, inference can be done by simply loading the LoRA weights into the `CogVideoXPipeline`.
@@ -227,3 +234,5 @@ prompt = (
frames = pipe(prompt, guidance_scale=6, use_dynamic_cfg=True).frames[0]
export_to_video(frames, "output.mp4", fps=8)
```
+
+If you've trained a LoRA for `CogVideoXImageToVideoPipeline` instead, everything in the above example remains the same, except that you must also pass an image as the initial condition for generation.
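
As a minimal sketch of that image-to-video inference path: the model ID, LoRA checkpoint path, image path, and prompt below are placeholders rather than values from the README, and the call otherwise mirrors the text-to-video example above.

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Load the image-to-video pipeline and the trained LoRA weights
# (model ID and LoRA path are placeholder values).
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("/path/to/lora/checkpoint", adapter_name="cogvideox-i2v-lora")

# The only difference from the text-to-video example: an initial image is passed
# as the conditioning frame for generation.
image = load_image("/path/to/initial_frame.png")
prompt = "A panda playing a guitar in a bamboo forest"

frames = pipe(image=image, prompt=prompt, guidance_scale=6, use_dynamic_cfg=True).frames[0]
export_to_video(frames, "output.mp4", fps=8)
```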

0 commit comments
