From caf846834daacf17be8328b0b7af4ae643cd5819 Mon Sep 17 00:00:00 2001
From: Sayak Paul
Date: Sun, 29 Sep 2024 10:40:53 +0530
Subject: [PATCH 1/2] Update distributed_inference.md to include
 `transformer.device_map`

---
 docs/source/en/training/distributed_inference.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/training/distributed_inference.md b/docs/source/en/training/distributed_inference.md
index cd642d6aca07..17e26c145bd8 100644
--- a/docs/source/en/training/distributed_inference.md
+++ b/docs/source/en/training/distributed_inference.md
@@ -177,7 +177,7 @@ transformer = FluxTransformer2DModel.from_pretrained(
 ```
 
 > [!TIP]
-> At any point, you can try `print(pipeline.hf_device_map)` to see how the various models are distributed across devices. This is useful for tracking the device placement of the models.
+> At any point, you can try `print(pipeline.hf_device_map)` to see how the various models are distributed across devices. This is useful for tracking the device placement of the models. You can also try `print(transformer.hf_device_map)` to see how the individual `transformer` model has been sharded across devices.
 
 Add the transformer model to the pipeline for denoising, but set the other model-level components like the text encoders and VAE to `None` because you don't need them yet.
 

From 6b42bf8215dfa2a414082c1f32f330c9c63a25a6 Mon Sep 17 00:00:00 2001
From: Sayak Paul
Date: Tue, 8 Oct 2024 07:57:12 +0530
Subject: [PATCH 2/2] Update docs/source/en/training/distributed_inference.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---
 docs/source/en/training/distributed_inference.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/training/distributed_inference.md b/docs/source/en/training/distributed_inference.md
index 17e26c145bd8..0e1eb7962bf7 100644
--- a/docs/source/en/training/distributed_inference.md
+++ b/docs/source/en/training/distributed_inference.md
@@ -177,7 +177,7 @@ transformer = FluxTransformer2DModel.from_pretrained(
 ```
 
 > [!TIP]
-> At any point, you can try `print(pipeline.hf_device_map)` to see how the various models are distributed across devices. This is useful for tracking the device placement of the models. You can also try `print(transformer.hf_device_map)` to see how the individual `transformer` model has been sharded across devices.
+> At any point, you can try `print(pipeline.hf_device_map)` to see how the various models are distributed across devices. This is useful for tracking the device placement of the models. You can also try `print(transformer.hf_device_map)` to see how the transformer model is sharded across devices.
 
 Add the transformer model to the pipeline for denoising, but set the other model-level components like the text encoders and VAE to `None` because you don't need them yet.
 
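
For context, a minimal sketch of the workflow the amended tip describes, assuming a multi-GPU machine. The hunk above truncates the `FluxTransformer2DModel.from_pretrained(` call, so the `black-forest-labs/FLUX.1-dev` checkpoint id used here is an assumption, not taken from this patch:

```py
# Sketch only: the checkpoint id below is assumed; substitute your own.
import torch
from diffusers import FluxTransformer2DModel

# Shard the transformer across all available GPUs with device_map="auto".
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # assumed checkpoint
    subfolder="transformer",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Inspect the per-module device placement the sharding produced,
# as suggested by the tip these patches update.
print(transformer.hf_device_map)
```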