Commit d39f5b5

update readme
1 parent b57c052 commit d39f5b5

1 file changed: README.md (+25 -18 lines changed)
````diff
@@ -78,6 +78,9 @@ steps show the relative improvements of the checkpoints:
 
 Stable Diffusion is a latent diffusion model conditioned on the (non-pooled) text embeddings of a CLIP ViT-L/14 text encoder.
 
+
+#### Sampling Script
+
 After [obtaining the weights](#weights), link them
 ```
 mkdir -p models/ldm/stable-diffusion-v1/
````
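
As context for the conditioning sentence in the hunk above, here is a minimal sketch of obtaining non-pooled CLIP ViT-L/14 text embeddings. It assumes the `transformers` library and the `openai/clip-vit-large-patch14` checkpoint; neither appears in this diff.

```py
# Minimal sketch, not part of this commit: the transformers library and
# checkpoint name are assumptions.
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

tokens = tokenizer(
    ["a photograph of an astronaut riding a horse"],
    padding="max_length", max_length=77, return_tensors="pt",
)
# last_hidden_state holds the per-token (non-pooled) embeddings used for
# conditioning: shape (1, 77, 768) for ViT-L/14.
embeddings = text_encoder(tokens.input_ids).last_hidden_state
```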
````diff
@@ -88,24 +91,6 @@ and sample with
 python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
 ```
 
-Another way to download and sample Stable Diffusion is by using the [diffusers library](https://github.com/huggingface/diffusers/tree/main#new--stable-diffusion-is-now-fully-compatible-with-diffusers)
-```py
-# make sure you're logged in with `huggingface-cli login`
-from torch import autocast
-from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
-
-pipe = StableDiffusionPipeline.from_pretrained(
-    "CompVis/stable-diffusion-v1-3-diffusers",
-    use_auth_token=True
-)
-
-prompt = "a photo of an astronaut riding a horse on mars"
-with autocast("cuda"):
-    image = pipe(prompt)["sample"][0]
-
-image.save("astronaut_rides_horse.png")
-```
-
 By default, this uses a guidance scale of `--scale 7.5`, [Katherine Crowson's implementation](https://github.com/CompVis/latent-diffusion/pull/51) of the [PLMS](https://arxiv.org/abs/2202.09778) sampler,
 and renders images of size 512x512 (which it was trained on) in 50 steps. All supported arguments are listed below (type `python scripts/txt2img.py --help`).
 
````
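
Spelled out with the defaults that the context lines above describe, the sampling call looks roughly like this; the flag names are assumed to match `scripts/txt2img.py` and can be checked with `python scripts/txt2img.py --help`:

```
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" \
    --plms --scale 7.5 --H 512 --W 512 --ddim_steps 50
```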
````diff
@@ -149,6 +134,28 @@ non-EMA to EMA weights. If you want to examine the effect of EMA vs no EMA, we p
 which contain both types of weights. For these, `use_ema=False` will load and use the non-EMA weights.
 
 
+#### Diffusers Integration
+
+Another way to download and sample Stable Diffusion is by using the [diffusers library](https://github.com/huggingface/diffusers/tree/main#new--stable-diffusion-is-now-fully-compatible-with-diffusers)
+```py
+# make sure you're logged in with `huggingface-cli login`
+from torch import autocast
+from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
+
+pipe = StableDiffusionPipeline.from_pretrained(
+    "CompVis/stable-diffusion-v1-3-diffusers",
+    use_auth_token=True
+)
+
+prompt = "a photo of an astronaut riding a horse on mars"
+with autocast("cuda"):
+    image = pipe(prompt)["sample"][0]
+
+image.save("astronaut_rides_horse.png")
+```
+
+
+
 ### Image Modification with Stable Diffusion
 
 By using a diffusion-denoising mechanism as first proposed by [SDEdit](https://arxiv.org/abs/2108.01073), the model can be used for different
````
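
One observation on the snippet this commit moves: `LMSDiscreteScheduler` is imported but never used. A hedged sketch of actually passing it to the pipeline follows; the scheduler parameters are assumptions drawn from diffusers examples of the same period, not from this diff.

```py
# Sketch only: the scheduler parameter values below are assumed, not
# taken from this commit.
from diffusers import LMSDiscreteScheduler, StableDiffusionPipeline

lms = LMSDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
)
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-3-diffusers",
    scheduler=lms,  # override the pipeline's default scheduler
    use_auth_token=True,
)
```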
