vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl #11166

Merged
merged 3 commits into ggml-org:master on Jan 16, 2025

Conversation

jeffbolznv
Collaborator

@jeffbolznv jeffbolznv commented Jan 9, 2025

Shaders are based on cpy.cu.

For #11127.

This supports converting the same set of quants from f32 as CUDA does. It looks like CUDA also supports OP_CPY for Q8_0 to F32, and for any quant to itself. I don't know whether those are required, but they wouldn't be hard to add if so.

I haven't done any perf testing of these. CUDA also uses one thread per CTA, which sounds kind of slow, but maybe it's not a perf-critical operation. In fact, the only testing I've done so far is test-backend-ops. I'll try to pull this into stable-diffusion.cpp to test.

CC @stduhpf

@jeffbolznv jeffbolznv requested a review from 0cc4m January 9, 2025 20:52
@github-actions github-actions bot added the Vulkan (Issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) labels on Jan 9, 2025
@jeffbolznv
Collaborator Author

stable-diffusion.cpp "works" with this change, but loading the models is way slower and seems to be doing the quantization on the CPU. @stduhpf any idea?

@stduhpf
Contributor

stduhpf commented Jan 9, 2025

Thanks a lot! It seems to work very well for supported types.

loading the models is way slower and seems to be doing the quantization on the CPU

Are you talking about quantization of the LoRA or of the base model? If it's the base model, I think this is the expected behaviour even without these changes, although it would be nice to quantize on the GPU now that the shaders exist for it. It's also single-threaded, so it takes forever with larger models.

I didn't notice any slowness loading the LoRA, and it looks like it was using the GPU as expected.

@jeffbolznv
Collaborator Author

Yeah, I think it was the base model. Thanks for clarifying.

@slaren
Member

slaren commented Jan 10, 2025

In llama.cpp, F32 -> Quant is needed for KV quantization, and Quant -> F32 conversion is used for context shifts when quantizing the K cache.

@jeffbolznv
Collaborator Author

We also have GET_ROWS supporting dequant. Does context shifting use CPY or GET_ROWS?

@slaren
Member

slaren commented Jan 10, 2025

It uses CPY.
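
To make the two directions concrete, here is a minimal sketch of the ops involved. This is not the actual llama.cpp graph-building code; the helper and tensor names are illustrative, and it assumes a quantized (e.g. Q8_0) K-cache view created elsewhere.

  // Hedged sketch: both directions show up as GGML_OP_CPY in the graph.
  // Tensor names and this helper are illustrative, not llama.cpp internals.
  #include "ggml.h"

  static void sketch_kv_copies(struct ggml_context * ctx,
                               struct ggml_tensor  * k_cur,     // f32, K values for the current batch
                               struct ggml_tensor  * k_cache) { // quantized cache view, e.g. GGML_TYPE_Q8_0
      // f32 -> quant: storing new K values into the quantized cache.
      struct ggml_tensor * store = ggml_cpy(ctx, k_cur, k_cache);

      // quant -> f32: a context shift reads the quantized cache back out through f32.
      struct ggml_tensor * k_f32 = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, ggml_nelements(k_cache));
      struct ggml_tensor * load  = ggml_cpy(ctx, k_cache, k_f32);

      (void) store; (void) load; // real code would add these nodes to a ggml_cgraph
  }
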

@jeffbolznv
Collaborator Author

I started implementing q->f32, noticed there weren't backend tests, and when I added them they errored out because ggml_compute_forward_dup doesn't support q->f32. Am I missing something?

  CPY(type_src=q4_0,type_dst=f32,ne=[256,4,4,4],permute=[0,0,0,0]): C:\github\jeffbolznv\llama.cpp\ggml\src\ggml-cpu\ggml-cpu.c:3996: fatal error

@github-actions github-actions bot added the testing (Everything test related) label on Jan 10, 2025
@jeffbolznv
Collaborator Author

I've gone ahead and implemented the missing ggml-cpu function.
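
For readers following along, a rough sketch of what that CPU path has to do. This is not the actual ggml-cpu.c code: the helper name is made up, it goes through the public ggml_get_type_traits()/to_float API, and it assumes both tensors are contiguous, which matches the contiguity note in the commit messages further down.

  // Hedged sketch of a q -> f32 dup on the CPU: dequantize row by row.
  // Assumes src and dst are contiguous, so row r starts at byte offset r * nb[1].
  #include "ggml.h"

  static void sketch_dup_q_to_f32(const struct ggml_tensor * src, struct ggml_tensor * dst) {
      const struct ggml_type_traits * traits = ggml_get_type_traits(src->type);
      const int64_t nrows = ggml_nrows(src);
      const int64_t ne0   = src->ne[0];

      for (int64_t ir = 0; ir < nrows; ++ir) {
          const void * src_row = (const char *) src->data + ir * src->nb[1];
          float      * dst_row = (float *) ((char *) dst->data + ir * dst->nb[1]);
          traits->to_float(src_row, dst_row, ne0); // dequantize one row of ne0 values
      }
  }
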

Member

@slaren slaren left a comment

The CPU changes look good.

Collaborator

@0cc4m 0cc4m left a comment

Looks good, and this enables K-cache quantization for Vulkan, very nice. V-cache quantization works as well, but only with flash attention.

@0cc4m 0cc4m merged commit bd38dde into ggml-org:master Jan 16, 2025
48 checks passed
tinglou pushed a commit to tinglou/llama.cpp that referenced this pull request Feb 13, 2025
…l-org#11166)

* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl

Shaders are based on cpy.cu.

* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32

* ggml: copy q->f32 assumes some contiguity in the destination
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Feb 26, 2025
…l-org#11166)

* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl

Shaders are based on cpy.cu.

* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32

* ggml: copy q->f32 assumes some contiguity in the destination
mglambda pushed a commit to mglambda/llama.cpp that referenced this pull request Mar 8, 2025
…l-org#11166)

* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl

Shaders are based on cpy.cu.

* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32

* ggml: copy q->f32 assumes some contiguity in the destination
@stduhpf
Contributor

stduhpf commented Mar 20, 2025

There's a small problem I noticed regarding the CPY op for q4_1 (and q4_0?). It doesn't have a noticeable effect when running inference, but when running test-backend-ops, I sometimes get failures with NMSE barely above the threshold.
Here are the outputs of a few test-backend-ops runs, filtered for failures:

ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 6800 (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | matrix cores: none
ggml_vulkan: 1 = AMD Radeon RX 5700 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | matrix cores: none
  CPY(type_src=f32,type_dst=q4_1,ne=[256,4,4,4],permute=[0,0,0,0]): [CPY] NMSE = 0.000001012 > 0.000001000 FAIL
  CPY(type_src=f32,type_dst=q4_1,ne=[256,2,3,4],permute=[0,2,1,3]): [CPY] NMSE = 0.000001093 > 0.000001000 FAIL
  Backend Vulkan0: FAIL
  CPY(type_src=f32,type_dst=q4_0,ne=[256,2,3,4],permute=[0,2,1,3]): [CPY] NMSE = 0.000007574 > 0.000001000 FAIL
  CPY(type_src=f32,type_dst=q4_1,ne=[256,4,4,4],permute=[0,0,0,0]): [CPY] NMSE = 0.000001096 > 0.000001000 FAIL
  CPY(type_src=f32,type_dst=q4_1,ne=[256,2,3,4],permute=[0,2,1,3]): [CPY] NMSE = 0.000001126 > 0.000001000 FAIL
  Backend Vulkan1: FAIL
FAIL
  CPY(type_src=f32,type_dst=q4_1,ne=[256,2,3,4],permute=[0,2,1,3]): [CPY] NMSE = 0.000001085 > 0.000001000 FAIL
  Backend Vulkan0: FAIL
  CPY(type_src=f32,type_dst=q4_1,ne=[256,4,4,4],permute=[0,0,0,0]): [CPY] NMSE = 0.000001035 > 0.000001000 FAIL
  Backend Vulkan1: FAIL
FAIL
  CPY(type_src=f32,type_dst=q4_1,ne=[256,4,4,4],permute=[0,0,0,0]): [CPY] NMSE = 0.000001088 > 0.000001000 FAIL
  CPY(type_src=f32,type_dst=q4_1,ne=[256,2,3,4],permute=[0,2,1,3]): [CPY] NMSE = 0.000001006 > 0.000001000 FAIL
  Backend Vulkan0: FAIL
FAIL
  CPY(type_src=f32,type_dst=q4_1,ne=[256,4,4,4],permute=[0,0,0,0]): [CPY] NMSE = 0.000001005 > 0.000001000 FAIL
  Backend Vulkan0: FAIL
  CPY(type_src=f32,type_dst=q4_1,ne=[256,2,3,4],permute=[0,2,1,3]): [CPY] NMSE = 0.000001033 > 0.000001000 FAIL
  Backend Vulkan1: FAIL
FAIL

@0cc4m
Collaborator

0cc4m commented Mar 20, 2025

That kind of issue can happen due to small numerical differences between implementations and devices, in combination with the randomly generated test data. See, for example, #11972.

@jeffbolznv
Collaborator Author

Randomness is a possibility, but I haven't seen it happen for these tests myself. I wonder if AMD could be using a different rounding mode?

@stduhpf
Contributor

stduhpf commented Mar 20, 2025

Yes, it's probably a rounding error. I wonder if it also happens with the other quants, but the error is just too small to exceed the threshold.

@jeffbolznv
Collaborator Author

Can you try adding spirv_execution_mode(capabilities = [4467], 4462, 16); // RoundingModeRTE, 16 bits in the shader?

@stduhpf
Contributor

stduhpf commented Mar 20, 2025

OK, I managed to get it to compile (it requires #extension GL_EXT_spirv_intrinsics : enable).

It does fix the failure in test-backend-ops, nice find.

@jeffbolznv
Collaborator Author

Cool. Do you want to make the full fix (IIRC we need to compile two separate versions)? If not, I can try it soon.

@stduhpf
Contributor

stduhpf commented Mar 20, 2025

IIRC we need to compile two separate versions

I'm not sure what you mean by that.

@jeffbolznv
Collaborator Author

The shader source needs the #if RTE16 guard, vulkan-shaders-gen.cpp compiles two versions (with and without RTE16), and then ggml-vulkan.cpp selects which one to use based on float_controls_rte_fp16.
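
For reference, the device-side half of that selection is a standard Vulkan float-controls query; a minimal sketch follows (the function name is illustrative and this is not the exact ggml-vulkan.cpp code, which caches the result during device init):

  // Hedged sketch: query whether the device can round fp16 results to nearest-even.
  // Requires Vulkan 1.2 (or VK_KHR_shader_float_controls).
  #include <vulkan/vulkan.h>

  static bool sketch_supports_fp16_rte(VkPhysicalDevice physical_device) {
      VkPhysicalDeviceFloatControlsProperties float_controls = {};
      float_controls.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FLOAT_CONTROLS_PROPERTIES;

      VkPhysicalDeviceProperties2 props2 = {};
      props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
      props2.pNext = &float_controls;

      vkGetPhysicalDeviceProperties2(physical_device, &props2);

      // If true, the RTE16 shader variant can be selected; otherwise fall back
      // to the variant compiled without the rounding-mode execution mode.
      return float_controls.shaderRoundingModeRTEFloat16 == VK_TRUE;
  }
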

@stduhpf
Contributor

stduhpf commented Mar 20, 2025

#if RTE16
#extension GL_EXT_spirv_intrinsics : enable
spirv_execution_mode(capabilities = [4467], 4462, 16); // RoundingModeRTE, 16 bits
#endif // RTE16

Something like this?

@jeffbolznv
Collaborator Author

For the shader, yes, but the other two parts are still necessary because not all implementations necessarily support these rounding modes.
