Error converting gemma-1.1-7b-it to gguf. #7964

Closed
0wwafa opened this issue Jun 16, 2024 · 6 comments


0wwafa commented Jun 16, 2024

Model: google/gemma-1.1-7b-it

python llama.cpp/convert-hf-to-gguf.py --outtype f16 /content/gemma-1.1-7b-it --outfile /content/gemma-1.1-7b-it.f16.gguf

INFO:hf-to-gguf:Loading model: gemma-1.1-7b-it
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:Set model tokenizer
INFO:gguf.vocab:Setting special token type bos to 2
INFO:gguf.vocab:Setting special token type eos to 1
INFO:gguf.vocab:Setting special token type unk to 3
INFO:gguf.vocab:Setting special token type pad to 0
INFO:gguf.vocab:Setting add_bos_token to True
INFO:gguf.vocab:Setting add_eos_token to False
INFO:gguf.vocab:Setting chat_template to {{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '<start_of_turn>' + role + '
' + message['content'] | trim + '<end_of_turn>
' }}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model
'}}{% endif %}
INFO:gguf.vocab:Setting special token type prefix to 67
INFO:gguf.vocab:Setting special token type suffix to 69
INFO:gguf.vocab:Setting special token type middle to 68
WARNING:gguf.vocab:No handler for special token type fsep with id 70 - skipping
INFO:gguf.vocab:Setting special token type eot to 107
INFO:gguf.vocab:Setting chat_template to {{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '<start_of_turn>' + role + '
' + message['content'] | trim + '<end_of_turn>
' }}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model
'}}{% endif %}
Traceback (most recent call last):
  File "/content/llama.cpp/convert-hf-to-gguf.py", line 2881, in <module>
    main()
  File "/content/llama.cpp/convert-hf-to-gguf.py", line 2866, in main
    model_instance.set_vocab()
  File "/content/llama.cpp/convert-hf-to-gguf.py", line 2250, in set_vocab
    special_vocab.add_to_gguf(self.gguf_writer)
  File "/content/llama.cpp/gguf-py/gguf/vocab.py", line 73, in add_to_gguf
    gw.add_chat_template(self.chat_template)
  File "/content/llama.cpp/gguf-py/gguf/gguf_writer.py", line 565, in add_chat_template
    self.add_string(Keys.Tokenizer.CHAT_TEMPLATE, value)
  File "/content/llama.cpp/gguf-py/gguf/gguf_writer.py", line 206, in add_string
    self.add_key_value(key, val, GGUFValueType.STRING)
  File "/content/llama.cpp/gguf-py/gguf/gguf_writer.py", line 166, in add_key_value
    raise ValueError(f'Duplicated key name {key!r}')
ValueError: Duplicated key name 'tokenizer.chat_template'
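
The log above shows the cause: tokenizer.chat_template is written twice (the two identical "Setting chat_template to ..." blocks), and the writer rejects duplicate keys. A minimal sketch of that guard, assuming a dict-backed key/value store (MiniWriter is an illustrative stand-in, not the real gguf.GGUFWriter):

from typing import Any


class MiniWriter:
    """Toy stand-in for the key/value bookkeeping in gguf_writer.py."""

    def __init__(self) -> None:
        self.kv_data: dict[str, Any] = {}

    def add_key_value(self, key: str, val: Any) -> None:
        # The same duplicate-key guard that fires in the traceback above.
        if key in self.kv_data:
            raise ValueError(f'Duplicated key name {key!r}')
        self.kv_data[key] = val


w = MiniWriter()
w.add_key_value('tokenizer.chat_template', '{{ bos_token }}...')
w.add_key_value('tokenizer.chat_template', '{{ bos_token }}...')  # raises ValueError, as in the log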

0wwafa commented Jun 16, 2024

For now I commented out the check that raises the exception in gguf-py/gguf/gguf_writer.py:

def add_key_value(self, key: str, val: Any, vtype: GGUFValueType) -> None:
    # Duplicate-key check disabled as a temporary workaround:
    # if key in self.kv_data:
    #     raise ValueError(f'Duplicated key name {key!r}')
    self.kv_data[key] = GGUFValue(value=val, type=vtype)  # rest of the function unchanged
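
A less invasive variant would be to warn and overwrite instead of raising, so the check still surfaces genuinely duplicated keys (a stopgap sketch assuming the kv_data/GGUFValue storage shown above, not the upstream fix):

def add_key_value(self, key: str, val: Any, vtype: GGUFValueType) -> None:
    if key in self.kv_data:
        # Let the last writer win instead of aborting the whole conversion.
        print(f'warning: overwriting duplicated key {key!r}')
    self.kv_data[key] = GGUFValue(value=val, type=vtype)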


Galunid (Collaborator) commented Jun 16, 2024


Galunid (Collaborator) commented Jun 29, 2024

Problem no longer occurs on master.

Galunid closed this as completed Jun 29, 2024
0xmashallah commented

I'm still having the same issue unfortunately :(


0wwafa commented Jun 30, 2024

> I'm still having the same issue unfortunately :(

Did you try gemma-2-9b-it?
I converted and quantized it. You can find it here: https://huggingface.co/ZeroWw/gemma-2-9b-it-GGUF

fifiand1 commented

I'm facing the same issue with convert_hf_to_gguf on the main branch for codellama/CodeLlama-7b-Instruct-hf.
