Commit a7d6214 (1 parent: 5b70e7d)
New conversion script (#545)

16 files changed: +1075 −1305 lines

README.md (+2 −2)
@@ -150,10 +150,10 @@ ls ./models
 65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model

 # install Python dependencies
-python3 -m pip install torch numpy sentencepiece
+python3 -m pip install -r requirements.txt

 # convert the 7B model to ggml FP16 format
-python3 convert-pth-to-ggml.py models/7B/ 1
+python3 convert.py models/7B/

 # quantize the model to 4-bits (using method 2 = q4_0)
 ./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
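For context, here is the quickstart sequence as it reads in README.md after this change. This is only the resulting snippet assembled from the hunk above; the commands, paths, and the q4_0 method id are taken verbatim from it.

```sh
# install Python dependencies
python3 -m pip install -r requirements.txt

# convert the 7B model to ggml FP16 format (new unified script)
python3 convert.py models/7B/

# quantize the model to 4-bits (using method 2 = q4_0)
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
```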

convert-ggml-to-pth.py (−299)
This file was deleted.

0 commit comments
