
Commit 1eb111a

Make -funsafe-math-optimizations optional.
1 parent f9da392 · commit 1eb111a

File tree

1 file changed, +8 -2 lines changed

README.md (+8 -2)
@@ -19,10 +19,10 @@ Let's just run a baby Llama 2 model in C. You need a model checkpoint. Download
 wget https://karpathy.ai/llama2c/model.bin -P out
 ```
 
-(if that doesn't work try [google drive](https://drive.google.com/file/d/1aTimLdx3JktDXxcHySNrZJOOk8Vb1qBR/view?usp=share_link)). Compile and run the C code:
+(if that doesn't work try [google drive](https://drive.google.com/file/d/1aTimLdx3JktDXxcHySNrZJOOk8Vb1qBR/view?usp=share_link)). Compile and run the C code (check [howto](#howto) for faster optimization flags):
 
 ```bash
-gcc -O3 -funsafe-math-optimizations -o run run.c -lm
+gcc -O3 -o run run.c -lm
 ./run out/model.bin
 ```

@@ -64,6 +64,12 @@ wget https://karpathy.ai/llama2c/model.bin -P out
 
 Once we have the model.bin file, we can inference in C. Compile the C code first:
 
+```bash
+gcc -O3 -o run run.c -lm
+```
+
+Alternatively, if you want to increase the inference performance and are confident in using unsafe math optimizations, which are probably fine for this application, you can compile the code with the `-funsafe-math-optimizations` flag as shown below:
+
 ```bash
 gcc -O3 -funsafe-math-optimizations -o run run.c -lm
 ```
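For context on why the flag is now opt-in: in GCC, `-funsafe-math-optimizations` permits transformations such as reassociating floating-point expressions (it implies `-fassociative-math`, among others), which can change results in the last bits. The snippet below is a minimal, hypothetical sketch of that effect (it is not part of run.c or this commit); it computes the same sum in two groupings that are equal in exact arithmetic but not in `float` arithmetic:

```c
/* unsafe_math_demo.c -- hypothetical illustration, not part of the repository.
 * Floating-point addition is not associative: regrouping a sum can change
 * the result. -funsafe-math-optimizations licenses the compiler to do such
 * regrouping, which is why the default build now leaves the flag out. */
#include <stdio.h>

int main(void) {
    /* volatile prevents constant folding, so the arithmetic runs as written */
    volatile float a = 1e8f, b = -1e8f, c = 1.0f;

    float left_to_right = (a + b) + c;  /* large terms cancel first -> 1.0 */
    float regrouped     = a + (b + c);  /* c is absorbed by b       -> 0.0 */

    printf("(a + b) + c = %g\n", left_to_right);
    printf("a + (b + c) = %g\n", regrouped);
    return 0;
}
```

Compiled with a plain `gcc -O3 -o unsafe_math_demo unsafe_math_demo.c`, the two lines print different values. Differences of this size are generally harmless for Llama 2 inference, which is why the README describes the flag as "probably fine", but the `-O3`-only build is now the default.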
