
Commit 872ea1a

ggerganov authored and arthw committed
server : enable cache_prompt by default (ggml-org#10501)
ggml-ci
1 parent c869ec3 commit 872ea1a

File tree

examples/server/README.md
examples/server/server.cpp

2 files changed: +3 -3 lines changed

2 files changed

+3
-3
lines changed

examples/server/README.md (+1 -1)
@@ -412,7 +412,7 @@ node index.js
 
 `id_slot`: Assign the completion task to an specific slot. If is -1 the task will be assigned to a Idle slot. Default: `-1`
 
-`cache_prompt`: Re-use KV cache from a previous request if possible. This way the common prefix does not have to be re-processed, only the suffix that differs between the requests. Because (depending on the backend) the logits are **not** guaranteed to be bit-for-bit identical for different batch sizes (prompt processing vs. token generation) enabling this option can cause nondeterministic results. Default: `false`
+`cache_prompt`: Re-use KV cache from a previous request if possible. This way the common prefix does not have to be re-processed, only the suffix that differs between the requests. Because (depending on the backend) the logits are **not** guaranteed to be bit-for-bit identical for different batch sizes (prompt processing vs. token generation) enabling this option can cause nondeterministic results. Default: `true`
 
 `samplers`: The order the samplers should be applied in. An array of strings representing sampler type names. If a sampler is not set, it will not be used. If a sampler is specified more than once, it will be applied multiple times. Default: `["dry", "top_k", "typ_p", "top_p", "min_p", "xtc", "temperature"]` - these are all the available values.
 

examples/server/server.cpp (+2 -2)
@@ -111,7 +111,7 @@ struct server_static_file {
 
 struct slot_params {
     bool stream = true;
-    bool cache_prompt = false; // remember the prompt to avoid reprocessing all prompt
+    bool cache_prompt = true; // remember the prompt to avoid reprocessing all prompt
 
     int32_t n_keep = 0; // number of tokens to keep from initial prompt
     int32_t n_discard = 0; // number of tokens after n_keep that may be discarded when shifting context, 0 defaults to half
@@ -883,7 +883,7 @@ struct server_context {
         }
 
         slot.params.stream = json_value(data, "stream", false);
-        slot.params.cache_prompt = json_value(data, "cache_prompt", false);
+        slot.params.cache_prompt = json_value(data, "cache_prompt", true);
         slot.params.n_predict = json_value(data, "n_predict", json_value(data, "max_tokens", defaults.n_predict));
         slot.params.n_indent = json_value(data, "n_indent", defaults.n_indent);
         slot.params.n_keep = json_value(data, "n_keep", defaults.n_keep);
