* feat: Functionary v3 support
* feat: Mistral chat wrapper
* feat: move `seed` option to the prompt level
* feat: make `LlamaEmbedding` an object
* feat: `HF_TOKEN` support for reading GGUF file metadata
* feat: `inspect estimate` command
* feat(`TemplateChatWrapper`): custom history template for each message role
* feat: extract all prebuilt binaries to external modules
* feat: more helpful `inspect gpu` command
* feat: combine model downloaders
* feat: simplify `TokenBias`
* feat: simplify compatibility detection
* feat: better `threads` default value
* feat: improve Llama 3.1 chat template detection
* feat: `--gpuLayers max` and `--contextSize max` flag support for `inspect estimate` command
* feat: iterate all tokenizer tokens
* feat: failed context creation automatic remedy
* feat: abort generation in CLI commands
* feat(electron example template): update badge, scroll anchoring, table support
* refactor: move `download`, `build` and `clear` commands to be subcommands of a `source` command
* docs: new docs
* fix: adapt to `llama.cpp` sampling refactor
* fix: Llama 3.1 chat wrapper standard chat history
* fix: Llama 3 Instruct function calling
* fix: don't preload prompt in the `chat` command when using `--printTimings` or `--meter`
* fix: change `autoDisposeSequence` default to `false`
* fix: more stable Jinja template matching
* fix: improve performance of parallel evaluation from multiple contexts
* build(CI): resolve next version before release, update documentation website without a release
* build: only create a GitHub discussion on major or minor releases
* chore: update models list
* chore: remove unused field from `TokenMeter`
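Several of the API-level changes above come together in ordinary usage, notably moving the `seed` option to the prompt level. The following is a minimal sketch assuming the node-llama-cpp v3 API; the model path is a placeholder you would adjust to your own setup:

```typescript
import path from "path";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

// Load a local GGUF model (placeholder path — point this at a real model file)
const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: path.join(process.cwd(), "models", "model.gguf")
});

const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// `seed` is now passed per prompt rather than fixed at context creation,
// so individual generations can be made reproducible independently
const answer = await session.prompt("Summarize llama.cpp in one sentence.", {
    seed: 1234
});
console.log(answer);
```

Because the seed travels with each `prompt()` call, two calls on the same session can use different seeds without recreating the context.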
`.github/PULL_REQUEST_TEMPLATE.md` (+1 −1)

```diff
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -29,4 +29,4 @@
 - [ ] This pull request links relevant issues as `Fixes #0000`
 - [ ] There are new or updated unit tests validating the change
 - [ ] Documentation has been updated to reflect this change
-- [ ] The new commits and pull request title follow conventions explained in [pull request guidelines](https://withcatai.github.io/node-llama-cpp/guide/contributing) (PRs that do not follow this convention will not be merged)
+- [ ] The new commits and pull request title follow conventions explained in [pull request guidelines](https://node-llama-cpp.withcat.ai/guide/contributing) (PRs that do not follow this convention will not be merged)
```