llama.cpp/examples
Latest commit: be87b6ed20 by Gary Linscott, 2 years ago
perplexity : add support for batch size to `--perplexity` (#407)

* Add support to batch size for perplexity
* Revert "Fix memory allocation issues and seg faults" (this reverts commit 4870e455b3.)
* update from merge
* Remove perplexity from main
* updates
* Update batch size for efficiency
Name             Last commit                                                      Age
benchmark/       fix whitespace (#944)                                            2 years ago
embedding/       Fix whitespace, add .editorconfig, add GitHub workflow (#883)    2 years ago
main/            Fix whitespace, add .editorconfig, add GitHub workflow (#883)    2 years ago
perplexity/      perplexity : add support for batch size to `--perplexity` (#407) 2 years ago
quantize/        Add enum llama_ftype, sync ggml_type to model files (#709)       2 years ago
quantize-stats/  llama : merge llama_internal.h into llama.h                      2 years ago
CMakeLists.txt   Add quantize-stats command for testing quantization (#728)       2 years ago
Miku.sh          Fix whitespace, add .editorconfig, add GitHub workflow (#883)    2 years ago
alpaca.sh        examples : add -n to alpaca and gpt4all scripts (#706)           2 years ago
chat-13B.bat     Create chat-13B.bat (#592)                                       2 years ago
chat-13B.sh      Move chat scripts into "./examples"                              2 years ago
chat.sh          If n_predict == -1, generate forever                             2 years ago
common.cpp       common : remove unnecessary includes (#947)                      2 years ago
common.h         Rewrite loading code to try to satisfy everyone:                 2 years ago
gpt4all.sh       examples : add -n to alpaca and gpt4all scripts (#706)           2 years ago
reason-act.sh    add example of re-act pattern (#583)                             2 years ago