llama.cpp/examples

Latest commit c12b14b77f by Ivan Komarov, 1 year ago: benchmark : fix result validation in benchmark-q4_0-matmult (#987)
| Name           | Last commit message                                                          | Age        |
| -------------- | ---------------------------------------------------------------------------- | ---------- |
| benchmark      | benchmark : fix result validation in benchmark-q4_0-matmult (#987)           | 1 year ago |
| embedding      | Fix whitespace, add .editorconfig, add GitHub workflow (#883)                | 1 year ago |
| main           | Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982) | 1 year ago |
| perplexity     | perplexity : add support for batch size to `--perplexity` (#407)             | 1 year ago |
| quantize       | Add enum llama_ftype, sync ggml_type to model files (#709)                   | 1 year ago |
| quantize-stats | Expose type name from ggml (#970)                                            | 1 year ago |
| CMakeLists.txt | Add quantize-stats command for testing quantization (#728)                   | 1 year ago |
| Miku.sh        | Fix whitespace, add .editorconfig, add GitHub workflow (#883)                | 1 year ago |
| alpaca.sh      | examples : add -n to alpaca and gpt4all scripts (#706)                       | 1 year ago |
| chat-13B.bat   | Create chat-13B.bat (#592)                                                   | 1 year ago |
| chat-13B.sh    | Move chat scripts into "./examples"                                          | 1 year ago |
| chat.sh        | If n_predict == -1, generate forever                                         | 1 year ago |
| common.cpp     | Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982) | 1 year ago |
| common.h       | Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982) | 1 year ago |
| gpt4all.sh     | examples : add -n to alpaca and gpt4all scripts (#706)                       | 1 year ago |
| reason-act.sh  | add example of re-act pattern (#583)                                         | 1 year ago |