Commit Graph

28 Commits (a017390358cdb23fffb30988dc84bb190d0403ca)

Author SHA1 Message Date
Slaren a017390358 Initial windows support (untested) 1 year ago
Slaren ac184d5147 Always initialize mm_addr and mm_length in llama_model 1 year ago
Slaren 276e5b7811 Unmap the file in llama_free 1 year ago
Slaren d68c5dc435 Make mmap_file static 1 year ago
Slaren 64bde3ffd4 Fix ggml_init_params in quantize 1 year ago
Slaren c03ae8dca1 Add mmap support for model files 1 year ago
Georgi Gerganov 0ba76c1e73 llama : fix compile warnings when reading the vocab 1 year ago
Maël Kerbiriou 41318d708e llama : use the same threshold for OpenBLAS and ggml thread limiting (#577) 1 year ago
thement d0aaff571c py : add temporary script to convert old ggml files to newer version (#539)
Co-authored-by: Jakub Horak <jakub.horak@ibawizard.net>
1 year ago
Stephan Walter 436e561931 all : be more strict about converting float to double (#458)
* Be more strict about converting float to double

* Test equivalence of round, SILU implementations

Test module is commented out in CMakeLists.txt because the tests may
take a long time, depending on how much the compiler optimizes.

* Fix softmax in perplexity.cpp

* all : prefer float over double where appropriate

* perplexity : add <cmath>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
Stephan Walter c1f885067c ggml : introduce structs for the q4 data blocks (#356)
* Introduce structs for the q4 data blocks

* ggml : rename quant struct variables + fix ARM_NEON

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
Georgi Gerganov 03f7e33560 Cleanup STL headers + fix embedding examples + minor stuff 1 year ago
Georgi Gerganov 4640eff23d Don't interfere with BLAS for large prompts by running only 1 thread 1 year ago
slaren 29b7baab67 Add timings for the prompt evaluation (#478) 1 year ago
Georgi Gerganov 2a2e63ce05 Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS 1 year ago
Jed Fox 58e6c9f36f Add support for file load progress reporting callbacks (#434)
* File load progress reporting

* Move llama_progress_handler into llama_context_params

* Renames

* Use seekg to find file size instead

* More correct load progress

* Call progress callback more frequently

* Fix typo
1 year ago
Chris Kuehl 6f1ee4b640 Fix crash for 65B model with pre-allocated memory (#485) 1 year ago
Georgi Gerganov 7a9b6c3a8b Reduce memory usage and allocate enough memory for largest context (#473)
* Reduce memory usage and allocate enough memory for large contexts

* Simpler scratch buffer usage

* Reenable BLAS for quantized mul_mat

* Fix number of layers in 30B and 65B

* Fix KV cache size for F32
1 year ago
Georgi Gerganov 31572d9665 Temporary bump the memory buffer size - hopefully fix issues from 483bab2e 1 year ago
Georgi Gerganov afd220d9c6 Properly free llama_context on failure 1 year ago
comex 563cdc391d Support calling mlock() on loaded model data on Linux and macOS (#453)
* Support calling mlock() on loaded model data on Linux and macOS

This is enabled by a new --mlock command line option.

Using mlock() disables swapping and memory compression for the model
data.  Doing so can be useful on systems where the model takes up a
large fraction of system RAM.  In my experience, macOS is quite eager to
start compressing llama.cpp's memory, which then makes it halt for a few
seconds while it decompresses, even with a model that uses "only" 25GB
out of 32GB.

Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.

In theory it should be possible to support this on Windows as well using
VirtualLock(), but I'm not much of a Windows user.

* Update llama.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
Luciano 8d4a855c24 Add embedding mode with arg flag. Currently working (#282)
* working but ugly

* add arg flag, not working on embedding mode

* typo

* Working! Thanks to @nullhook

* make params argument instead of hardcoded boolean. remove useless time check

* start doing the instructions but not finished. This probably doesn't compile

* Embeddings extraction support

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
Georgi Gerganov 3cd8dde0d1 Revert "Fix memory allocation issues and seg faults"
This reverts commit 4870e455b3.

Will provide the correct fix later
1 year ago
Georgi Gerganov 4870e455b3 Fix memory allocation issues and seg faults 1 year ago
Georgi Gerganov 483bab2e3d Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)
Should make results reproducible for different number of threads and batch sizes
1 year ago
Yusuf Kağan Hanoğlu d5850c53ca Add missing header for memcpy (#386)
fixed: memcpy is not defined
1 year ago
Georgi Gerganov 928480ef5b Init llama_context_params properly from CLI (#370) 1 year ago
Georgi Gerganov f5a77a629b Introduce C-style API (#370)
* Major refactoring - introduce C-style API

* Clean up

* Add <cassert>

* Add <iterator>

* Add <algorithm> ....

* Fix timing reporting and accumulation

* Measure eval time only for single-token calls

* Change llama_tokenize return meaning
1 year ago