Commit Graph

343 Commits (723dac55fa2ba7adc6e3fc8609781d1ad0378906)

Author SHA1 Message Date
Georgi Gerganov 6b6dbc8910
Remove obsolete assert and fix compiler warning 1 year ago
Georgi Gerganov 2a2e63ce05
Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS 1 year ago
anzz1 e899bf54b2
bounds checking for input prefix (#492) 1 year ago
anzz1 fbd4d38c64
feat: '--in-prefix STRING' option (#426)
Prefix user inputs with a string
1 year ago
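A hedged sketch of what the option does, in C (the helper name and parameter handling are illustrative, not the actual main.cpp code):

```c
#include <stdio.h>

/* Illustrative helper: prepend the --in-prefix string to the line the
 * user typed, before the result is tokenized. `in_prefix` would come
 * from the parsed command-line parameters. */
static void build_input(char *dst, size_t dst_size,
                        const char *in_prefix, const char *user_line) {
    snprintf(dst, dst_size, "%s%s", in_prefix ? in_prefix : "", user_line);
}
```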
Jed Fox 58e6c9f36f
Add support for file load progress reporting callbacks (#434)
* File load progress reporting

* Move llama_progress_handler into llama_context_params

* Renames

* Use seekg to find file size instead

* More correct load progress

* Call progress callback more frequently

* Fix typo
1 year ago
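A minimal sketch of the callback shape this commit describes, assuming a load fraction plus an opaque user-data pointer (type and field names follow the commit summary and may differ from the actual header):

```c
#include <stdio.h>

/* Assumed callback shape: progress in [0, 1] plus user data. */
typedef void (*llama_progress_callback)(float progress, void *user_data);

static void print_progress(float progress, void *user_data) {
    (void)user_data;
    fprintf(stderr, "\rloading: %3.0f%%", progress * 100.0f);
}

/* Per the commit, the handler is set through llama_context_params,
 * e.g. params.progress_callback = print_progress; */
```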
Doomsdayrs 36d07532ef
Add missing struct annotation (#483)
`llama_sample_top_p_top_k` was missing the struct annotation on line 126.

This causes a compile error when the header is parsed by the Kotlin C interop generator.

This commit fixes the above issue by adding the struct annotation.
1 year ago
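For illustration, the difference the annotation makes to a strict C parser (declaration simplified, not the full signature from the header):

```c
typedef int llama_token;   /* simplified for illustration */
struct llama_context;      /* opaque forward declaration   */

/* C requires the `struct` keyword for a type declared this way, while
 * C++ accepts the bare name. Adding the keyword lets the header parse
 * as plain C, which is what the Kotlin C interop generator expects: */
llama_token llama_sample_top_p_top_k(struct llama_context *ctx,
                                     const llama_token *last_n_tokens,
                                     int last_n_tokens_size,
                                     int top_k, float top_p,
                                     float temp, float repeat_penalty);
```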
Chris Kuehl 6f1ee4b640
Fix crash for 65B model with pre-allocated memory (#485) 1 year ago
Georgi Gerganov 8520fc310e
Disable BLAS altogether - the bug is not just for quantized mat mul 1 year ago
Georgi Gerganov b3f460e941
Disable BLAS branch in mul_mat - seems there is a bug 1 year ago
Georgi Gerganov 04c6f5ed6f
Immediately start processing the prompt before user input has been provided (#476) 1 year ago
Georgi Gerganov 7a9b6c3a8b
Reduce memory usage and allocate enough memory for largest context (#473)
* Reduce memory usage and allocate enough memory for large contexts

* Simpler scratch buffer usage

* Reenable BLAS for quantized mul_mat

* Fix number of layers in 30B and 65B

* Fix KV cache size for F32
1 year ago
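A sketch of the scratch-buffer mechanism these changes lean on, using the ggml call of that era (buffer handling illustrative):

```c
#include <stddef.h>
#include "ggml.h"

/* Route intermediate tensors into a caller-owned scratch buffer so the
 * main context only holds long-lived allocations such as the KV cache. */
static void use_scratch(struct ggml_context *ctx, void *buf, size_t size) {
    struct ggml_scratch scratch = { /*offs=*/0, /*size=*/size, /*data=*/buf };
    ggml_set_scratch(ctx, scratch);
}
```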
Georgi Gerganov 31572d9665
Temporarily bump the memory buffer size - hopefully fixes issues from 483bab2e 1 year ago
Gary Mulder f4f5362edb
Update README.md (#444)
Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.
1 year ago
rabidcopy 863f65e2e3
fix instruct mode (#445)
Changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.
1 year ago
Georgi Gerganov afd220d9c6
Properly free llama_context on failure 1 year ago
Cameron Kaiser 481044d50c
additional optimizations for POWER9 (#454) 1 year ago
comex 563cdc391d
Support calling mlock() on loaded model data on Linux and macOS (#453)
* Support calling mlock() on loaded model data on Linux and macOS

This is enabled by a new --mlock command line option.

Using mlock() disables swapping and memory compression for the model
data.  Doing so can be useful on systems where the model takes up a
large fraction of system RAM.  In my experience, macOS is quite eager to
start compressing llama.cpp's memory, which then makes it halt for a few
seconds while it decompresses, even with a model that uses "only" 25GB
out of 32GB.

Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.

In theory it should be possible to support this on Windows as well using
VirtualLock(), but I'm not much of a Windows user.

* Update llama.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
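A hedged sketch of the POSIX call the --mlock option wraps (error handling simplified; not the repository's actual code):

```c
#include <stdio.h>
#include <sys/mman.h>

/* Pin the loaded model buffer in RAM so the OS will neither swap nor
 * compress it. Works on Linux and macOS; per the commit, Windows would
 * need VirtualLock() instead. */
static int lock_model_memory(void *addr, size_t len) {
    if (mlock(addr, len) != 0) {
        perror("mlock");   /* often fails if RLIMIT_MEMLOCK is too low */
        return -1;
    }
    return 0;
}
```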
Luciano 8d4a855c24
Add embedding mode with arg flag. Currently working (#282)
* working but ugly

* add arg flag, not working on embedding mode

* typo

* Working! Thanks to @nullhook

* make params argument instead of hardcoded boolean. remove useless time check

* start doing the instructions but not finished. This probably doesn't compile

* Embeddings extraction support

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
Georgi Gerganov b6b268d441
Add link to Roadmap discussion 1 year ago
Georgi Gerganov 3cd8dde0d1
Revert "Fix memory allocation issues and seg faults"
This reverts commit 4870e455b3.

Will provide the correct fix later
1 year ago
Georgi Gerganov 4870e455b3
Fix memory allocation issues and seg faults 1 year ago
Georgi Gerganov 483bab2e3d
Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)
Should make results reproducible for different numbers of threads and batch sizes
1 year ago
Jed Fox 404e1da38e
Fix quantize script not finding models in parent directory (#428) 1 year ago
Georgi Gerganov 4cc053b6d5
Remove obsolete command from Docker script 1 year ago
Georgi Gerganov 0ba5a3a9a5
Obsolete 1 year ago
rabidcopy 2e17dfd80a
Replace EOS with newline to prevent context/memory being flushed by EOS in interactive mode (#333)
* Improve interactive mode's coherence after EOS

Aims to improve coherence and the ability to resume the interactive session when control is handed back to the user after an end-of-text token is reached.
Not sure what token 13 is or why it seems to help. See conversation for examples.

* Make newline token a constant

* dynamically determine newline token

* relocate previous newline token const

* cleanup whitespace

* print a new line on end of text in interactive

this may need to be looked into further when not using a reverse prompt

* only print manual newline with reverse prompt

Fix formatting of reverse prompts so they don't end up at the end of the current line, while not introducing unnecessary new lines otherwise

* alternate approach to replace end of text tokens

* Inject the reverse prompt again after eos in interactive mode

* tokenize reverse prompt when needed

makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330

* tokenize and inject only first reverse prompt

thanks to tjohnman

* tokenize first reverse prompt once

* add newline token

* add newline token

* tokenize/inject reverse prompt for refactor

this doesn't seem right though

* tokenize nothing for antiprompt if no reverse

* Update main.cpp

* Update main.cpp

* tokenize and inject reverse prompt as needed

this doesn't seem to work if the reverse prompt is tokenized outside earlier on

* not needed

* remove newline token

* remove newline token

* tokenize newline token

* add space to comment

* Update main.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Slaren <2141330+slaren@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
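The substitution at the heart of this PR amounts to something like the following sketch (the token variables are illustrative; the real logic lives in main.cpp's generation loop):

```c
typedef int llama_token;

/* Illustrative: real code asks the library for the EOS id and
 * tokenizes "\n" once at startup instead of hardcoding values. */
static llama_token token_eos;
static llama_token token_newline;

/* In interactive mode, swap an end-of-text token for a newline so the
 * session can resume coherently instead of flushing context. */
static llama_token filter_token(llama_token id, int interactive) {
    return (interactive && id == token_eos) ? token_newline : id;
}
```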
Timmy Knight 20a1a4e09c
Fix GPTQ converter (#423)
* Fix GPTQ converter

* Fix comment

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
nusu-github ad072fc5ad
Generate library with CMake (#430)
* Generate library with CMake

Add a BUILD_SHARED_LIBS option to allow the llama library to be generated.

* Turn ON PIC when BUILD_SHARED_LIBS is ON
1 year ago
anzz1 ea10d3ded2
Command line args bounds checking (#424)
* command line args bounds checking

* unknown and invalid param exit codes 0 -> 1
1 year ago
Ben Siraphob a18c19259a
Fix Nix build 1 year ago
Stephan Walter a50e39c6fe
Revert "Delete SHA256SUMS for now" (#429)
* Revert "Delete SHA256SUMS for now (#416)"

This reverts commit 8eea5ae0e5.

* Remove ggml files until they can be verified
* Remove alpaca json
* Add also model/tokenizer.model to SHA256SUMS + update README

---------

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
1 year ago
Kerfuffle a140219e81
Fix Makefile echo escape codes (by removing them). (#418) 1 year ago
Gary Mulder 8a3e5ef801
Move model section from issue template to README.md (#421)
* Update custom.md

* Removed Model section as it is better placed in README.md

* Updates to README.md model section

* Inserted text that was removed from the issue template about obtaining models from FB, and links to papers describing the various models

* Removed IPFS download links for the Alpaca 7B models, as these look to be in the old data format and probably shouldn't be directly linked to anyway

* Updated the perplexity section to point at Perplexity scores #406 discussion
1 year ago
anzz1 8eea5ae0e5
Delete SHA256SUMS for now (#416)
Delete this for now to avoid confusion since it contains some wrong checksums from the old tokenizer format
Re-add after #374 is resolved
1 year ago
Georgi Gerganov 93208cfb92
Adjust repetition penalty .. 1 year ago
Georgi Gerganov 03ace14cfd
Add link to recent podcast about whisper.cpp and llama.cpp 1 year ago
anzz1 e4412b45e3
CI: CMake: Separate build and test steps (#376)
* CI: Separate Build and Test steps (CMake)

* CI: Make sure build passes before running tests (CMake)

* CI: Standardise step id names
1 year ago
tjohnman f7dc43bc0d
Fix instruct mode broken by PR #354 (#409)
Co-authored-by: Johnman <tjohnman@github>
1 year ago
Gary Mulder ee8a788786
Update issue template so people will use it (#404) 1 year ago
Stephan Walter 69c92298a9
Deduplicate q4 quantization functions (#383)
* Deduplicate q4 quantization functions

* Use const; add basic test

* Re-enable quantization test

* Disable AVX2 flags in CI

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
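For context, a generic sketch of the kind of 4-bit block quantization these functions implement (the block size and rounding are illustrative, not the exact Q4 scheme):

```c
#include <math.h>
#include <stdint.h>

#define QBLOCK 32   /* illustrative block size */

/* Quantize one block of floats to 4-bit values plus a single scale.
 * Each nibble stores a signed value in [-7, 7], offset by 8. */
static void quantize_block_q4(const float *x, uint8_t *out, float *scale) {
    float amax = 0.0f;
    for (int i = 0; i < QBLOCK; i++) {
        const float v = fabsf(x[i]);
        if (v > amax) amax = v;
    }
    const float d  = amax / 7.0f;
    const float id = d != 0.0f ? 1.0f / d : 0.0f;
    *scale = d;
    for (int i = 0; i < QBLOCK; i += 2) {
        const int q0 = (int)roundf(x[i]     * id) + 8;
        const int q1 = (int)roundf(x[i + 1] * id) + 8;
        out[i / 2] = (uint8_t)(q0 | (q1 << 4));
    }
}
```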
Valentyn Bezshapkin 97940520e8
fix: add POSIX functionality for Linux compilation (#51)
* fix: add POSIX functionality for Linux compilation

* fix: older standard for compatibility
1 year ago
tjohnman 305ba6f0e6
Don't force immediate interactive without `-i` (#354)
* Don't force immediate interactive without -i

Sometimes we might want to use a reverse prompt but we want to let the
model generate tokens right after the initial prompt. So we don't force
user input mode if the -i flag wasn't specified and instead let it run
until we encounter the reverse prompt.

This gives us some more flexibility, since it doesn't force the user to
enter a newline if they want to let the model generate text right after
the initial prompt and only be asked for input if the reverse prompt is
encountered.

The `--interactive-first` flag is reintroduced to force the old
behavior. `-r` behaves like `-i` plus introduces a reverse prompt (it
can be specified more than once).

* Update help output.

---------

Co-authored-by: Johnman <tjohnman@github>
1 year ago
Erik Scholz 4122dffff9
cmake: make llama an actual library (#392) 1 year ago
Erik Scholz 56e659a0b2
fix perplexity after c-api refactor (#390)
* preallocate a buffer of fitting size for tokenization (utils.cpp)

* don't create a new std::string (especially here, where it's usually large)
1 year ago
Gary Linscott 40ea807a97
Add details on perplexity to README.md (#395) 1 year ago
Yusuf Kağan Hanoğlu d5850c53ca
Add missing header for memcpy (#386)
fixed: memcpy is not defined
1 year ago
Georgi Gerganov ae44e23ee3
When seed <= 0 - use the clock to generate one 1 year ago
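The fallback described here boils down to something like this sketch (not the repository's exact code):

```c
#include <time.h>

/* A non-positive seed is replaced with one derived from the clock. */
static int resolve_seed(int seed) {
    return seed > 0 ? seed : (int)time(NULL);
}
```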
Georgi Gerganov 928480ef5b
Init llama_context_params properly from CLI (#370) 1 year ago
Georgi Gerganov 56817b1f88
Remove temporary notice and update hot topics 1 year ago
Georgi Gerganov f5a77a629b
Introduce C-style API (#370)
* Major refactoring - introduce C-style API

* Clean up

* Add <cassert>

* Add <iterator>

* Add <algorithm> ....

* Fix timing reporting and accumulation

* Measure eval time only for single-token calls

* Change llama_tokenize return meaning
1 year ago
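A hedged usage sketch of the C-style API this refactor introduced; the entry points below follow the llama.h of that era and may have changed since:

```c
#include <stdbool.h>
#include <stdio.h>
#include "llama.h"

int main(void) {
    struct llama_context_params params = llama_context_default_params();
    struct llama_context *ctx = llama_init_from_file("ggml-model.bin", params);
    if (ctx == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    llama_token tokens[64];
    /* After "Change llama_tokenize return meaning", the call returns
     * the number of tokens written (negative on failure). */
    const int n = llama_tokenize(ctx, "Hello", tokens, 64, /*add_bos=*/true);
    if (n > 0) {
        llama_eval(ctx, tokens, n, /*n_past=*/0, /*n_threads=*/4);
    }

    llama_free(ctx);
    return 0;
}
```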