llama.cpp/tests
Latest commit 527b6fba1d by Didzis Gosko
llama : make model stateless and context stateful (llama_state) (#1797)
* llama : make model stateless and context stateful

* llama : minor cleanup

* llama : update internal API declaration

* Apply suggestions from code review

fix style

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Missing model memory release

* Fix style

* Add deprecated warning for public API function llama_init_from_file

* Update public API use cases: move away from deprecated llama_init_from_file

* Deprecate public API function llama_apply_lora_from_file

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-06-24 11:47:58 +03:00
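For reference, the change described by this commit replaces the single llama_init_from_file call with a separate model load and context creation step, so one stateless model can back several stateful contexts. The following is a minimal migration sketch; the replacement entry points (llama_load_model_from_file, llama_new_context_with_model, llama_free_model, llama_model_apply_lora_from_file) are assumed from the llama.h public header of this period and are not quoted from the diff itself.

```c
// Minimal migration sketch, assuming the post-#1797 public API in llama.h:
//   llama_load_model_from_file / llama_new_context_with_model / llama_free_model.
#include <stdio.h>
#include "llama.h"

int main(void) {
    struct llama_context_params params = llama_context_default_params();

    // Deprecated: one call used to create both the weights and the evaluation state.
    //   struct llama_context * ctx = llama_init_from_file("model.bin", params);

    // New: the model is stateless and can be shared between contexts; each
    // context holds the per-session state (KV cache, logits, ...).
    struct llama_model * model = llama_load_model_from_file("model.bin", params);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    struct llama_context * ctx = llama_new_context_with_model(model, params);
    if (ctx == NULL) {
        fprintf(stderr, "failed to create context\n");
        llama_free_model(model);
        return 1;
    }

    // LoRA application is likewise assumed to move from the context to the model:
    //   llama_model_apply_lora_from_file(model, "lora.bin", NULL, /*n_threads=*/4);

    // ... evaluate and sample with ctx ...

    llama_free(ctx);         // release the per-context state
    llama_free_model(model); // release the shared weights ("Missing model memory release" above)
    return 0;
}
```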
File                      Last commit                                                                                      Date
CMakeLists.txt            ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360)    2023-05-13 15:56:40 +03:00
test-double-float.c       all : be more strict about converting float to double (#458)                                    2023-03-28 19:48:20 +03:00
test-grad0.c              train : improved training-from-scratch example (#1652)                                          2023-06-13 22:04:40 +03:00
test-opt.c                ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360)    2023-05-13 15:56:40 +03:00
test-quantize-fns.cpp     build : fix and ignore MSVC warnings (#1889)                                                     2023-06-16 21:23:53 +03:00
test-quantize-perf.cpp    build : fix and ignore MSVC warnings (#1889)                                                     2023-06-16 21:23:53 +03:00
test-sampling.cpp         build : fix and ignore MSVC warnings (#1889)                                                     2023-06-16 21:23:53 +03:00
test-tokenizer-0.cpp      llama : make model stateless and context stateful (llama_state) (#1797)                         2023-06-24 11:47:58 +03:00