llama.cpp/tests
Latest commit 16bc66d947 by slaren, 2023-09-28 22:42:38 +03:00:
llama.cpp : split llama_context_params into model and context params (#3301)
* llama.cpp : split llama_context_params into model and context params

ggml-ci

* fix metal build

* fix freq_base/scale default to model value

* llama-bench : keep the same model between tests when possible

* move n_threads to llama_context_params, add n_threads_batch

* fix mpi build

* remove kv_size(), cuda scratch fixes

* remove low-vram option

* add n_threads_batch to system info, refactor to get_system_info()

* add documentation about --threads-batch to the READMEs

* llama-bench fix

* main : fix rope freq/scale warning

* llama.cpp : add llama_get_model
  common : add llama_tokenize from model

* remove duplicated ctx/model functions

ggml-ci

* cuda : print total VRAM used
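In short, settings that govern how the weights are loaded now live in llama_model_params, while per-session settings, including the relocated n_threads and the new n_threads_batch, live in llama_context_params. A hedged C++ sketch of the resulting call sequence follows; the function and struct names are taken from llama.h at this commit, while the model path and the particular fields shown are illustrative, not exhaustive.

```cpp
// Sketch of the API split introduced by #3301 (abridged field set).
#include "llama.h"

#include <cstdio>

int main() {
    llama_backend_init(false /* numa */);

    // Model-level parameters: how the weights themselves are loaded.
    llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 0;       // the separate low-vram option is gone
    mparams.use_mmap     = true;

    llama_model * model = llama_load_model_from_file("models/7B/ggml-model.gguf", mparams);
    if (model == nullptr) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Context-level parameters: per-session state. n_threads moved here,
    // joined by the new n_threads_batch for prompt/batch evaluation.
    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx           = 2048;
    cparams.n_threads       = 4;    // threads for single-token generation
    cparams.n_threads_batch = 8;    // threads for batch processing
    // rope_freq_base/scale left at their defaults -> taken from the model,
    // per the "fix freq_base/scale default to model value" item above

    llama_context * ctx = llama_new_context_with_model(model, cparams);

    // Helpers no longer need both handles: the model is recoverable from
    // the context via the llama_get_model added in this PR.
    const llama_model * same_model = llama_get_model(ctx);
    (void) same_model;

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

Keeping the two parameter sets separate is also what lets one loaded model be shared across contexts, per the "llama-bench : keep the same model between tests when possible" item above.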
File                          Last commit                                                                     Date
CMakeLists.txt                llama : custom attention mask + parallel decoding + no context swaps (#3228)    2023-09-28 19:04:36 +03:00
test-c.c                      tests : add a C compliance test (#2848)                                         2023-08-30 09:20:26 +03:00
test-double-float.cpp         tests : Fix compilation warnings (Linux/GCC) (#2451)                            2023-08-02 11:06:19 +03:00
test-grad0.cpp                train : finetune LORA (#2632)                                                   2023-09-28 21:40:11 +03:00
test-grammar-parser.cpp       gguf : new file format with flexible meta data (beta) (#2398)                   2023-08-21 23:07:43 +03:00
test-llama-grammar.cpp        gguf : new file format with flexible meta data (beta) (#2398)                   2023-08-21 23:07:43 +03:00
test-opt.cpp                  check C++ code with -Wmissing-declarations (#3184)                              2023-09-15 15:38:27 -04:00
test-quantize-fns.cpp         check C++ code with -Wmissing-declarations (#3184)                              2023-09-15 15:38:27 -04:00
test-quantize-perf.cpp        check C++ code with -Wmissing-declarations (#3184)                              2023-09-15 15:38:27 -04:00
test-rope.cpp                 llama : custom attention mask + parallel decoding + no context swaps (#3228)    2023-09-28 19:04:36 +03:00
test-sampling.cpp             check C++ code with -Wmissing-declarations (#3184)                              2023-09-15 15:38:27 -04:00
test-tokenizer-0-falcon.cpp   llama.cpp : split llama_context_params into model and context params (#3301)    2023-09-28 22:42:38 +03:00
test-tokenizer-0-falcon.py    llama : more tokenizer fixes (#2810)                                            2023-08-27 14:19:19 +03:00
test-tokenizer-0-llama.cpp    llama.cpp : split llama_context_params into model and context params (#3301)    2023-09-28 22:42:38 +03:00
test-tokenizer-0-llama.py     llama : more tokenizer fixes (#2810)                                            2023-08-27 14:19:19 +03:00
test-tokenizer-1-llama.cpp    llama.cpp : split llama_context_params into model and context params (#3301)    2023-09-28 22:42:38 +03:00
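The test-tokenizer-*.cpp entries above were touched by #3301 because tokenization now hangs off the model rather than the context. A minimal sketch in the spirit of test-tokenizer-0-llama.cpp follows, assuming the llama_tokenize(model, text, add_bos) overload that the PR's "common : add llama_tokenize from model" item describes, with a vocab-only GGUF file passed on the command line; the input text is illustrative.

```cpp
// Hedged sketch: load only the vocabulary, then tokenize straight from
// the model, with no llama_context created at any point.
#include "llama.h"
#include "common.h"

#include <cstdio>
#include <string>
#include <vector>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <vocab-file.gguf>\n", argv[0]);
        return 1;
    }

    llama_backend_init(false /* numa */);

    // A tokenizer test needs the vocabulary only, not the tensor data.
    llama_model_params mparams = llama_model_default_params();
    mparams.vocab_only = true;

    llama_model * model = llama_load_model_from_file(argv[1], mparams);
    if (model == nullptr) {
        fprintf(stderr, "failed to load vocab from %s\n", argv[1]);
        return 1;
    }

    // Tokenize from the model handle directly.
    const std::string text = "Hello world";
    std::vector<llama_token> tokens = llama_tokenize(model, text, true /* add_bos */);

    for (llama_token t : tokens) {
        printf("%d ", t);
    }
    printf("\n");

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

Loading with vocab_only skips the tensor data, which keeps such tests fast and lets them run against small vocab-only GGUF files rather than full model weights.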