llama.cpp/common
Commit 16bc66d947 by slaren, 2023-09-28 22:42:38 +03:00
llama.cpp : split llama_context_params into model and context params (#3301)
* llama.cpp : split llama_context_params into model and context params
* fix metal build
* fix freq_base/scale default to model value
* llama-bench : keep the same model between tests when possible
* move n_threads to llama_context_params, add n_threads_batch
* fix mpi build
* remove kv_size(), cuda scratch fixes
* remove low-vram option
* add n_threads_batch to system info, refactor to get_system_info()
* add documentation about --threads-batch to the READMEs
* llama-bench fix
* main : fix rope freq/scale warning
* llama.cpp : add llama_get_model; common : add llama_tokenize from model
* remove duplicated ctx/model functions
* cuda : print total VRAM used
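The headline change splits the old all-in-one llama_context_params into llama_model_params (how the weights are loaded) and llama_context_params (per-session settings such as n_ctx, n_threads, and the new n_threads_batch). A minimal sketch of the resulting call sequence, assuming the llama.h declarations as of this commit (llama_model_default_params, llama_load_model_from_file, llama_new_context_with_model, llama_get_model); the paths and numeric values below are illustrative, not defaults:

```cpp
#include "llama.h"
#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model.gguf>\n", argv[0]);
        return 1;
    }

    llama_backend_init(false); // false = no NUMA optimizations

    // Model params: how the weights are loaded
    // (previously mixed into llama_context_params).
    llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 32;   // illustrative; needs a GPU-enabled build to matter
    mparams.use_mmap     = true;

    llama_model * model = llama_load_model_from_file(argv[1], mparams);
    if (!model) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Context params: per-session settings; several contexts can share one model.
    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx           = 2048; // context window for this session
    cparams.n_threads       = 8;    // threads for single-token generation
    cparams.n_threads_batch = 16;   // threads for batch/prompt processing, new in this PR

    llama_context * ctx = llama_new_context_with_model(model, cparams);
    if (!ctx) {
        fprintf(stderr, "failed to create context\n");
        llama_free_model(model);
        return 1;
    }

    // llama_get_model, added in this change, recovers the model behind a context.
    const llama_model * same_model = llama_get_model(ctx);
    (void) same_model;

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

On the CLI side, the new context-level threading surfaces as the --threads-batch option documented in the examples' READMEs, alongside the existing --threads.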
File                 Last commit                                                                    Date
CMakeLists.txt       train : finetune LORA (#2632)                                                  2023-09-28 21:40:11 +03:00
common.cpp           llama.cpp : split llama_context_params into model and context params (#3301)  2023-09-28 22:42:38 +03:00
common.h             llama.cpp : split llama_context_params into model and context params (#3301)  2023-09-28 22:42:38 +03:00
console.cpp          check C++ code with -Wmissing-declarations (#3184)                            2023-09-15 15:38:27 -04:00
console.h            gguf : new file format with flexible meta data (beta) (#2398)                  2023-08-21 23:07:43 +03:00
grammar-parser.cpp   check C++ code with -Wmissing-declarations (#3184)                            2023-09-15 15:38:27 -04:00
grammar-parser.h     gguf : new file format with flexible meta data (beta) (#2398)                  2023-08-21 23:07:43 +03:00
log.h                examples : replace fprintf to stdout with printf (#3017)                       2023-09-05 15:10:27 -04:00
train.cpp            llama.cpp : split llama_context_params into model and context params (#3301)  2023-09-28 22:42:38 +03:00
train.h              train : finetune LORA (#2632)                                                  2023-09-28 21:40:11 +03:00
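common.cpp and common.h carry the helper side of the change: per the commit message, llama_tokenize gains a model-based variant and the duplicated ctx/model functions are removed. A short sketch of how the two overloads relate, assuming the common.h declarations around this commit (the helper body below is illustrative):

```cpp
#include "common.h"
#include "llama.h"

#include <cstdio>
#include <string>
#include <vector>

static void tokenize_demo(llama_context * ctx) {
    // New in this change: tokenize straight from a model, no context required.
    const llama_model * model = llama_get_model(ctx); // also added in #3301
    std::vector<llama_token> toks =
        llama_tokenize(model, std::string("Hello, world!"), /*add_bos=*/true);

    // The context-based overload remains, now delegating through the model,
    // which is what "remove duplicated ctx/model functions" refers to.
    printf("%zu tokens\n", toks.size());
}
```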