llama.cpp/common

Latest commit: 97af49fa39 by Jhen-Jie Hong (2023-10-06 15:44:24 +03:00)
server : reuse llama_sample_token common util (#3494)
* server : reuse llama_sample_token common function
* common : use n_probs for temperature sampling
File               | Last commit                                                                  | Date
CMakeLists.txt     | train : finetune LORA (#2632)                                                | 2023-09-28 21:40:11 +03:00
common.cpp         | server : reuse llama_sample_token common util (#3494)                        | 2023-10-06 15:44:24 +03:00
common.h           | infill : add new example + extend server API (#3296)                         | 2023-10-02 10:42:02 +03:00
console.cpp        | check C++ code with -Wmissing-declarations (#3184)                           | 2023-09-15 15:38:27 -04:00
console.h          | gguf : new file format with flexible meta data (beta) (#2398)                | 2023-08-21 23:07:43 +03:00
grammar-parser.cpp | check C++ code with -Wmissing-declarations (#3184)                           | 2023-09-15 15:38:27 -04:00
grammar-parser.h   | gguf : new file format with flexible meta data (beta) (#2398)                | 2023-08-21 23:07:43 +03:00
log.h              | build : enable more non-default compiler warnings (#3200)                    | 2023-09-28 17:41:44 -04:00
train.cpp          | llama.cpp : split llama_context_params into model and context params (#3301) | 2023-09-28 22:42:38 +03:00
train.h            | train : finetune LORA (#2632)                                                | 2023-09-28 21:40:11 +03:00