llama.cpp/common
Latest commit c97f01c362 by vvhg1:
infill : add new example + extend server API (#3296)
* vvhg-code-infill (#1)

* infill in separate example (#2)

* reverted changes to main and added infill example

* cleanup

* naming improvement

* make : add missing blank line

* fix missing semicolon

* brought infill up to current main code

* cleanup

---------

Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-10-02 10:42:02 +03:00
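This commit extends the server API for infill (fill-in-the-middle) completion, where the model generates the text between a given prefix and suffix. As a rough illustration of what a request body for such an endpoint could look like — a minimal sketch in Python; the field names `input_prefix`, `input_suffix`, and `n_predict` are assumptions based on the fill-in-the-middle convention, and the exact schema may differ between server versions:

```python
import json

def build_infill_request(prefix: str, suffix: str, n_predict: int = 64) -> str:
    """Serialize a hypothetical infill request: the server is asked to
    generate the text that fits between `prefix` and `suffix`."""
    body = {
        "input_prefix": prefix,   # text before the insertion point (assumed field name)
        "input_suffix": suffix,   # text after the insertion point (assumed field name)
        "n_predict": n_predict,   # cap on generated tokens (assumed field name)
    }
    return json.dumps(body)

# Example: ask the model to fill in a function body.
payload = build_infill_request("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```

The resulting JSON string would be POSTed to the server's infill route; consult the server example's documentation for the authoritative endpoint path and parameters.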
CMakeLists.txt      train : finetune LORA (#2632)                                                  2023-09-28 21:40:11 +03:00
common.cpp          infill : add new example + extend server API (#3296)                           2023-10-02 10:42:02 +03:00
common.h            infill : add new example + extend server API (#3296)                           2023-10-02 10:42:02 +03:00
console.cpp         check C++ code with -Wmissing-declarations (#3184)                             2023-09-15 15:38:27 -04:00
console.h           gguf : new file format with flexible meta data (beta) (#2398)                  2023-08-21 23:07:43 +03:00
grammar-parser.cpp  check C++ code with -Wmissing-declarations (#3184)                             2023-09-15 15:38:27 -04:00
grammar-parser.h    gguf : new file format with flexible meta data (beta) (#2398)                  2023-08-21 23:07:43 +03:00
log.h               build : enable more non-default compiler warnings (#3200)                      2023-09-28 17:41:44 -04:00
train.cpp           llama.cpp : split llama_context_params into model and context params (#3301)   2023-09-28 22:42:38 +03:00
train.h             train : finetune LORA (#2632)                                                  2023-09-28 21:40:11 +03:00