llama.cpp/common
Andrew Godfrey 73bdcb395e
finetune : add -ngl parameter (#3762)
* Add '-ngl' support to finetune.cpp

* Add fprintf in ggml_cuda_op_add

When I tried CUDA offloading during finetuning by following the README, I hit an assert here.
This probably isn't an important case, because inference later warns that you should use f16 or f32 instead when using LoRA.

* Add 'finetune.sh', which currently fails when using GPU

"error: operator (): Finetuning on tensors with type 'f16' is not yet supported"

* Tweak finetune.sh

* Suppress some warnings in ggml.c

* Add f16 implementation to ggml_compute_forward_add_f16_f32

* Add an f16 case to ggml_add_cast_impl and llama_build_lora_finetune_graphs

* finetune.sh: Edit comments

* Add "add_f16_f32_f32_cuda"

* Tweak an error message

* finetune.sh: Add an optional LLAMA_MODEL_DIR variable

* finetune.sh: Add an optional LLAMA_TRAINING_DIR variable
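A minimal sketch of how these two optional variables might be consumed by a wrapper like finetune.sh. The defaults, model filename, and layer count below are illustrative assumptions, not the actual script's values.

```shell
#!/bin/sh
# Illustrative sketch only; the real finetune.sh may differ.
# LLAMA_MODEL_DIR / LLAMA_TRAINING_DIR fall back to local defaults when unset.
MODEL_DIR="${LLAMA_MODEL_DIR:-./models}"
TRAINING_DIR="${LLAMA_TRAINING_DIR:-.}"

# -ngl offloads N layers to the GPU (the flag this PR adds to finetune).
CMD="./bin/finetune --model-base $MODEL_DIR/base-model.gguf --train-data $TRAINING_DIR/train.txt -ngl 32"
echo "$CMD"   # swap echo for: eval "$CMD" to actually launch the run
```

With this pattern, `LLAMA_MODEL_DIR=/data/models ./finetune.sh` overrides the model location without editing the script.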

* train : minor

* tabs to spaces

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
2023-11-01 13:49:04 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| CMakeLists.txt | common : fix mirostat state when using multiple sequences (#3543) | 2023-10-11 22:35:46 +03:00 |
| common.cpp | samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841) | 2023-10-31 20:44:49 +01:00 |
| common.h | sampling : refactor init to use llama_sampling_params (#3696) | 2023-10-20 21:07:23 +03:00 |
| console.cpp | check C++ code with -Wmissing-declarations (#3184) | 2023-09-15 15:38:27 -04:00 |
| console.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| grammar-parser.cpp | ggml : fix rope + llama minor optimizations (#3560) | 2023-10-20 13:02:12 +03:00 |
| grammar-parser.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| log.h | log : disable pid in log filenames | 2023-10-25 10:09:16 +03:00 |
| sampling.cpp | samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841) | 2023-10-31 20:44:49 +01:00 |
| sampling.h | samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841) | 2023-10-31 20:44:49 +01:00 |
| stb_image.h | examples: support LLaVA v1.5 (multimodal model) (#3436) | 2023-10-12 18:23:18 +03:00 |
| train.cpp | finetune : add -ngl parameter (#3762) | 2023-11-01 13:49:04 +02:00 |
| train.h | finetune : add -ngl parameter (#3762) | 2023-11-01 13:49:04 +02:00 |