
export-lora

Apply LoRA adapters to a base model and export the resulting model.

usage: export-lora [options]

options:
  -h, --help                         show this help message and exit
  -m FNAME, --model-base FNAME       model path from which to load base model (default '')
  -o FNAME, --model-out FNAME        path to save exported model (default '')
  -l FNAME, --lora FNAME             apply LoRA adapter
  -s FNAME S, --lora-scaled FNAME S  apply LoRA adapter with user defined scaling S
  -t N, --threads N                  number of threads to use during computation (default: 4)
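Conceptually, applying an adapter with scaling S adds the scaled low-rank update onto the base weights: W' = W + S * (B A), where A and B are the adapter's low-rank matrices. A minimal pure-Python sketch of that arithmetic (illustrative shapes only, not this tool's implementation):

```python
def matmul(X, Y):
    # Naive matrix multiply, sufficient for small illustrative matrices.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, A, B, scale=1.0):
    # Merge one adapter: add the scaled low-rank update B @ A onto W.
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Hypothetical 2x2 base weights and a rank-1 adapter (B: 2x1, A: 1x2).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]

print(apply_lora(W, A, B, scale=2.0))  # base plus twice the update
```

With -l the scale is 1.0; -s lets the user weight the adapter's contribution up or down.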

For example:

./bin/export-lora \
    -m open-llama-3b-v2-q8_0.gguf \
    -o open-llama-3b-v2-q8_0-english2tokipona-chat.gguf \
    -l lora-open-llama-3b-v2-q8_0-english2tokipona-chat-LATEST.bin

Multiple LoRA adapters can be applied by passing multiple -l FNAME or -s FNAME S command-line arguments.
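For instance, two adapters could be merged in a single pass, the second at half strength (the adapter filenames here are hypothetical):

```shell
./bin/export-lora \
    -m open-llama-3b-v2-q8_0.gguf \
    -o open-llama-3b-v2-q8_0-merged.gguf \
    -l first-adapter-LATEST.bin \
    -s second-adapter-LATEST.bin 0.5
```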