llama.cpp/examples
| Name | Last commit message | Date |
| --- | --- | --- |
| baby-llama | ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360) | 2023-05-13 15:56:40 +03:00 |
| benchmark | benchmark-matmul: Print the average of the test results (#1490) | 2023-05-17 16:47:58 +02:00 |
| embedding | define default model path once, sync path with readme (#1366) | 2023-05-16 17:46:34 +02:00 |
| jeopardy | examples : add Jeopardy example (#1168) | 2023-04-28 19:13:33 +03:00 |
| main | define default model path once, sync path with readme (#1366) | 2023-05-16 17:46:34 +02:00 |
| perplexity | define default model path once, sync path with readme (#1366) | 2023-05-16 17:46:34 +02:00 |
| quantize | ggml : remove bit shuffling (#1405) | 2023-05-12 00:23:08 +03:00 |
| quantize-stats | Remove unused n_parts parameter (#1509) | 2023-05-17 22:12:01 +00:00 |
| save-load-state | Remove unused n_parts parameter (#1509) | 2023-05-17 22:12:01 +00:00 |
| alpaca.sh | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107) | 2023-04-22 09:54:33 +03:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | examples : read chat prompts from a template file (#1196) | 2023-05-03 20:58:11 +03:00 |
| chat.sh | If n_predict == -1, generate forever | 2023-03-25 21:51:41 +02:00 |
| CMakeLists.txt | ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360) | 2023-05-13 15:56:40 +03:00 |
| common.cpp | Remove unused n_parts parameter (#1509) | 2023-05-17 22:12:01 +00:00 |
| common.h | Remove unused n_parts parameter (#1509) | 2023-05-17 22:12:01 +00:00 |
| gpt4all.sh | examples : add -n to alpaca and gpt4all scripts (#706) | 2023-04-13 16:03:39 +03:00 |
| Miku.sh | examples : various prompt and example fixes (#1298) | 2023-05-03 18:26:47 +03:00 |
| reason-act.sh | add example of re-act pattern (#583) | 2023-03-29 10:10:24 -05:00 |