llama.cpp/examples
Latest commit: eiery 10f19c1121, 2023-04-22 11:27:05 +03:00
llama : have n_batch default to 512 (#1091)

* set default n_batch to 512 when using BLAS
* spacing
* alternate implementation of setting different n_batch for BLAS
* set n_batch to 512 for all cases
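The commit above amounts to raising the default batch size in the shared parameter struct that the example programs read their settings from. A minimal sketch of that kind of default, assuming a simplified `gpt_params` struct (the field names here are illustrative, not copied verbatim from common.h):

```cpp
#include <cstdint>

// Illustrative sketch, not the actual common.h definition: the change in
// #1091 boils down to the batch-size default becoming 512 for all cases,
// instead of varying depending on whether BLAS is enabled.
struct gpt_params {
    int32_t n_ctx   = 512;  // context window size (tokens)
    int32_t n_batch = 512;  // tokens evaluated per batch during prompt processing
};
```

Because the default lives in the struct initializer, every example that constructs a `gpt_params` picks it up without touching its own argument parsing.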
benchmark/        benchmark : fix result validation in benchmark-q4_0-matmult (#987)  (2023-04-15 08:51:54 +03:00)
embedding/        examples : add missing <ctime> include for time() (#1011)  (2023-04-16 10:13:00 +00:00)
main/             main : evaluate tokens in batches after swapping context (#1014)  (2023-04-21 21:18:09 +03:00)
perplexity/       Show perplexity ETA in hours and minutes (#1096)  (2023-04-21 14:57:57 +02:00)
quantize/         llama : multi-threaded quantization (#1075)  (2023-04-20 20:42:27 +03:00)
quantize-stats/   llama : multi-threaded quantization (#1075)  (2023-04-20 20:42:27 +03:00)
alpaca.sh         examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107)  (2023-04-22 09:54:33 +03:00)
chat-13B.bat      Create chat-13B.bat (#592)  (2023-03-29 20:21:09 +03:00)
chat-13B.sh       Move chat scripts into "./examples"  (2023-03-25 20:37:09 +02:00)
chat.sh           If n_predict == -1, generate forever  (2023-03-25 21:51:41 +02:00)
CMakeLists.txt    Add quantize-stats command for testing quantization (#728)  (2023-04-08 00:09:18 +02:00)
common.cpp        Add LoRA support (#820)  (2023-04-17 17:28:55 +02:00)
common.h          llama : have n_batch default to 512 (#1091)  (2023-04-22 11:27:05 +03:00)
gpt4all.sh        examples : add -n to alpaca and gpt4all scripts (#706)  (2023-04-13 16:03:39 +03:00)
Miku.sh           Fix whitespace, add .editorconfig, add GitHub workflow (#883)  (2023-04-11 19:45:44 +00:00)
reason-act.sh     add example of re-act pattern (#583)  (2023-03-29 10:10:24 -05:00)