llama.cpp/examples
Latest commit: 62cfc54f77 by unbounded (2023-04-08 00:09:18 +02:00)
Add quantize-stats command for testing quantization (#728)

Adds a command that calculates statistics over the errors introduced by
quantization, such as mean squared error, maximum error, and selected percentile
errors, for layer weights. Should be useful for testing quantization improvements.

Exposes some internal state from ggml and llama for testing.
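To illustrate the kind of statistics quantize-stats reports, here is a minimal sketch in Python. It simulates a naive symmetric per-tensor int4 quantizer (not llama.cpp's actual block-wise quantization formats) and computes mean squared error, max error, and percentile errors over the reconstruction error; the function name and the choice of percentiles are illustrative, not taken from the tool.

```python
import numpy as np

def quant_error_stats(weights: np.ndarray, bits: int = 4) -> dict:
    """Quantize, dequantize, and summarize the reconstruction error.

    Uses naive symmetric per-tensor quantization as a stand-in for a
    real quantization scheme.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for 4-bit
    scale = np.max(np.abs(weights)) / qmax     # per-tensor scale factor
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    dequant = q * scale                        # reconstructed weights
    err = np.abs(weights - dequant)            # per-weight absolute error
    return {
        "mse": float(np.mean(err ** 2)),
        "max_error": float(np.max(err)),
        "p95_error": float(np.percentile(err, 95)),
        "p99_error": float(np.percentile(err, 99)),
    }

# Example: statistics for a random weight tensor.
rng = np.random.default_rng(0)
stats = quant_error_stats(rng.normal(size=4096).astype(np.float32))
print(stats)
```

Running the tool over real model layers rather than random data is what makes such statistics useful for comparing quantization schemes.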
embedding/       llama : fix linkage with mingw (#551)                        2023-03-28 21:23:09 +03:00
main/            Do not crash when it has nothing to say. (#796)              2023-04-06 17:59:11 +02:00
perplexity/      llama : fix linkage with mingw (#551)                        2023-03-28 21:23:09 +03:00
quantize/        Fix ggml_init_params in quantize                             2023-03-30 12:28:25 -07:00
quantize-stats/  Add quantize-stats command for testing quantization (#728)   2023-04-08 00:09:18 +02:00
alpaca.sh        Move chat scripts into "./examples"                          2023-03-25 20:37:09 +02:00
chat-13B.bat     Create chat-13B.bat (#592)                                   2023-03-29 20:21:09 +03:00
chat-13B.sh      Move chat scripts into "./examples"                          2023-03-25 20:37:09 +02:00
chat.sh          If n_predict == -1, generate forever                         2023-03-25 21:51:41 +02:00
CMakeLists.txt   Add quantize-stats command for testing quantization (#728)   2023-04-08 00:09:18 +02:00
common.cpp       fix default params for examples/main (#697)                  2023-04-02 04:41:12 +02:00
common.h         main.cpp fixes, refactoring (#571)                           2023-03-28 17:09:55 +03:00
gpt4all.sh       examples : add gpt4all script (#658)                         2023-04-02 10:56:20 +03:00
Miku.sh          miku.sh : add executable bit (#780)                          2023-04-05 18:59:13 +03:00
reason-act.sh    add example of re-act pattern (#583)                         2023-03-29 10:10:24 -05:00