llama.cpp/examples

Latest commit be87b6ed20 by Gary Linscott, 2023-04-14 00:50:42 +03:00:
perplexity : add support for batch size to --perplexity (#407)

* Add support for batch size in perplexity
* Revert "Fix memory allocation issues and seg faults" (reverts commit 4870e455b3)
* Update from merge
* Remove perplexity from main
* Updates
* Update batch size for efficiency
Name           | Last commit                                                   | Date
benchmark      | fix whitespace (#944)                                         | 2023-04-13 16:03:57 +02:00
embedding      | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | 2023-04-11 19:45:44 +00:00
main           | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | 2023-04-11 19:45:44 +00:00
perplexity     | perplexity : add support for batch size to --perplexity (#407) | 2023-04-14 00:50:42 +03:00
quantize       | Add enum llama_ftype, sync ggml_type to model files (#709)    | 2023-04-11 15:03:51 +00:00
quantize-stats | llama : merge llama_internal.h into llama.h                   | 2023-04-13 18:04:45 +03:00
alpaca.sh      | examples : add -n to alpaca and gpt4all scripts (#706)        | 2023-04-13 16:03:39 +03:00
chat-13B.bat   | Create chat-13B.bat (#592)                                    | 2023-03-29 20:21:09 +03:00
chat-13B.sh    | Move chat scripts into "./examples"                           | 2023-03-25 20:37:09 +02:00
chat.sh        | If n_predict == -1, generate forever                          | 2023-03-25 21:51:41 +02:00
CMakeLists.txt | Add quantize-stats command for testing quantization (#728)    | 2023-04-08 00:09:18 +02:00
common.cpp     | common : remove unnecessary includes (#947)                   | 2023-04-13 18:39:25 +03:00
common.h       | Rewrite loading code to try to satisfy everyone:              | 2023-04-10 01:10:46 +02:00
gpt4all.sh     | examples : add -n to alpaca and gpt4all scripts (#706)        | 2023-04-13 16:03:39 +03:00
Miku.sh        | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | 2023-04-11 19:45:44 +00:00
reason-act.sh  | add example of re-act pattern (#583)                          | 2023-03-29 10:10:24 -05:00