llama.cpp/examples
Name              Latest commit                                                                     Date
benchmark/        fix whitespace (#944)                                                             2023-04-13 16:03:57 +02:00
embedding/        Fix whitespace, add .editorconfig, add GitHub workflow (#883)                     2023-04-11 19:45:44 +00:00
main/             Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)    2023-04-14 22:58:43 +03:00
perplexity/       perplexity : add support for batch size to --perplexity (#407)                    2023-04-14 00:50:42 +03:00
quantize/         Add enum llama_ftype, sync ggml_type to model files (#709)                        2023-04-11 15:03:51 +00:00
quantize-stats/   Expose type name from ggml (#970)                                                 2023-04-14 20:05:37 +02:00
alpaca.sh         examples : add -n to alpaca and gpt4all scripts (#706)                            2023-04-13 16:03:39 +03:00
chat-13B.bat      Create chat-13B.bat (#592)                                                        2023-03-29 20:21:09 +03:00
chat-13B.sh       Move chat scripts into "./examples"                                               2023-03-25 20:37:09 +02:00
chat.sh           If n_predict == -1, generate forever                                              2023-03-25 21:51:41 +02:00
CMakeLists.txt    Add quantize-stats command for testing quantization (#728)                        2023-04-08 00:09:18 +02:00
common.cpp        Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)    2023-04-14 22:58:43 +03:00
common.h          Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)    2023-04-14 22:58:43 +03:00
gpt4all.sh        examples : add -n to alpaca and gpt4all scripts (#706)                            2023-04-13 16:03:39 +03:00
Miku.sh           Fix whitespace, add .editorconfig, add GitHub workflow (#883)                     2023-04-11 19:45:44 +00:00
reason-act.sh     add example of re-act pattern (#583)                                              2023-03-29 10:10:24 -05:00