llama.cpp/examples
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| benchmark | Add git-based build information for better issue tracking (#1232) | 2023-05-01 18:23:47 +02:00 |
| embedding | llama : allow 0 as a seed number. (#1275) | 2023-05-02 19:23:44 +03:00 |
| jeopardy | examples : add Jeopardy example (#1168) | 2023-04-28 19:13:33 +03:00 |
| main | llama : allow 0 as a seed number. (#1275) | 2023-05-02 19:23:44 +03:00 |
| perplexity | llama : allow 0 as a seed number. (#1275) | 2023-05-02 19:23:44 +03:00 |
| quantize | Add git-based build information for better issue tracking (#1232) | 2023-05-01 18:23:47 +02:00 |
| quantize-stats | Add git-based build information for better issue tracking (#1232) | 2023-05-01 18:23:47 +02:00 |
| save-load-state | Add git-based build information for better issue tracking (#1232) | 2023-05-01 18:23:47 +02:00 |
| alpaca.sh | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107) | 2023-04-22 09:54:33 +03:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | llama : add session file format and saved sessions in main (#1169) | 2023-04-28 18:59:37 +03:00 |
| chat.sh | If n_predict == -1, generate forever | 2023-03-25 21:51:41 +02:00 |
| CMakeLists.txt | Various fixes to mat_mul benchmark (#1253) | 2023-04-30 12:32:37 +00:00 |
| common.cpp | llama : allow 0 as a seed number. (#1275) | 2023-05-02 19:23:44 +03:00 |
| common.h | common : better default number of threads (#934) | 2023-04-30 21:41:35 +03:00 |
| gpt4all.sh | examples : add -n to alpaca and gpt4all scripts (#706) | 2023-04-13 16:03:39 +03:00 |
| Miku.sh | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | 2023-04-11 19:45:44 +00:00 |
| reason-act.sh | add example of re-act pattern (#583) | 2023-03-29 10:10:24 -05:00 |