llama.cpp/scripts
Latest commit: f9a6364912 by Georgi Gerganov, 2023-05-08 17:41:54 +03:00
llama : require first token to be BOS (#1303)

* llama : require first token to be BOS
* scripts : add ppl-run-all.sh
* perplexity : add BOS for each chunk
* readme : update perplexity values after BOS fix
* perplexity : add clarifying comments
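
The perplexity part of this commit prepends a BOS token to every evaluation chunk rather than only to the first tokens of the text. Below is a minimal, illustrative C++ sketch of that chunking idea; it does not use the real llama.cpp API, and the names `Token`, `BOS_TOKEN`, and `make_chunks`, as well as the BOS id value, are hypothetical assumptions for the example.

```cpp
// Illustrative sketch (not the actual llama.cpp implementation): split a token
// stream into fixed-size chunks and prepend a BOS token to each chunk, as the
// commit message describes for the perplexity computation.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

using Token = int32_t;
static const Token BOS_TOKEN = 1; // assumed BOS id; LLaMA tokenizers commonly use 1

// Split `tokens` into chunks of at most `n_ctx - 1` tokens and prepend BOS to
// each chunk, so every chunk starts from a well-defined initial state.
static std::vector<std::vector<Token>> make_chunks(const std::vector<Token> & tokens, int n_ctx) {
    std::vector<std::vector<Token>> chunks;
    const size_t step = static_cast<size_t>(n_ctx) - 1; // leave one slot for BOS
    for (size_t i = 0; i < tokens.size(); i += step) {
        std::vector<Token> chunk;
        chunk.push_back(BOS_TOKEN); // BOS for each chunk (the fix described in #1303)
        const size_t end = std::min(tokens.size(), i + step);
        chunk.insert(chunk.end(), tokens.begin() + i, tokens.begin() + end);
        chunks.push_back(std::move(chunk));
    }
    return chunks;
}

int main() {
    std::vector<Token> tokens(10);
    for (int i = 0; i < 10; ++i) tokens[i] = 100 + i; // dummy token ids
    for (const auto & chunk : make_chunks(tokens, 4)) {
        for (Token t : chunk) printf("%d ", t);
        printf("\n");
    }
    return 0;
}
```

Reserving one position per chunk for the BOS token keeps each chunk within the context window, which is the general shape of the change; the real perplexity code lives in the llama.cpp examples and differs in detail.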
File                          Last commit message                                                  Date
build-info.cmake              fix build-info.h for git submodules (#1289)                          2023-05-03 02:43:43 +02:00
build-info.h.in               fix build-info.h for git submodules (#1289)                          2023-05-03 02:43:43 +02:00
build-info.sh                 Add git-based build information for better issue tracking (#1232)   2023-05-01 18:23:47 +02:00
ppl-run-all.sh                llama : require first token to be BOS (#1303)                        2023-05-08 17:41:54 +03:00
sync-ggml.sh                  scripts : add helper scripts to synch ggml repo                      2023-04-23 19:57:09 +03:00
verify-checksum-models.py     minor : fix whitespaces (#1302)                                      2023-05-03 20:09:42 +03:00