llama.cpp/.gitignore
Latest commit: f9a6364912 by Georgi Gerganov, 2023-05-08 17:41:54 +03:00

llama : require first token to be BOS (#1303)

* scripts : add ppl-run-all.sh
* perplexity : add BOS for each chunk
* readme : update perplexity values after BOS fix
* perplexity : add clarifying comments

*.o
*.a
.DS_Store
.build/
.cache/
.direnv/
.envrc
.swiftpm
.venv
.vs/
.vscode/
build/
build-em/
build-debug/
build-release/
build-static/
build-cublas/
build-no-accel/
build-sanitize-addr/
build-sanitize-thread/
models/*
*.bin
/main
/quantize
/quantize-stats
/result
/perplexity
/embedding
/benchmark-matmult
/vdot
/Pipfile
build-info.h
arm_neon.h
compile_commands.json
__pycache__
zig-out/
zig-cache/
ppl-*.txt
qnt-*.txt
examples/jeopardy/results.txt
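
A quick way to see how these rules resolve is `git check-ignore`. The sketch below (a minimal illustration, assuming `git` is installed; the sample paths are hypothetical) builds a throwaway repo with a few of the patterns above and asks git which rule matches a given path. Note that `/main` is anchored to the repository root, while `models/*` ignores everything inside `models/` without ignoring the directory entry itself.

```shell
# Create a throwaway repo so we don't touch any real working tree.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .

# A small subset of the patterns from the .gitignore above.
printf '%s\n' '*.bin' '/main' 'models/*' > .gitignore

# -v prints the source file, line number, and pattern that matched.
git check-ignore -v main
git check-ignore -v models/7B/ggml-model.bin
```

`git check-ignore` exits 0 when the path is ignored and 1 when it is not, so it is also handy in scripts to confirm a new pattern behaves as intended before committing it.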