llama.cpp/.gitignore

*.o
*.a
.DS_Store
.build/
.cache/
.direnv/
.envrc
.swiftpm
.venv
.vs/
.vscode/
build/
build-em/
build-debug/
build-release/
build-static/
build-cublas/
build-opencl/
build-no-accel/
build-sanitize-addr/
build-sanitize-thread/
models/*
*.bin
/main
/quantize
/quantize-stats
/result
/perplexity
/embedding
/benchmark-matmult
/vdot
/Pipfile
build-info.h
arm_neon.h
compile_commands.json
__pycache__
zig-out/
zig-cache/
ppl-*.txt
qnt-*.txt
perf-*.txt
examples/jeopardy/results.txt