llama.cpp/examples/quantize

Latest commit: Georgi Gerganov — ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179), 2023-04-25 23:40:51 +03:00

Commit details:
* ggml : add Q8_0 quantization format (rename the old one to Q8_1)
* tests : fix test-quantize-fns
* ggml : finalize Q8_0 implementation
* ggml : use q4_0_q8_0 and q4_2_q8_0
* ggml : fix Q8_0 dot product bug (ARM)
* ggml : Q8_0 unroll x2
* ggml : fix bug - using wrong block type
* ggml : extend quantize_fns_t with "vec_dot_type"
* ggml : fix Q8_0 to use 255 values out of 256
* ggml : fix assert using wrong QK4_2 instead of QK4_3
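The commit notes that Q8_0 uses 255 of the 256 int8 values, which corresponds to a symmetric code range of [-127, 127] with one scale per block. As a minimal sketch of that idea (not ggml's actual implementation — the block size of 32 and the scale formula `d = amax / 127` are assumptions based on the commit description):

```python
import numpy as np

QK8_0 = 32  # assumed block size, matching ggml's QK8_0 constant

def quantize_q8_0(x):
    """Blockwise symmetric 8-bit quantization sketch.

    Each block of QK8_0 floats gets one scale d = amax / 127 and
    int8 codes in [-127, 127] (255 of the 256 possible int8 values)."""
    x = np.asarray(x, dtype=np.float32).reshape(-1, QK8_0)
    amax = np.abs(x).max(axis=1, keepdims=True)   # per-block absolute max
    d = amax / 127.0                              # per-block scale
    inv_d = np.where(d > 0, 1.0 / d, 0.0)         # avoid division by zero
    q = np.clip(np.round(x * inv_d), -127, 127).astype(np.int8)
    return d, q

def dequantize_q8_0(d, q):
    """Inverse: multiply codes by the per-block scale."""
    return (d * q.astype(np.float32)).reshape(-1)
```

The symmetric range is what makes the ARM NEON dot product in the commit convenient: two Q8_0 blocks can be multiplied as plain int8 integer products, with the two scales applied once per block.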
Contents:
- CMakeLists.txt — llama : fix linkage with mingw (#551), 2023-03-28 21:23:09 +03:00
- quantize.cpp — ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179), 2023-04-25 23:40:51 +03:00
- README.md — Overhaul the examples structure, 2023-03-25 20:26:40 +02:00

quantize

TODO
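The README body is still a TODO. As a hedged sketch of how the tool was typically invoked around this point in the project's history (the model paths are placeholders, and the trailing numeric quantization-type code is an assumption about the era's CLI):

```shell
# Hypothetical invocation; paths are placeholders, not real files.
# Usage was roughly: ./quantize <f16-model> <output-model> <type>
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
```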