llama.cpp/examples/quantize
Latest commit 74d4cfa343 by Kerfuffle, 2023-06-13 04:23:23 -06:00: Allow "quantizing" to f16 and f32 (#1787)

* Allow "quantizing" to f16 and f32
* Fix an issue where quantizing didn't respect LLAMA_NO_K_QUANTS
* Add brief help to the list of quantization types in the quantize tool
* Ignore case for quantization type arguments in the quantize tool
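As a rough illustration of the behavior this commit describes (case-insensitive quantization type arguments, with f16 and f32 available as "quantization" targets), here is a minimal C++ sketch. The FTYPE_BY_NAME table and the parse_ftype helper are illustrative names, not the actual code in quantize.cpp; the llama_ftype enum values are the ones declared in llama.h.

```cpp
// Minimal sketch: case-insensitive lookup from a type-name argument to a
// llama_ftype value, with f16 and f32 listed alongside the quantized formats.
// FTYPE_BY_NAME and parse_ftype are illustrative, not the quantize.cpp code.
#include <algorithm>
#include <cctype>
#include <string>
#include <unordered_map>

#include "llama.h"

static const std::unordered_map<std::string, llama_ftype> FTYPE_BY_NAME = {
    { "f32",  LLAMA_FTYPE_ALL_F32     },  // no quantization, 32-bit floats
    { "f16",  LLAMA_FTYPE_MOSTLY_F16  },  // no quantization, 16-bit floats
    { "q4_0", LLAMA_FTYPE_MOSTLY_Q4_0 },
    { "q4_1", LLAMA_FTYPE_MOSTLY_Q4_1 },
    { "q8_0", LLAMA_FTYPE_MOSTLY_Q8_0 },
};

// Accepts the type name in any case ("Q4_0", "f16", ...) and reports whether
// it names a known target; on success the matching ftype is written to out.
static bool parse_ftype(const std::string & arg, llama_ftype & out) {
    std::string lowered = arg;
    std::transform(lowered.begin(), lowered.end(), lowered.begin(),
                   [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    const auto it = FTYPE_BY_NAME.find(lowered);
    if (it == FTYPE_BY_NAME.end()) {
        return false;
    }
    out = it->second;
    return true;
}
```

With a table like this, parse_ftype("F16", ftype) and parse_ftype("f16", ftype) resolve to the same target, and f32/f16 become ordinary entries next to the quantized formats, which is what lets the tool "quantize" to them.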
File            Last commit message                                                  Last commit date
CMakeLists.txt  Add git-based build information for better issue tracking (#1232)   2023-05-01 18:23:47 +02:00
quantize.cpp    Allow "quantizing" to f16 and f32 (#1787)                            2023-06-13 04:23:23 -06:00
README.md       Overhaul the examples structure                                      2023-03-25 20:26:40 +02:00

README.md: quantize

TODO