llama.cpp/common
Latest commit 24a447e20a by automaticcat: ggml : add ggml_cpu_has_avx_vnni() (#4589)
* feat: add avx_vnni based on intel documents

* ggml: add avx vnni based on intel document

* llama: add avx vnni information display

* docs: add more details about using oneMKL and oneAPI for intel processors

* Update ggml.c

Fix indentation update

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Committed: 2023-12-30 10:07:48 +02:00
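
For reference, the probe added in #4589 follows the existing ggml_cpu_has_* pattern and returns a non-zero int when the running CPU supports AVX-VNNI. Below is a minimal sketch of how a downstream program might report it, assuming the declarations live in ggml.h at this revision and the binary is linked against ggml:

```c
// Sketch: print a few ggml CPU feature probes, including the
// AVX-VNNI one added in #4589. Assumes ggml.h is on the include
// path and the program is linked against the ggml library.
#include <stdio.h>
#include "ggml.h"

int main(void) {
    printf("AVX:      %d\n", ggml_cpu_has_avx());
    printf("AVX2:     %d\n", ggml_cpu_has_avx2());
    printf("AVX-VNNI: %d\n", ggml_cpu_has_avx_vnni());
    return 0;
}
```

The "llama: add avx vnni information display" part of the commit wires the same flag into the system-info line printed at startup, which is presumably why common.cpp appears in the listing below with this commit.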
| File | Last commit message | Last commit date |
|---|---|---|
| CMakeLists.txt | cmake : fix ld warning duplicate libraries libllama.a (#4671) | 2023-12-29 16:39:15 +02:00 |
| base64.hpp | llava : expose as a shared library for downstream projects (#3613) | 2023-11-07 00:36:23 +03:00 |
| build-info.cpp.in | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| common.cpp | ggml : add ggml_cpu_has_avx_vnni() (#4589) | 2023-12-30 10:07:48 +02:00 |
| common.h | lookup : add prompt lookup decoding example (#4484) | 2023-12-22 18:05:56 +02:00 |
| console.cpp | check C++ code with -Wmissing-declarations (#3184) | 2023-09-15 15:38:27 -04:00 |
| console.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| grammar-parser.cpp | grammar-parser : fix typo (#4318) | 2023-12-04 09:57:35 +02:00 |
| grammar-parser.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| log.h | english : use `typos` to fix comments and logs (#4354) | 2023-12-12 11:53:36 +02:00 |
| sampling.cpp | server : allow to specify custom prompt for penalty calculation (#3727) | 2023-12-23 11:31:49 +02:00 |
| sampling.h | server : allow to specify custom prompt for penalty calculation (#3727) | 2023-12-23 11:31:49 +02:00 |
| stb_image.h | examples: support LLaVA v1.5 (multimodal model) (#3436) | 2023-10-12 18:23:18 +03:00 |
| train.cpp | ggml : remove n_dims from ggml_tensor (#4469) | 2023-12-14 16:52:08 +01:00 |
| train.h | sync : ggml (backend v2) (#3912) | 2023-11-13 14:16:23 +02:00 |