llama.cpp/examples
Commit b853d45601 by zrm (2023-06-26 20:57:59 +03:00)
ggml : add NUMA support (#1556)
* detect NUMA systems and pin work threads to nodes (linux); see the first sketch below the commit log

* disable mmap prefetch/readahead for NUMA systems (second sketch below)

* avoid sending finalize op to thread pool if it does nothing

* silence robot

* fix args

* make --numa a param

* relax the overly aggressive enforcement of the recommendation that n_nodes evenly divide n_threads

* lower synchronization overhead

* statically allocate

* move numa state to g_state

* add description for --numa

* ggml : minor style changes

* ggml : minor style + try fix sanitizer build

* llama : allow to initialize backend with NUMA support (third sketch below, together with the --numa param)

* llama : avoid ggml include in llama-util.h

* ggml : style / formatting

* ggml : fix handling of ops with n_threads > n_tasks > 1

* server : utilize numa parameter

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
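
The first bullet maps to concrete Linux APIs. The sketch below counts NUMA nodes by probing sysfs and pins the calling thread to every CPU of one node. This is a minimal illustration, not ggml's actual code; the helpers numa_node_count and pin_thread_to_node are invented names for this example.

```c
// Minimal sketch (not ggml's actual code): count NUMA nodes via sysfs and
// pin the calling thread to every CPU of a chosen node.
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <dirent.h>
#include <unistd.h>

// Count nodes by probing /sys/devices/system/node/node0, node1, ...
static unsigned numa_node_count(void) {
    unsigned n = 0;
    char path[64];
    for (;;) {
        snprintf(path, sizeof(path), "/sys/devices/system/node/node%u", n);
        if (access(path, F_OK) != 0) break;
        n++;
    }
    return n;
}

// Build a cpu_set_t from the cpuN entries under one node and apply it to
// the calling thread (pid 0 means "self" for sched_setaffinity).
static int pin_thread_to_node(unsigned node) {
    char path[64];
    snprintf(path, sizeof(path), "/sys/devices/system/node/node%u", node);
    DIR * dir = opendir(path);
    if (dir == NULL) return -1;

    cpu_set_t set;
    CPU_ZERO(&set);
    struct dirent * ent;
    while ((ent = readdir(dir)) != NULL) {
        unsigned cpu;
        char rest[8];
        // match "cpu<N>" exactly; skips "cpulist", "cpumap", etc.
        if (sscanf(ent->d_name, "cpu%u%7s", &cpu, rest) == 1) {
            CPU_SET(cpu, &set);
        }
    }
    closedir(dir);
    return sched_setaffinity(0, sizeof(set), &set);
}

int main(void) {
    unsigned n_nodes = numa_node_count();
    if (n_nodes > 1) pin_thread_to_node(0); // per worker: thread_id % n_nodes
    return 0;
}
```

Each worker thread would pin itself with something like pin_thread_to_node(thread_id % n_nodes), which is why the log talks about n_nodes evenly dividing n_threads.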
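
The second bullet concerns how the model file is mapped. A sketch under the assumption that the non-NUMA path prefetches eagerly (MAP_POPULATE plus MADV_WILLNEED) while the NUMA path asks the kernel not to read ahead, so pages fault in on the node of the thread that first touches them; map_model is an illustrative name, not the mmap wrapper in llama-util.h:

```c
// Sketch: map a model file read-only, disabling eager prefetch/readahead
// on NUMA systems so pages land on the node that first touches them.
#define _DEFAULT_SOURCE
#include <stdbool.h>
#include <stddef.h>
#include <sys/mman.h>

static void * map_model(int fd, size_t len, bool numa) {
    int flags = MAP_SHARED;
#ifdef MAP_POPULATE
    if (!numa) {
        flags |= MAP_POPULATE; // eager page-in is fine on non-NUMA systems
    }
#endif
    void * addr = mmap(NULL, len, PROT_READ, flags, fd, 0);
    if (addr == MAP_FAILED) {
        return NULL;
    }
    if (numa) {
        madvise(addr, len, MADV_RANDOM);   // hint: no readahead
    } else {
        madvise(addr, len, MADV_WILLNEED); // hint: prefetch the whole file
    }
    return addr;
}
```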
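
The --numa param and the backend-init bullet meet in the public API. Assuming the post-#1556 prototype llama_init_backend(bool numa) (in the real tree the flag is parsed in common.cpp into gpt_params; the loop here is a simplification), an example would wire it through roughly like this:

```c
// Sketch: wiring a --numa CLI flag into backend initialization.
// Assumes the post-#1556 prototype: void llama_init_backend(bool numa);
#include <stdbool.h>
#include <string.h>

#include "llama.h"

int main(int argc, char ** argv) {
    bool numa = false;
    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "--numa") == 0) {
            numa = true; // opt in: pin threads to nodes, skip mmap prefetch
        }
    }
    llama_init_backend(numa); // NUMA setup happens once, process-wide
    // ... load the model and run inference as usual ...
    return 0;
}
```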

Name | Last commit | Date
baby-llama | build : fix and ignore MSVC warnings (#1889) | 2023-06-16 21:23:53 +03:00
benchmark | build : fix and ignore MSVC warnings (#1889) | 2023-06-16 21:23:53 +03:00
embedding | ggml : add NUMA support (#1556) | 2023-06-26 20:57:59 +03:00
jeopardy | hooks : setting up flake8 and pre-commit hooks (#1681) | 2023-06-17 13:32:48 +03:00
main | ggml : add NUMA support (#1556) | 2023-06-26 20:57:59 +03:00
metal | examples : fix examples/metal (#1920) | 2023-06-18 10:52:10 +03:00
perplexity | ggml : add NUMA support (#1556) | 2023-06-26 20:57:59 +03:00
quantize | ggml : add NUMA support (#1556) | 2023-06-26 20:57:59 +03:00
quantize-stats | llama : make model stateless and context stateful (llama_state) (#1797) | 2023-06-24 11:47:58 +03:00
save-load-state | llama : make model stateless and context stateful (llama_state) (#1797) | 2023-06-24 11:47:58 +03:00
server | ggml : add NUMA support (#1556) | 2023-06-26 20:57:59 +03:00
simple | ggml : add NUMA support (#1556) | 2023-06-26 20:57:59 +03:00
train-text-from-scratch | llama : make model stateless and context stateful (llama_state) (#1797) | 2023-06-24 11:47:58 +03:00
alpaca.sh | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107) | 2023-04-22 09:54:33 +03:00
chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00
chat-13B.sh | examples : read chat prompts from a template file (#1196) | 2023-05-03 20:58:11 +03:00
chat-persistent.sh | chat-persistent.sh : use bracket expressions in grep (#1564) | 2023-05-24 09:16:22 +03:00
chat-vicuna.sh | examples : add chat-vicuna.sh (#1854) | 2023-06-15 21:05:53 +03:00
chat.sh | If n_predict == -1, generate forever | 2023-03-25 21:51:41 +02:00
CMakeLists.txt | llama : fix kv_cache n init (close #1903) | 2023-06-17 19:31:20 +03:00
common.cpp | ggml : add NUMA support (#1556) | 2023-06-26 20:57:59 +03:00
common.h | ggml : add NUMA support (#1556) | 2023-06-26 20:57:59 +03:00
gpt4all.sh | examples : add -n to alpaca and gpt4all scripts (#706) | 2023-04-13 16:03:39 +03:00
Miku.sh | examples : various prompt and example fixes (#1298) | 2023-05-03 18:26:47 +03:00
reason-act.sh | add example of re-act pattern (#583) | 2023-03-29 10:10:24 -05:00