llama.cpp/examples
bmwl f486f6e1e5
ggml : add numa options (#5377)
* Added numa options to allow finer grained control as well as plumbing for a new mirror mode that will require numa.h

* Reverted Makefile

* Fixed include

* Removed sched.h from ggml.h, moved ggml_get_numa_affinity into ggml.c, removed trailing whitespace and fixed up a few inconsistent variables

* removed trailing whitespace

* Added numa options to allow finer grained control as well as plumbing for a new mirror mode that will require numa.h

* Reverting Makefile

* Fixed a number of issues with the move from BOOL to ggml_numa_strategies. Added a note about mirror mode not being implemented yet

* Removing MIRROR_MODE code for this PR

* Removing last bit of MIRROR_MODE code for this PR

* Removing unneeded branch in the server.cpp example; moving get_numa_affinity and making it static

* Fixed lingering llama_backend_init() bool calls in tests and examples

* Remove enum llama_numa_strategies

* Revert bad merge with dynatemp flags

* add missing enum ggml_numa_strategies declaration and revert a sync problem with master

* add missing enum ggml_numa_strategies declaration

* fixed ggml_init_numa variable

* Update ggml.h

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update READMEs with info about NUMA flags; change the INTERLEAVE strategy name to DISTRIBUTE everywhere; implement the improved distribution strategy from @rankaiyx; fix a spelling mistake; un-merge some bad merges

* split NUMA init out from llama_backend_init into a new llama_numa_init; updated all code paths and samples (see the init sketch after this list)

* Fix up some boolean vs enum comparisons

* Added #ifdefs for non-Linux OSes that don't have the cpu_set_t datatype

* Update ggml.h

Align enum values

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml.c

Remove whitespace

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml.c

align parameters

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update examples/server/server.cpp

remove whitespace and align brace

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update common/common.cpp

Remove whitespace and align brace

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* unified ggml_numa_strategy enum and fixed text alignment in server.cpp example

* Update ggml.c

simplified return for platforms without NUMA support

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* removed redundant else from CLI argument processing of --numa

* whitespace

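For reference, a minimal sketch of the new initialization sequence this change introduces (assuming the llama_numa_init() / enum ggml_numa_strategy declarations described in the bullets above; check llama.h and ggml.h for the authoritative signatures):

    #include "llama.h"

    int main(void) {
        // llama_backend_init() no longer takes a bool numa flag;
        // NUMA setup is now a separate, optional call.
        llama_backend_init();

        // Pick a strategy from enum ggml_numa_strategy, e.g. DISTRIBUTE
        // (what the examples select with --numa distribute).
        llama_numa_init(GGML_NUMA_STRATEGY_DISTRIBUTE);

        // ... load a model and run inference as before ...

        llama_backend_free();
        return 0;
    }
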
---------

Co-authored-by: root <root@nenya.lothlorien.ca>
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-02-16 11:31:07 +02:00
..
baby-llama ggml : change ggml_scale to take a float instead of tensor (#4573) 2023-12-21 23:20:49 +02:00
batched ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
batched-bench ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
batched.swift ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
beam-search ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
benchmark 2-bit quantizations (#4897) 2024-01-14 09:45:56 +02:00
convert-llama2c-to-ggml ggml : remove n_dims from ggml_tensor (#4469) 2023-12-14 16:52:08 +01:00
embedding ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
export-lora sync : ggml (#5452) 2024-02-12 09:16:06 +02:00
finetune finetune : rename feed-forward tensors (w1/w2/w3) (#4839) 2024-02-13 15:15:42 +02:00
gguf gguf : simplify example dependencies 2023-12-21 23:08:14 +02:00
imatrix ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
infill ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
jeopardy parallel : add option to load external prompt file (#3416) 2023-10-06 16:16:38 +03:00
llama-bench ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
llama.android ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
llama.swiftui ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
llava ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
lookahead ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
lookup ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
main ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
main-cmake-pkg main-cmake-pkg : fix build issue (#4665) 2023-12-29 16:18:20 +02:00
parallel ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
passkey ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
perplexity ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
quantize ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
quantize-stats refactor : switch to emplace_back to avoid extra object (#5291) 2024-02-03 13:23:37 +02:00
save-load-state llama : minimize size used for state save/load (#4820) 2024-01-13 18:29:43 +02:00
server ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
simple ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
speculative ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
sycl [SYCL] update guide of SYCL backend (#5254) 2024-02-02 15:53:27 +08:00
tokenize ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
train-text-from-scratch finetune : rename feed-forward tensors (w1/w2/w3) (#4839) 2024-02-13 15:15:42 +02:00
alpaca.sh alpaca.sh : update model file name (#2074) 2023-07-06 19:17:50 +03:00
base-translate.sh examples : improve base-translate.sh script (#4783) 2024-01-06 11:40:24 +02:00
chat-13B.bat Create chat-13B.bat (#592) 2023-03-29 20:21:09 +03:00
chat-13B.sh examples : read chat prompts from a template file (#1196) 2023-05-03 20:58:11 +03:00
chat-persistent.sh llama : fix session saving/loading (#3400) 2023-10-03 21:04:01 +03:00
chat-vicuna.sh examples : add chat-vicuna.sh (#1854) 2023-06-15 21:05:53 +03:00
chat.sh main : log file (#2748) 2023-08-30 09:29:32 +03:00
CMakeLists.txt gguf : add python reader example (#5216) 2024-02-13 19:56:38 +02:00
gpt4all.sh examples : add -n to alpaca and gpt4all scripts (#706) 2023-04-13 16:03:39 +03:00
json-schema-to-grammar.py chmod : make scripts executable (#2675) 2023-08-23 17:29:09 +03:00
llama.vim llama.vim : added api key support (#5090) 2024-01-23 08:51:27 +02:00
llama2-13b.sh gitignore : changes for Poetry users + chat examples (#2284) 2023-07-21 13:53:27 +03:00
llama2.sh gitignore : changes for Poetry users + chat examples (#2284) 2023-07-21 13:53:27 +03:00
llm.vim llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) 2023-08-30 09:50:55 +03:00
make-ggml.py make-ggml.py : compatibility with more models and GGUF (#3290) 2023-09-27 19:25:12 +03:00
Miku.sh MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) 2023-07-21 11:13:18 +03:00
pydantic-models-to-grammar-examples.py examples : make pydantic scripts pass mypy and support py3.8 (#5099) 2024-01-25 14:51:24 -05:00
pydantic_models_to_grammar.py examples : make pydantic scripts pass mypy and support py3.8 (#5099) 2024-01-25 14:51:24 -05:00
reason-act.sh chmod : make scripts executable (#2675) 2023-08-23 17:29:09 +03:00
server-llama2-13B.sh chmod : make scripts executable (#2675) 2023-08-23 17:29:09 +03:00