llama.cpp/examples
Latest commit 5656d10599 by Evan Miller
mpi : add support for distributed inference via MPI (#2099)
* MPI support, first cut

* fix warnings, update README

* fixes

* wrap includes

* PR comments

* Update CMakeLists.txt

* Add GH workflow, fix test

* Add info to README

* mpi : trying to move more MPI stuff into ggml-mpi (WIP) (#2099)

* mpi : add names for layer inputs + prep ggml_mpi_graph_compute()

* mpi : move all MPI logic into ggml-mpi

Not tested yet

* mpi : various fixes - communication now works but results are wrong

* mpi : fix output tensor after MPI compute (still not working)

* mpi : fix inference

* mpi : minor

* Add OpenMPI to GH action

* [mpi] continue-on-error: true

* mpi : fix after master merge

* [mpi] Link MPI C++ libraries to fix OpenMPI

* tests : fix new llama_backend API

* [mpi] use MPI_INT32_T

* mpi : factor out recv / send in functions and reuse

* mpi : extend API to allow usage with outer backends (e.g. Metal)

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-10 18:49:56 +03:00
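To make the design in the commit bullets above concrete: the model's layers are split into contiguous slices, one slice per MPI rank, activations are handed from rank to rank, token IDs travel as 32-bit integers (hence the switch to MPI_INT32_T), and the raw MPI_Send/MPI_Recv calls are factored into small reusable helpers. The program below is a minimal, self-contained sketch of that pipeline pattern under those assumptions; it is not the ggml-mpi implementation, and every identifier in it (send_activations, recv_activations, eval_layer, the slicing arithmetic) is invented for illustration.

```c
// Hypothetical sketch of the pipeline scheme described in the commit bullets:
// each MPI rank owns a contiguous slice of layers, activations are passed
// rank-to-rank, and the send/recv logic is factored into small helpers.
// None of these names come from ggml-mpi.
#include <mpi.h>
#include <stdint.h>
#include <stdio.h>

#define N_EMBD  8   // toy embedding size
#define N_LAYER 4   // toy layer count

// Factored-out helpers (cf. "mpi : factor out recv / send in functions and reuse").
static void send_activations(const float *buf, int n, int dst) {
    MPI_Send(buf, n, MPI_FLOAT, dst, /*tag=*/0, MPI_COMM_WORLD);
}
static void recv_activations(float *buf, int n, int src) {
    MPI_Recv(buf, n, MPI_FLOAT, src, /*tag=*/0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

// Stand-in for evaluating one transformer layer on this rank's slice.
static void eval_layer(float *x, int layer) {
    for (int i = 0; i < N_EMBD; i++) x[i] += 1.0f;  // dummy work
    (void)layer;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Share the input token IDs with every rank as 32-bit ints
    // (cf. "[mpi] use MPI_INT32_T").
    int32_t tokens[4] = {1, 2, 3, 4};
    MPI_Bcast(tokens, 4, MPI_INT32_T, 0, MPI_COMM_WORLD);

    // Split the layers into contiguous slices, one slice per rank.
    const int per_rank = (N_LAYER + size - 1) / size;
    const int first    = rank * per_rank;
    const int last     = first + per_rank < N_LAYER ? first + per_rank : N_LAYER;

    float x[N_EMBD] = {0};
    if (rank > 0) recv_activations(x, N_EMBD, rank - 1);  // wait for the previous slice
    for (int l = first; l < last; l++) eval_layer(x, l);
    if (rank < size - 1) {
        send_activations(x, N_EMBD, rank + 1);            // hand off to the next slice
    } else {
        printf("rank %d produced final activation x[0] = %.1f\n", rank, x[0]);
    }

    MPI_Finalize();
    return 0;
}
```

Built with `mpicc mpi-pipeline.c -o mpi-pipeline` and launched with `mpirun -np 2 ./mpi-pipeline`, only the last rank prints, reporting the activation after all toy layers have run. The real integration keeps this shape but moves the communication into ggml_mpi_graph_compute(); the final bullet about outer backends suggests that API was then split further, presumably separating the rank-to-rank handoff from graph evaluation so a backend such as Metal could still execute each node's portion of the graph.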
Name                     | Last commit message                                                      | Last commit date
baby-llama               | ggml : change ggml_graph_compute() API to not require context (#1999)    | 2023-07-07 19:24:01 +03:00
benchmark                | ggml : change ggml_graph_compute() API to not require context (#1999)    | 2023-07-07 19:24:01 +03:00
embd-input               | mpi : add support for distributed inference via MPI (#2099)              | 2023-07-10 18:49:56 +03:00
embedding                | mpi : add support for distributed inference via MPI (#2099)              | 2023-07-10 18:49:56 +03:00
jeopardy                 | hooks : setting up flake8 and pre-commit hooks (#1681)                   | 2023-06-17 13:32:48 +03:00
main                     | mpi : add support for distributed inference via MPI (#2099)              | 2023-07-10 18:49:56 +03:00
metal                    | ggml : change ggml_graph_compute() API to not require context (#1999)    | 2023-07-07 19:24:01 +03:00
perplexity               | mpi : add support for distributed inference via MPI (#2099)              | 2023-07-10 18:49:56 +03:00
quantize                 | mpi : add support for distributed inference via MPI (#2099)              | 2023-07-10 18:49:56 +03:00
quantize-stats           | ggml : generalize quantize_fns for simpler FP16 handling (#1237)         | 2023-07-05 19:13:06 +03:00
save-load-state          | llama : make model stateless and context stateful (llama_state) (#1797)  | 2023-06-24 11:47:58 +03:00
server                   | mpi : add support for distributed inference via MPI (#2099)              | 2023-07-10 18:49:56 +03:00
simple                   | mpi : add support for distributed inference via MPI (#2099)              | 2023-07-10 18:49:56 +03:00
train-text-from-scratch  | ggml : change ggml_graph_compute() API to not require context (#1999)    | 2023-07-07 19:24:01 +03:00
alpaca.sh                | alpaca.sh : update model file name (#2074)                               | 2023-07-06 19:17:50 +03:00
chat-13B.bat             | Create chat-13B.bat (#592)                                               | 2023-03-29 20:21:09 +03:00
chat-13B.sh              | examples : read chat prompts from a template file (#1196)                | 2023-05-03 20:58:11 +03:00
chat-persistent.sh       | chat-persistent.sh : use bracket expressions in grep (#1564)             | 2023-05-24 09:16:22 +03:00
chat-vicuna.sh           | examples : add chat-vicuna.sh (#1854)                                    | 2023-06-15 21:05:53 +03:00
chat.sh                  | If n_predict == -1, generate forever                                     | 2023-03-25 21:51:41 +02:00
CMakeLists.txt           | llama : support input embeddings directly (#1910)                        | 2023-06-28 18:53:37 +03:00
common.cpp               | main : escape prompt prefix/suffix (#2151)                               | 2023-07-09 11:56:18 +03:00
common.h                 | server: add option to output probabilities for completion (#1962)        | 2023-07-03 00:38:44 +03:00
gpt4all.sh               | examples : add -n to alpaca and gpt4all scripts (#706)                   | 2023-04-13 16:03:39 +03:00
Miku.sh                  | examples : various prompt and example fixes (#1298)                      | 2023-05-03 18:26:47 +03:00
reason-act.sh            | add example of re-act pattern (#583)                                     | 2023-03-29 10:10:24 -05:00