llama.cpp/tests
Evan Miller 5656d10599
mpi : add support for distributed inference via MPI (#2099)
* MPI support, first cut

* fix warnings, update README

* fixes

* wrap includes

* PR comments

* Update CMakeLists.txt

* Add GH workflow, fix test

* Add info to README

* mpi : trying to move more MPI stuff into ggml-mpi (WIP) (#2099)

* mpi : add names for layer inputs + prep ggml_mpi_graph_compute() (naming sketched below, after this log)

* mpi : move all MPI logic into ggml-mpi

Not tested yet

* mpi : various fixes - communication now works but results are wrong

* mpi : fix output tensor after MPI compute (still not working)

* mpi : fix inference

* mpi : minor

* Add OpenMPI to GH action

* [mpi] continue-on-error: true

* mpi : fix after master merge

* [mpi] Link MPI C++ libraries to fix OpenMPI

* tests : fix new llama_backend API

* [mpi] use MPI_INT32_T

* mpi : factor out recv / send in functions and reuse (helpers sketched below, after this log)

* mpi : extend API to allow usage with outer backends (e.g. Metal) (usage sketched below, after this log)

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-10 18:49:56 +03:00
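
Editor's note on the "add names for layer inputs" step: tagging the per-layer activation tensors lets ggml-mpi find the slice boundaries in the built graph by name, without extra plumbing from the caller. A minimal sketch using the real ggml_set_name() / ggml_graph_get_tensor() API; the "layer_inp_%d" format string is an assumption for illustration, not confirmed by the log:

```c
#include <stdio.h>
#include "ggml.h"

// Tag the input activation of layer `il` so ggml-mpi can look it up later.
// The "layer_inp_%d" naming scheme is assumed here for illustration.
static void name_layer_input(struct ggml_tensor * t, int il) {
    char name[64];
    snprintf(name, sizeof(name), "layer_inp_%d", il);
    ggml_set_name(t, name);
}

// With the tensors named, a slice boundary can be recovered from the
// finished graph, e.g.:
//   struct ggml_tensor * inp = ggml_graph_get_tensor(gf, "layer_inp_8");
```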
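The "factor out recv / send" and "use MPI_INT32_T" items suggest typed point-to-point helpers reused for every boundary tensor. A sketch under the assumption that only I32 (token ids) and F32 (activations) tensors cross rank boundaries; the helper names mirror the log's wording but the exact signatures are assumptions. MPI_INT32_T is the fixed-width MPI type matching GGML_TYPE_I32, whereas plain MPI_INT is not guaranteed to be 32 bits:

```c
#include <mpi.h>
#include "ggml.h"

// Map a ggml tensor type to the matching MPI datatype.
static MPI_Datatype ggml_mpi_dtype(const struct ggml_tensor * t) {
    switch (t->type) {
        case GGML_TYPE_I32: return MPI_INT32_T;
        case GGML_TYPE_F32: return MPI_FLOAT;
        default: GGML_ASSERT(false && "tensor type not supported over MPI");
    }
    return MPI_DATATYPE_NULL; // unreachable
}

// Factored-out send/recv, reused for every tensor that crosses a rank
// boundary (hypothetical names following the log's description).
static void ggml_mpi_tensor_send(const struct ggml_tensor * t, int dst) {
    MPI_Send(t->data, (int) ggml_nelements(t), ggml_mpi_dtype(t),
             dst, 0, MPI_COMM_WORLD);
}

static void ggml_mpi_tensor_recv(struct ggml_tensor * t, int src) {
    MPI_Recv(t->data, (int) ggml_nelements(t), ggml_mpi_dtype(t),
             src, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
```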
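The final "extend API" step splits the MPI work into a pass before and a pass after the actual graph compute, so the middle part can run on any backend (plain CPU or Metal). A usage sketch, assuming the pre/post function names described by the log and the plan/compute API from #1999 (also visible in the file listing below); the exact ggml-mpi signatures are assumptions:

```c
#include <stdint.h>
#include <stdlib.h>
#include "ggml.h"
#include "ggml-mpi.h" // MPI build assumed (GGML_USE_MPI)

// Evaluate one graph with the MPI exchange factored out of the compute.
static void eval_graph(struct ggml_mpi_context * ctx_mpi,
                       struct ggml_cgraph * gf, int n_layer, int n_threads) {
    // pre: non-root ranks receive their input slice and the graph is
    // restricted to this rank's layer range (assumed semantics)
    ggml_mpi_graph_compute_pre(ctx_mpi, gf, n_layer);

    // the middle is backend-agnostic: CPU compute via the #1999
    // plan/compute API shown here; a Metal encoder slots in the same way
    struct ggml_cplan plan = ggml_graph_plan(gf, n_threads);
    uint8_t * work = NULL;
    if (plan.work_size > 0) {
        work = malloc(plan.work_size);
        plan.work_data = work;
    }
    ggml_graph_compute(gf, &plan);
    free(work);

    // post: send this rank's boundary output on to the next rank
    ggml_mpi_graph_compute_post(ctx_mpi, gf, n_layer);
}
```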
File                     Last commit                                                              Date
CMakeLists.txt           ggml : change ggml_graph_compute() API to not require context (#1999)   2023-07-07 19:24:01 +03:00
test-double-float.c      all : be more strict about converting float to double (#458)            2023-03-28 19:48:20 +03:00
test-grad0.c             ggml : change ggml_graph_compute() API to not require context (#1999)   2023-07-07 19:24:01 +03:00
test-opt.c               ggml : change ggml_graph_compute() API to not require context (#1999)   2023-07-07 19:24:01 +03:00
test-quantize-fns.cpp    ggml : generalize quantize_fns for simpler FP16 handling (#1237)        2023-07-05 19:13:06 +03:00
test-quantize-perf.cpp   ggml : generalize quantize_fns for simpler FP16 handling (#1237)        2023-07-05 19:13:06 +03:00
test-sampling.cpp        llama : fix top-p sampling to match the canonical definition (#1953)    2023-06-24 13:15:01 +03:00
test-tokenizer-0.cpp     mpi : add support for distributed inference via MPI (#2099)             2023-07-10 18:49:56 +03:00