llama.cpp/.github/workflows
Latest commit: 017efe899d by Eve
cmake : make LLAMA_NATIVE flag actually use the instructions supported by the processor (#3273)
* fix LLAMA_NATIVE

* syntax

* alternate implementation

* my eyes must be getting bad...

* set cmake LLAMA_NATIVE=ON by default

* march=native doesn't work for ios/tvos, so disable for those targets. also see what happens if we use it on msvc

* revert 8283237 and only allow LLAMA_NATIVE on x86 like the Makefile

* remove -DLLAMA_MPI=ON

---------

Co-authored-by: netrunnereve <netrunnereve@users.noreply.github.com>
2023-10-03 19:53:15 +03:00
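
The commit above ties `-march=native` to the `LLAMA_NATIVE` CMake option and restricts it to x86 targets, since the flag is unsupported on iOS/tvOS and not recognized by MSVC. A minimal sketch of how such gating can look in CMake; the option text and processor regex here are illustrative, not the exact upstream code:

```cmake
# Hypothetical sketch: enable -march=native only when the user asks for it,
# the compiler is not MSVC, and the host is an x86 processor.
option(LLAMA_NATIVE "llama: optimize for the host processor" ON)

if (LLAMA_NATIVE AND NOT MSVC AND
    CMAKE_SYSTEM_PROCESSOR MATCHES "^(x86_64|i686|AMD64)$")
    include(CheckCXXCompilerFlag)
    # Probe the compiler rather than assuming the flag is accepted.
    check_cxx_compiler_flag("-march=native" COMPILER_SUPPORTS_MARCH_NATIVE)
    if (COMPILER_SUPPORTS_MARCH_NATIVE)
        add_compile_options(-march=native)
    endif()
endif()
```

With this shape, CI builds for non-x86 or cross-compiled targets (iOS, tvOS) simply skip the flag instead of failing to configure.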
| Workflow file | Latest commit | Date |
| --- | --- | --- |
| build.yml | cmake : make LLAMA_NATIVE flag actually use the instructions supported by the processor (#3273) | 2023-10-03 19:53:15 +03:00 |
| code-coverage.yml | cov : add Code Coverage and codecov.io integration (#2928) | 2023-09-03 11:48:49 +03:00 |
| docker.yml | docker : add gpu image CI builds (#3103) | 2023-09-14 19:47:00 +03:00 |
| editorconfig.yml | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | 2023-04-11 19:45:44 +00:00 |
| gguf-publish.yml | CI: add FreeBSD & simplify CUDA windows (#3053) | 2023-09-14 19:21:25 +02:00 |
| tidy-post.yml | ci : disable auto tidy (#1705) | 2023-06-05 23:05:05 +03:00 |
| tidy-review.yml | Add clang-tidy reviews to CI (#1407) | 2023-05-12 15:40:53 +02:00 |