llama.cpp/.devops
commit 6bbc598a63
Author: Henri Vasserman
Date:   2023-08-25 12:09:42 +03:00
ROCm Port (#1087)
* use hipblas based on cublas (see the build sketch after this commit message)
* Update Makefile for the Cuda kernels
* Expand arch list and make it overrideable
* Fix multi-GPU on multiple AMD architectures with rocblas_initialize() (#5)
* add hipBLAS to README
* new build arg LLAMA_CUDA_MMQ_Y
* fix half2 decomposition
* Add intrinsics polyfills for AMD
* AMD assembly optimized __dp4a
* Allow overriding CC_TURING
* use "ROCm" instead of "CUDA"
* ignore all build dirs
* Add Dockerfiles
* fix llama-bench
* fix -nommq help text for non-CUDA/HIP builds

---------

Co-authored-by: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Co-authored-by: ardfork <134447697+ardfork@users.noreply.github.com>
Co-authored-by: funnbot <22226942+funnbot@users.noreply.github.com>
Co-authored-by: Engininja2 <139037756+Engininja2@users.noreply.github.com>
Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
Co-authored-by: jammm <2500920+jammm@users.noreply.github.com>
Co-authored-by: jdecourval <7315817+jdecourval@users.noreply.github.com>
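Several of the bullets above are build-system changes. A minimal sketch of how the resulting flags might be invoked in a Makefile build: LLAMA_CUDA_MMQ_Y is named in the commit itself, LLAMA_HIPBLAS matches the hipBLAS README addition it mentions, and GPU_TARGETS is an assumed name for the overrideable architecture list.

    # Build with the hipBLAS (ROCm) backend instead of cuBLAS:
    make LLAMA_HIPBLAS=1

    # Override the AMD offload architecture list for one specific GPU
    # (variable name assumed; the commit only says the list is overrideable):
    make LLAMA_HIPBLAS=1 GPU_TARGETS=gfx1030

    # Tune the mul_mat_q kernels via the new build arg from this commit:
    make LLAMA_HIPBLAS=1 LLAMA_CUDA_MMQ_Y=64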
File                          Last commit                                                     Date
full-cuda.Dockerfile          docker : add support for CUDA in docker (#1461)                 2023-07-07 21:25:25 +03:00
full-rocm.Dockerfile          ROCm Port (#1087)                                               2023-08-25 12:09:42 +03:00
full.Dockerfile               Add llama.cpp docker support for non-latin languages (#1673)   2023-06-08 00:58:53 -07:00
lamma-cpp-clblast.srpm.spec   devops : RPM Specs (#2723)                                      2023-08-23 17:28:22 +03:00
lamma-cpp-cublas.srpm.spec    devops : RPM Specs (#2723)                                      2023-08-23 17:28:22 +03:00
llama-cpp.srpm.spec           devops : RPM Specs (#2723)                                      2023-08-23 17:28:22 +03:00
main-cuda.Dockerfile          docker : add support for CUDA in docker (#1461)                 2023-07-07 21:25:25 +03:00
main-rocm.Dockerfile          ROCm Port (#1087)                                               2023-08-25 12:09:42 +03:00
main.Dockerfile               Add llama.cpp docker support for non-latin languages (#1673)   2023-06-08 00:58:53 -07:00
tools.sh                      devops : add missing quotes to bash script (#2193)              2023-07-13 16:49:14 +03:00
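The full-rocm and main-rocm images added by ROCm Port (#1087) mirror the existing CUDA pair: the full image bundles all the tools, the main image just the inference binary. A minimal usage sketch; the image tags and model path are illustrative, while the --device flags are the standard way to expose AMD GPUs to a container:

    # Build the images from the repository root (tags are arbitrary):
    docker build -t local/llama.cpp:full-rocm -f .devops/full-rocm.Dockerfile .
    docker build -t local/llama.cpp:main-rocm -f .devops/main-rocm.Dockerfile .

    # ROCm containers need the AMD kernel device nodes passed through:
    docker run --device /dev/kfd --device /dev/dri \
        -v /path/to/models:/models \
        local/llama.cpp:main-rocm -m /models/7B/model-q4_0.bin -p "Hello" -n 64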