llama.cpp/models/ggml-vocab-stablelm-3b-4e1t.gguf
Latest commit 36eed0c42c by Galunid
stablelm : StableLM support (#3586)
* Add support for stablelm-3b-4e1t
* Supports GPU offloading of (n-1) layers
2023-11-14 11:17:12 +01:00

File size: 1.7 MiB
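
This entry is a binary GGUF file that holds only the tokenizer vocabulary for stablelm-3b-4e1t; files of this kind under llama.cpp/models/ are typically used by the repository's tokenizer tests rather than for inference. As a rough illustration, the sketch below uses the `gguf` Python package from llama.cpp's gguf-py directory to list the metadata keys stored in such a file; the path and the exact key names that appear (e.g. tokenizer-related fields) depend on the converter version and are assumptions, not guarantees.

```python
# Minimal sketch: inspect a vocab-only GGUF file with the gguf-py reader.
# Assumes the `gguf` package (from llama.cpp's gguf-py) is installed,
# e.g. via `pip install gguf`, and that the path below exists locally.
from gguf import GGUFReader

reader = GGUFReader("models/ggml-vocab-stablelm-3b-4e1t.gguf")

# Print every metadata key stored in the file (architecture, tokenizer
# model, token list, special token ids, ...).
for name in reader.fields:
    print(name)

# A vocab-only GGUF carries no weight tensors, so this should report 0.
print("tensor count:", len(reader.tensors))
```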