readme : server compile flag (#1874)

Explicitly include the server make instructions for C++ noobs like me ;)
Srinivas Billa 2023-06-15 18:36:38 +01:00 committed by GitHub
parent 37e257c48e
commit 9dda13e5e1


@@ -16,6 +16,10 @@ This example allow you to have a llama.cpp http server to interact from a web page.
To get started right away, run the following command, making sure to use the correct path for the model you have:
#### Unix-based systems (Linux, macOS, etc.):
Make sure to build with the server option enabled:
```bash
LLAMA_BUILD_SERVER=1 make
```
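If you build with CMake rather than Make, the server target is typically switched on through a corresponding CMake option. The flag name below is an assumption that mirrors the Makefile variable, so verify it against the project's CMakeLists.txt before relying on it.
```bash
# Assumption: the CMake option shares the LLAMA_BUILD_SERVER name used by the Makefile.
mkdir -p build && cd build
cmake .. -DLLAMA_BUILD_SERVER=ON
cmake --build . --config Release
```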
```bash
./server -m models/7B/ggml-model.bin --ctx_size 2048
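
# Once the server is running, you can send it a test request from another terminal.
# The default host/port (127.0.0.1:8080) and the /completion endpoint with its
# "prompt" and "n_predict" JSON fields are assumptions about the server example's
# HTTP API; check ./server --help and the example's documentation for the actual interface.
curl --request POST \
    --url http://localhost:8080/completion \
    --header "Content-Type: application/json" \
    --data '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 128}'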