From 9dda13e5e1f70bdfc25fbc0f0378f27c8b67e983 Mon Sep 17 00:00:00 2001
From: Srinivas Billa
Date: Thu, 15 Jun 2023 18:36:38 +0100
Subject: [PATCH] readme : server compile flag (#1874)

Explicitly include the server make instructions for C++ noobs like me ;)
---
 examples/server/README.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/examples/server/README.md b/examples/server/README.md
index 7dabac9cf..3b111655a 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -16,6 +16,10 @@ This example allow you to have a llama.cpp http server to interact from a web page.
 To get started right away, run the following command, making sure to use the correct path for the model you have:

 #### Unix-based systems (Linux, macOS, etc.):
+Make sure to build with the server option on
+```bash
+LLAMA_BUILD_SERVER=1 make
+```

 ```bash
 ./server -m models/7B/ggml-model.bin --ctx_size 2048
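
For readers trying out the flow this patch documents, a rough end-to-end sketch follows. It assumes the repository root as the working directory, a model already placed at `models/7B/ggml-model.bin` (an example path), and that the server listens on its default port 8080 and exposes the `/completion` endpoint described elsewhere in this README; adjust anything that does not match your setup.

```bash
# Build llama.cpp with the server target enabled (the flag this patch documents)
LLAMA_BUILD_SERVER=1 make

# Start the HTTP server; the model path is an example, point it at your own model
./server -m models/7B/ggml-model.bin --ctx_size 2048 &

# Send a test completion request; port 8080 and /completion are assumed defaults
curl --request POST \
  --url http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 128}'
```

Note that `LLAMA_BUILD_SERVER=1` is a build-time switch: it only controls whether the `server` binary gets built, not how it behaves once it is running.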