# llama.cpp/examples

Latest commit: `1b78ed2081` by Kerfuffle

**Only show -ngl option when relevant + other doc/arg handling updates (#1625)**
1. Add a `LLAMA_SUPPORTS_GPU_OFFLOAD` define to `llama.h` (defined when compiled with CLBlast or cuBLAS)
2. Update the argument handling in the common example code so the `-ngl`/`--n-gpu-layers` option is shown only when GPU offload is possible.
3. Add an entry for the `-ngl`, `--n-gpu-layers` option to the `main` and `server` examples documentation
4. Update `main` and `server` examples documentation to use the new style dash separator argument format
5. Update the `server` example to use dash separators for its arguments and add `-ngl` to its `--help` output (only shown when compiled with appropriate support). It still supports `--memory_f32` and `--ctx_size` for backward compatibility.
6. Add a warning discouraging use of `--memory-f32` to the `--help` text and documentation of the `main` and `server` examples. Rationale: https://github.com/ggerganov/llama.cpp/discussions/1593#discussioncomment-6004356
*Committed 2023-05-28 11:48:57 -06:00*
| Name | Last commit | Date |
| --- | --- | --- |
| baby-llama | ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360) | 2023-05-13 15:56:40 +03:00 |
| benchmark | llama : add llama_init_backend() API (close #1527) | 2023-05-20 11:06:37 +03:00 |
| embedding | llama : add llama_init_backend() API (close #1527) | 2023-05-20 11:06:37 +03:00 |
| jeopardy | examples : add Jeopardy example (#1168) | 2023-04-28 19:13:33 +03:00 |
| main | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2023-05-28 11:48:57 -06:00 |
| perplexity | llama : add llama_init_backend() API (close #1527) | 2023-05-20 11:06:37 +03:00 |
| quantize | llama : add llama_init_backend() API (close #1527) | 2023-05-20 11:06:37 +03:00 |
| quantize-stats | Remove unused n_parts parameter (#1509) | 2023-05-17 22:12:01 +00:00 |
| save-load-state | Remove unused n_parts parameter (#1509) | 2023-05-17 22:12:01 +00:00 |
| server | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2023-05-28 11:48:57 -06:00 |
| alpaca.sh | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107) | 2023-04-22 09:54:33 +03:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | examples : read chat prompts from a template file (#1196) | 2023-05-03 20:58:11 +03:00 |
| chat-persistent.sh | chat-persistent.sh : use bracket expressions in grep (#1564) | 2023-05-24 09:16:22 +03:00 |
| chat.sh | If n_predict == -1, generate forever | 2023-03-25 21:51:41 +02:00 |
| CMakeLists.txt | examples : add server example with REST API (#1443) | 2023-05-21 20:51:18 +03:00 |
| common.cpp | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2023-05-28 11:48:57 -06:00 |
| common.h | examples : add --alias option to gpt_params to set use friendly model name (#1614) | 2023-05-28 20:14:24 +03:00 |
| gpt4all.sh | examples : add -n to alpaca and gpt4all scripts (#706) | 2023-04-13 16:03:39 +03:00 |
| Miku.sh | examples : various prompt and example fixes (#1298) | 2023-05-03 18:26:47 +03:00 |
| reason-act.sh | add example of re-act pattern (#583) | 2023-03-29 10:10:24 -05:00 |