llama.cpp/examples
Randall Fitzgerald 794db3e7b9
Server Example Refactor and Improvements (#1570)
A major rewrite for the server example.

Note that if you have built something on the previous server API, it will probably be incompatible.
Check out the examples for how a typical chat app could work.

This took a lot of effort: 24 PRs were closed in the submitter's repo alone, with over 160 commits and a lot of comments and testing.

Summary of the changes:

- adds missing generation parameters: tfs_z, typical_p, repeat_last_n, repeat_penalty, presence_penalty, frequency_penalty, mirostat, penalize_nl, seed, ignore_eos
- applies the previously missing top-k sampler
- removes interactive mode/terminal-like behavior, removes exclude parameter
- moves threads and batch size to server command-line parameters
- adds LoRA loading and matches command line parameters with main example
- fixes stopping on the EOS token and at the token limit specified by n_predict
- adds server timeouts, host, and port settings
- expands the generation-complete response to include generation settings, stop reason, whether the prompt was truncated, the model used, and the final text
- sets defaults for unspecified parameters between requests
- removes /next-token endpoint and as_loop parameter, adds stream parameter and server-sent events for streaming
- adds CORS headers to responses
- adds request logging, exception printing and optional verbose logging
- improves stop-word handling when a stop string spans multiple tokens, while streaming, and when generation ends on a partial stop string
- prints an error when the server cannot bind to the specified host/port
- fixes multi-byte character handling and replaces invalid UTF-8 characters on responses
- prints timing and build info on startup
- adds logit bias to request parameters
- removes embedding mode
- updates documentation; adds streaming Node.js and Bash examples
- fixes code formatting
- sets server threads to 1 since the current global state doesn't work well with simultaneous requests
- adds truncation of the input prompt and better context reset
- removes token limit from the input prompt
- significantly simplifies the logic and removes many variables
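As a sketch of how a client might exercise the refactored server (assuming the default host/port, a `/completion` endpoint, and the parameter names from the list above — check the server README for the exact request schema; the parameter values here are illustrative, not recommended defaults):

```shell
#!/bin/sh
# Build a request body using several of the newly added generation
# parameters, plus the new "stream" flag for server-sent events.
cat > /tmp/body.json <<'EOF'
{
  "prompt": "Building a website can be done in 10 simple steps:",
  "n_predict": 64,
  "stream": true,
  "top_k": 40,
  "repeat_penalty": 1.1,
  "mirostat": 0,
  "stop": ["</s>"]
}
EOF

# Stream the completion as server-sent events. This assumes the server
# is already running, e.g.:
#   ./server -m models/model.bin --host 127.0.0.1 --port 8080
# curl --no-buffer -H "Content-Type: application/json" \
#      -d @/tmp/body.json http://127.0.0.1:8080/completion
echo "request body written to /tmp/body.json"
```

With `"stream": true` the server emits incremental tokens as server-sent events until it hits a stop string, the EOS token, or the n_predict limit; with `"stream": false` (or the field omitted) it returns one final JSON response containing the expanded completion details.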

---------

Co-authored-by: anon998 <131767832+anon998@users.noreply.github.com>
Co-authored-by: Henri Vasserman <henv@hot.ee>
Co-authored-by: Felix Hellmann <privat@cirk2.de>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Lesaun Harvey <Lesaun@gmail.com>
2023-06-17 14:53:04 +03:00
baby-llama build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00
benchmark build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00
embedding build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00
jeopardy hooks : setting up flake8 and pre-commit hooks (#1681) 2023-06-17 13:32:48 +03:00
main Fixed possible macro redefinition (#1892) 2023-06-16 21:25:01 +03:00
metal llama : Metal inference (#1642) 2023-06-04 23:34:30 +03:00
perplexity build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00
quantize Allow "quantizing" to f16 and f32 (#1787) 2023-06-13 04:23:23 -06:00
quantize-stats build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00
save-load-state build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00
server Server Example Refactor and Improvements (#1570) 2023-06-17 14:53:04 +03:00
simple examples : add "simple" (#1840) 2023-06-16 21:58:09 +03:00
train-text-from-scratch train : get raw text instead of page with html (#1905) 2023-06-17 09:51:54 +03:00
alpaca.sh examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107) 2023-04-22 09:54:33 +03:00
chat-13B.bat Create chat-13B.bat (#592) 2023-03-29 20:21:09 +03:00
chat-13B.sh examples : read chat prompts from a template file (#1196) 2023-05-03 20:58:11 +03:00
chat-persistent.sh chat-persistent.sh : use bracket expressions in grep (#1564) 2023-05-24 09:16:22 +03:00
chat-vicuna.sh examples : add chat-vicuna.sh (#1854) 2023-06-15 21:05:53 +03:00
chat.sh If n_predict == -1, generate forever 2023-03-25 21:51:41 +02:00
CMakeLists.txt train : improved training-from-scratch example (#1652) 2023-06-13 22:04:40 +03:00
common.cpp build : fix and ignore MSVC warnings (#1889) 2023-06-16 21:23:53 +03:00
common.h CUDA full GPU acceleration, KV cache in VRAM (#1827) 2023-06-14 19:47:19 +02:00
gpt4all.sh examples : add -n to alpaca and gpt4all scripts (#706) 2023-04-13 16:03:39 +03:00
Miku.sh examples : various prompt and example fixes (#1298) 2023-05-03 18:26:47 +03:00
reason-act.sh add example of re-act pattern (#583) 2023-03-29 10:10:24 -05:00