Server tests

Python-based server test scenarios using BDD and behave:

Tests target GitHub workflow job runners with 4 vCPUs.

Requests are made with aiohttp, an asyncio-based HTTP client.

Note: If inference on the host is faster than on GitHub runners, parallel scenarios may randomly fail. To mitigate this, increase the n_predict and kv_size values.
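
The step definitions drive the server through an asyncio/aiohttp client. The snippet below is a minimal illustrative sketch only, not the repository's steps.py; the /completion endpoint and the prompt / n_predict request fields are assumptions made for the example:

```python
# Minimal sketch of a behave step driving the server with aiohttp (illustrative only).
import asyncio

import aiohttp
from behave import step


async def request_completion(base_url, prompt, n_predict):
    # POST one completion request and return the parsed JSON body.
    # The endpoint and field names are assumptions for this sketch.
    async with aiohttp.ClientSession() as session:
        async with session.post(
            f"{base_url}/completion",
            json={"prompt": prompt, "n_predict": n_predict},
        ) as resp:
            assert resp.status == 200
            return await resp.json()


@step("a completion request with {n_predict:d} max tokens")
def step_request_completion(context, n_predict):
    # context.base_url and context.prompt would be set by earlier steps,
    # e.g. from the PORT environment variable described below.
    context.completion = asyncio.run(
        request_completion(context.base_url, context.prompt, n_predict)
    )
```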

Install dependencies

pip install -r requirements.txt

Run tests

  1. Build the server

     cd ../../..
     mkdir build
     cd build
     cmake ../
     cmake --build . --target server

  2. Start the tests: ./tests.sh
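
Assuming tests.sh forwards its extra arguments to behave (as in the tag examples further below), the default @llama.cpp scope can also be selected explicitly, for example:

./tests.sh --no-skipped --tags llama.cpp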

It is possible to override some scenario step values with environment variables:

| variable | description |
|------------------------|-------------|
| PORT | context.server_port, sets the listening port of the server during the scenario, default: 8080 |
| LLAMA_SERVER_BIN_PATH | changes the server binary path, default: ../../../build/bin/server |
| DEBUG | "ON" to enable verbose mode for the steps and the server (--verbose) |
| SERVER_LOG_FORMAT_JSON | if set, switches server logs to JSON format |
| N_GPU_LAYERS | number of model layers to offload to VRAM (-ngl, --n-gpu-layers) |
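
For example, to run the scenarios on another port with verbose output (the port value is only illustrative):

PORT=8081 DEBUG=ON ./tests.sh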

Run @bug, @wip or @wrong_usage annotated scenarios

Feature or Scenario must be annotated with @llama.cpp to be included in the default scope.

  • @bug annotation aims to link a scenario with a GitHub issue.
  • @wrong_usage scenarios are meant to show user issues that are actually expected behavior
  • @wip to focus on a scenario that is a work in progress
  • @slow marks heavy tests, disabled by default

To run a scenario annotated with @bug, start:

DEBUG=ON ./tests.sh --no-skipped --tags bug
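
In the same way, assuming the same tag-filtering mechanism, a work-in-progress scenario can be targeted with:

DEBUG=ON ./tests.sh --no-skipped --tags wip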

After changing logic in steps.py, ensure that the @bug and @wrong_usage scenarios are updated.

./tests.sh --no-skipped --tags bug,wrong_usage || echo "should fail but compile"