
Server tests

Python-based server test scenarios using BDD and behave:

Tests target GitHub workflow job runners with 4 vCPUs.

Requests are made with aiohttp, an asyncio-based HTTP client.

Note: If inference on the host is faster than on the GitHub runners, parallel scenarios may fail randomly. To mitigate this, you can increase the values of n_predict and kv_size.
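As an illustration of how the two fit together, here is a minimal sketch of a behave step driving the server through an asyncio/aiohttp request. It is not taken from the actual steps.py; the step wording, endpoint and payload fields are assumptions.

import asyncio
import aiohttp
from behave import step

# Illustrative sketch only: post a completion request to the running server and
# keep the parsed response on the behave context for later assertions.
@step('a completion request is sent')
def step_completion_request(context):
    async def do_request():
        url = f"http://localhost:{context.server_port}/completion"
        async with aiohttp.ClientSession() as session:
            async with session.post(url, json={"prompt": "Hello", "n_predict": 16}) as response:
                assert response.status == 200
                context.completion = await response.json()
    asyncio.run(do_request())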

Install dependencies

pip install -r requirements.txt

Run tests

  1. Build the server:
     cd ../../..
     mkdir build
     cd build
     cmake ../
     cmake --build . --target server
  2. Download the required model:
     ../../../scripts/hf.sh --repo ggml-org/models --file tinyllamas/stories260K.gguf
  3. Start the tests: ./tests.sh (the scenarios launch the server themselves, as sketched below)
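A minimal sketch of such a launch, using standard llama.cpp server flags; the paths and defaults here are assumptions for illustration, not the actual harness code.

import subprocess

# Sketch only: start the freshly built server with the downloaded model.
# -m selects the model file, --host/--port set the listen address.
def start_server(server_bin='../../../build/bin/server',
                 model_path='stories260K.gguf', port=8080):
    return subprocess.Popen([server_bin, '-m', model_path,
                             '--host', '127.0.0.1', '--port', str(port)])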

It is possible to override some scenario step values with environment variables (see the sketch after this list):

  • PORT -> context.server_port to set the listening port of the server during a scenario, default: 8080
  • LLAMA_SERVER_BIN_PATH -> to change the server binary path, default: ../../../build/bin/server
  • DEBUG -> "ON" to enable verbose mode for the steps and the server (--verbose)
  • SERVER_LOG_FORMAT_JSON -> if set, switches server logs to JSON format
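For example, DEBUG=ON PORT=8888 ./tests.sh runs the suite in verbose mode against port 8888. The sketch below shows how such overrides are typically read before the scenarios run; the attribute names and defaults other than context.server_port are assumptions, not verbatim steps.py code.

import os

# Sketch only: pick up the environment overrides listed above, falling back to
# the documented defaults when a variable is not set.
def apply_env_overrides(context):
    context.server_port = int(os.environ.get('PORT', '8080'))
    context.server_path = os.environ.get('LLAMA_SERVER_BIN_PATH', '../../../build/bin/server')
    context.debug = os.environ.get('DEBUG', 'OFF') == 'ON'
    context.server_log_json = 'SERVER_LOG_FORMAT_JSON' in os.environ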

Run @bug, @wip or @wrong_usage annotated scenarios

A Feature or Scenario must be annotated with @llama.cpp to be included in the default scope.

  • The @bug annotation aims to link a scenario with a GitHub issue.
  • @wrong_usage scenarios are meant to show user issues that are actually expected behavior
  • @wip allows focusing on a scenario that is a work in progress

To run a scenario annotated with @bug, start: DEBUG=ON ./tests.sh --no-skipped --tags bug

After changing logic in steps.py, ensure that the @bug and @wrong_usage scenarios are updated.