
stream

This is a naive example of performing real-time inference on audio from your microphone. The stream tool samples the audio every half second and runs the transcription continuously. More info is available in issue #10.

./stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000

https://user-images.githubusercontent.com/1991296/194935793-76afede7-cfa8-48d8-a80f-28ba83be7d09.mp4

Sliding window mode with VAD

Setting the --step argument to 0 enables the sliding window mode:

./stream -m ./models/ggml-small.en.bin -t 6 --step 0 --length 30000 -vth 0.6

In this mode, the tool transcribes only after some speech activity is detected. A very basic VAD detector is used, but in theory a more sophisticated approach could be added. The -vth argument determines the VAD threshold - higher values make the tool detect silence more often. It's best to tune it for your specific use case, but a value of around 0.6 should work well in general. When silence is detected, the tool transcribes the last --length milliseconds of audio and outputs a transcription block that is suitable for parsing.
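An energy-based detector in the same spirit can be sketched like this. Note this is a hypothetical illustration, not the actual VAD code shipped with the tool; the function `vad_silence` and its exact heuristic are assumptions made for the example:

```cpp
#include <cmath>
#include <vector>

// Illustrative energy-based VAD: compare the average energy of the most
// recent `last_n` samples against the whole buffer. If the tail is much
// quieter than the rest, assume the speaker has gone silent.
bool vad_silence(const std::vector<float> &pcm, size_t last_n, float vad_thold) {
    if (last_n == 0 || pcm.size() < last_n) return false;

    double e_all = 0.0, e_last = 0.0;
    for (size_t i = 0; i < pcm.size(); ++i) {
        e_all += std::fabs(pcm[i]);
        if (i >= pcm.size() - last_n) e_last += std::fabs(pcm[i]);
    }
    e_all  /= pcm.size();
    e_last /= last_n;

    // A higher vad_thold makes this condition easier to satisfy,
    // i.e. silence is declared more readily.
    return e_last < vad_thold * e_all;
}
```

For example, a buffer that is loud for 9 seconds and then quiet for 1 second would be flagged as silence at a threshold of 0.6, triggering a transcription of the accumulated audio.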

Building

The stream tool depends on the SDL2 library to capture audio from the microphone. You can build it like this:

# Install SDL2 on Linux
sudo apt-get install libsdl2-dev

# Install SDL2 on Mac OS
brew install sdl2

make stream

Ensure you are in the root of the repo when running make stream, not inside the examples/stream directory. The required headers, such as common-sdl.h, are located under examples, so attempting to compile from within examples/stream means the compiler cannot find them and the build fails with an error like this:

whisper.cpp/examples/stream$ make stream
g++     stream.cpp   -o stream
stream.cpp:6:10: fatal error: common/sdl.h: No such file or directory
    6 | #include "common/sdl.h"
      |          ^~~~~~~~~~~~~~
compilation terminated.
make: *** [<builtin>: stream] Error 1

Web version

This tool can also run in the browser: examples/stream.wasm