# stream.wasm

Real-time transcription in the browser using WebAssembly

Online demo: https://whisper.ggerganov.com/stream/

## Build instructions
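
The steps below assume that Emscripten is already installed and activated in the current shell, so that `emcmake` is on the `PATH`. If it is not, a minimal setup sketch using the emsdk tool (not part of the original instructions; you may prefer to pin the version noted in the build comment below instead of `latest`):

```bash
# one-time Emscripten setup via emsdk (sketch, not from this README)
git clone https://github.com/emscripten-core/emsdk
cd emsdk
./emsdk install latest     # or a specific version such as the one noted below
./emsdk activate latest
source ./emsdk_env.sh      # makes emcc / emcmake available in this shell
```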

```bash
# build using Emscripten (v3.1.2)
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
mkdir build-em && cd build-em
emcmake cmake ..
make -j

# copy the produced page to your HTTP path
cp bin/stream.wasm/*       /path/to/html/
cp bin/libstream.worker.js /path/to/html/
```
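
To try the page locally, the copied files have to be served over HTTP rather than opened directly from disk. A minimal sketch using Python's built-in static server (an assumption; any static file server will do):

```bash
# example only: serve the copied page for local testing
cd /path/to/html/
python3 -m http.server 8000
# then open http://localhost:8000 in the browser
```

If the page uses multi-threaded WASM (suggested by `libstream.worker.js`), browsers may additionally require the server to send the `Cross-Origin-Opener-Policy: same-origin` and `Cross-Origin-Embedder-Policy: require-corp` headers before `SharedArrayBuffer` is made available.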