Commit Graph

142 Commits (9a0b59d990be319952a4a02b9164b3b2327cd454)

Author SHA1 Message Date
Gavin Cai c713eb5e2a
readme : recommend macOS Sonoma for Core ML (#1917) 2024-03-04 21:16:13 +02:00
Jumper775 917c56ded4
models : fix openvino setup info (#1874) 2024-02-19 02:19:47 +00:00
Michael Rienstra 4bbb60efce
docs : make model options / model install methods clearer (#1806)
* Make models more "discoverable"

* Clean up code block language identifiers

* make 3 options clearer

* undo Prettier formatter change

* docs: `$` shell prompt, consistently

* docs: minor changes
2024-01-26 17:39:54 +02:00
Georgi Gerganov 0b9af32a8b
release : v1.5.4 2024-01-05 17:11:27 +02:00
Georgi Gerganov 9962371f71
release : v1.5.3 2024-01-03 19:36:33 +02:00
Chaoqun d2ee117a0a
docker : Dockerize whisper.cpp (#1674)
* build: add dockerfile for ci

* ci: add action to build/push docker image

* fix: lowercase repository to fix ci

* ci: update cuBLAS flag

* build: install curl and ffmpeg in image

* docs: add docker section

* fix: improve args check when downloading model
2023-12-22 11:16:02 +00:00
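
A rough sketch of how the Dockerized build this commit adds might be used; the image tag and mount paths are illustrative assumptions rather than the project's published image name.

```
# build the image locally from the repository's Dockerfile (tag is hypothetical)
docker build -t whisper.cpp:local .

# run a transcription, mounting a host directory that holds the downloaded models
docker run -it --rm -v /path/to/models:/models whisper.cpp:local \
  "./main -m /models/ggml-base.en.bin -f ./samples/jfk.wav"
```
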
Georgi Gerganov 88112c8afb
release : v1.5.2 2023-12-14 17:56:39 +02:00
fraxy-v fd99ece8e3
wchess : whisper assisted chess (#1595)
* wchess: whisper assisted chess

* wchess: fix allowed moves in check

* wchess: touchstart, touchend events

* wchess: css, disabled button

* wchess : html touches

* wchess : minor fixes and code style

* wchess : bump encoder context to 1280

* wchess : index.html

* wchess : fix CI warnings

* wchess : add array header

* wchess : build static library

* wchess : display grammar

* wchess : update UX

* wchess : add comment

* wchess : add README

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-12-14 15:58:26 +02:00
Hang 641f2f4282
readme : update help (#1560) 2023-11-27 12:04:08 +02:00
Georgi Gerganov 9d6ebd877c
release : v1.5.1 2023-11-24 12:41:55 +02:00
Georgi Gerganov 34209a37a2
readme : add server example 2023-11-23 17:20:33 +02:00
Georgi Gerganov d38af151a1
release : v1.5.0 2023-11-15 21:02:52 +02:00
Georgi Gerganov bfbaa4dce5
whisper : make large version explicit + fix data size units (#1493) 2023-11-15 19:42:25 +02:00
Georgi Gerganov 9f8bbd3fee
readme : update comment about source code 2023-11-12 17:47:37 +02:00
Georgi Gerganov 684bc8bd70
readme : update GPU / CUDA 2023-11-12 15:40:37 +02:00
Georgi Gerganov 6a5d195109
release : v1.4.3 2023-11-07 16:15:48 +02:00
Georgi Gerganov 2cdfc4e025
whisper : add support for large v3 (#1444)
* whisper : add support for large v3

* bench : fix build + fix go bindings

* bench : fix n_mels

* models : update readme
2023-11-07 15:30:18 +02:00
jorismertz 9a7074d4aa
README : fix typo (#1362) 2023-10-13 16:53:23 +01:00
Neil Chudleigh 9edbd0a204
extra: Add benchmark script implemented in Python (#1298)
* Create bench.py

* Various benchmark results

* Update benchmark script with hardware name, and file checks

* Remove old benchmark results

* Add git shorthash

* Round to 2 digits on calculated floats

* Fix the header reference when sorting results

* Fix order of models

* Parse file name

* Simplify filecheck

* Improve run print statement

* Use simplified model name

* Update benchmark_results.csv

* Process single or lists of processors and threads

* Ignore benchmark results, don't check them in

* Move bench.py to extra folder

* Readme section on how to use

* Move command to correct location

* Use separate list for models that exist

* Handle subprocess error in git short hash check

* Fix filtered models list initialization
2023-09-25 23:45:15 +08:00
JJ 7e1592d2cd
readme: Fix spelling error (#1290)
Fixed branding error: Javascript to JavaScript
2023-09-21 15:55:33 +08:00
Artyom Mezin 903c9579b8
examples: Update README.md of main.cpp (#1306) 2023-09-18 22:14:36 +08:00
Georgi Gerganov 93935980f8
whisper : Metal and ggml-alloc support (#1270)
* metal : init

* whisper : factor out graph builds

* whisper : allocate encoder and decoder using ggml-alloc

* whisper : ggml-alloc is now supported

* whisper : CoreML support ggml-alloc

* build : fix ggml-alloc

* ios : update submodule

* extra : update sync-ggml.sh script to also sync ggml-alloc

* ci : see if this is causing the crash

* whisper : refactor ggml-alloc init

* whisper.android : try to fix build

* whisper : initial Metal version

* ci : try to debug vmem issue

* metal : decoder works on GPU!

* metal : add multi-decoder support

* ggml : fix ggml_nbytes (probably temp solution)

* metal : run "cross" step on the GPU

* whisper : remove ggml_repeat in the encoder

* whisper : offload the Encoder to Metal

* ggml : use simpler ggml_bytes() implementation

* ggml-alloc : try to make CI happy by reducing vram to 128GB

* whisper : add whisper_allocr to wrap ggml_allocr

* whisper : factor out alloc init in a function

* cmake : update to support Metal build

* whisper : add <functional> header

* objc : fix build (no Metal yet)

* ios : add Metal support

* swiftui : fix build

* metal : speed-up KQ multiplication

* metal : sync latest llama.cpp kernels

* readme : add Metal info

* ios : update submodule

* coreml : add code to toggle Core ML config (CPU, ANE, GPU)

* bench : fix timings by running a pre-heat

* bench : start benching the decoder

* whisper : add ggml_mul_mat_pad

* bench : fix uninitialized vars

* whisper : add comment for disabling mul-mat padding

* whisper : add description of ggml_mul_mat_pad

* whisper : clean-up ggml_mul_mat_pad

* metal : remove the "concurrent" flag

* bench : variable n_past

* ios : update SPM package
2023-09-15 12:18:18 +03:00
布客飞龙 6780c98e19
readme : update CMake build commands (#1231)
* Update README.md

* Update README.md: `vcpkg install opencl clblast`

* readme : update build commands

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-05 13:53:34 +03:00
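
A generic out-of-source CMake build along the lines the updated README describes; the exact options shown are illustrative, not project-mandated.

```
cd whisper.cpp
cmake -B build
cmake --build build --config Release
```
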
Fangjun Kuang aad2dad38a
whisper : minor fixes (#1154) 2023-08-27 19:02:00 +03:00
Ryan Metcalfe 1fa360fc6e
readme : add OpenVINO support details (#1112) 2023-07-25 19:07:59 +03:00
Martin Warnaar 176d7e4e7b
readme : better wording (#1064) 2023-07-04 15:30:31 +03:00
Georgi Gerganov 70e6fcd78b
readme : add tinydiarize instructions (#1058) 2023-07-04 09:51:22 +03:00
GiviMAD bc2dcf85fe
readme : add java alternative binding (#1029)
Signed-off-by: Miguel Álvarez <miguelwork92@gmail.com>
2023-06-25 14:46:07 +03:00
Larry Battle a7f822ef59
readme : corrected syntax for markdown link (#995) 2023-06-25 13:46:44 +03:00
0xsourcecode 4e16a8fb63
readme : highlight OpenBLAS support (#956)
* highlight openblas support

* Update README.md
2023-05-24 11:23:51 +03:00
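
A minimal sketch of the OpenBLAS build highlighted here, assuming the WHISPER_OPENBLAS Makefile flag documented in the README of that era:

```
# requires an OpenBLAS installation (e.g. libopenblas-dev on Debian/Ubuntu)
make clean
WHISPER_OPENBLAS=1 make -j
```
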
Nicholas Albion bc89f285d8
bindings : add java bindings (#931)
* WIP - java bindings

* updated README

* failed attempt at JNI

* fullTranscribe() test passes

* tested on Ubuntu 20

* link to Java bindings
2023-05-20 18:25:02 +03:00
Georgi Gerganov a5defbc1b9
release : v1.4.2 2023-05-14 19:06:45 +03:00
Jhen-Jie Hong 16564f554f
readme : improve Core ML model conversion guidance (#915) 2023-05-14 18:11:08 +03:00
Clifford Heath 9931d66400
readme : add instructions on converting to GGML + "--no-config" to wget (#874) 2023-05-08 20:58:36 +03:00
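
A sketch of the two points this commit documents; the Hugging Face URL and converter arguments follow the project's usual layout but should be treated as illustrative.

```
# fetch a prebuilt ggml model, ignoring any local wgetrc that could alter the request
wget --no-config https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin

# or convert an original OpenAI Whisper checkpoint to ggml format yourself
python3 models/convert-pt-to-ggml.py ~/.cache/whisper/base.en.pt /path/to/openai/whisper ./models/
```
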
Vulcan 919e58b96a
readme : partial OpenCL GPU support via CLBlast (#863)
* ggml : CLBlast support as in llama.cpp

Building with CLBlast speeds up whisper.cpp ~2x on low end / older AMD APUs (CPU with integrated GPU) such as the A9.

Usage:
WHISPER_CLBLAST=1 make

* CMake/Makefile : CLBlast support as in llama.cpp

Building with CLBlast speeds up whisper.cpp ~2x on low end / older AMD APUs (CPU with integrated GPU) such as the A9.

Usage:
```
# Makefile:
cd whisper.cpp
WHISPER_CLBLAST=1 make

# CMake:
cd whisper.cpp ; mkdir build ; cd build
cmake -DWHISPER_CLBLAST=ON ..
make
```

* Update README.md

Added OpenCL Build Instructions

* Instruction: Partial OpenCL GPU support via CLBlast

Added build instructions and examples for Make and CMake to support OpenCL-enabled GPUs.
2023-05-03 19:24:43 +03:00
Georgi Gerganov 9c61f5f585
release : v1.4.1 2023-04-30 22:57:42 +03:00
Georgi Gerganov fa8dbdc888
release : v1.4.0 2023-04-30 19:23:37 +03:00
Georgi Gerganov 794b162a46
whisper : add integer quantization support (#540)
* whisper : add integer quantization support

* examples : add common-ggml + prepare to add "quantize" tool

* whisper : quantization tool ready

* whisper : fix F32 support

* whisper : try to fix shared lib linkage

* wasm : update quantized models to Q5

* bench.wasm : remove "medium" button

* bench.wasm : fix custom model button

* ggml : add Q5_0 and Q5_1 WASM SIMD

* wasm : add quantized models to all WASM examples

* wasm : bump DB version number to 2

* talk-llama : update example to latest llama.cpp

* node : increase test timeout to 10s

* readme : add information for model quantization

* wasm : add links to other examples
2023-04-30 18:51:57 +03:00
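
A short sketch of the quantization workflow this commit introduces, following the pattern later documented in the README; model names are examples.

```
# build the quantize tool, then convert a ggml model to Q5_0
make quantize
./quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0

# run inference with the quantized model as usual
./main -m models/ggml-base.en-q5_0.bin -f samples/jfk.wav
```
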
Georgi Gerganov 5fd1bdd7fc
whisper : add GPU support via cuBLAS (#834)
* make : add WHISPER_CUBLAS

* make : fix CUBLAS build

* whisper : disable Flash Attention + adjust memory buffers

* whisper : remove old commented code

* readme : add cuBLAS instructions

* cmake : add WHISPER_CUBLAS option

* gitignore : ignore build-cublas
2023-04-30 12:14:33 +03:00
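
A minimal sketch of the cuBLAS build path this commit adds, using the WHISPER_CUBLAS flag it introduces; an NVIDIA GPU and the CUDA toolkit are assumed to be installed.

```
# Makefile route
make clean
WHISPER_CUBLAS=1 make -j

# CMake route, via the option added in this commit
cmake -B build -DWHISPER_CUBLAS=ON
cmake --build build
```
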
Georgi Gerganov 4d89ee2e59
readme : add logo 2023-04-28 22:41:29 +03:00
Georgi Gerganov c23588cc4b
release : v1.3.0 2023-04-15 17:30:44 +03:00
Georgi Gerganov 355da83690
readme : fix link 2023-04-15 13:30:36 +03:00
Georgi Gerganov 3e5c49e59a
readme : add usage instructions for Core ML 2023-04-15 13:30:07 +03:00
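
A condensed sketch of the Core ML flow those instructions describe; the model name is an example and the Python conversion dependencies are assumed to be installed.

```
# generate the Core ML encoder (produces models/ggml-base.en-encoder.mlmodelc)
./models/generate-coreml-model.sh base.en

# rebuild with Core ML support and run as usual
make clean
WHISPER_COREML=1 make -j
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```
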
Aaron Taylor 1c5edc3cb3
readme : add SwiftWhisper to listed bindings (#755) 2023-04-14 20:24:00 +03:00
Alex Evgrashin 674a8e579b
readme : add unity bindings (#733) 2023-04-14 19:59:44 +03:00
Sam b73a4638ac
readme : make the quick start instructions clearer. (#716)
Users wanting to use this implementation of the Whisper model with no prior knowledge of C/C++ may download the model but fail to run the `make` command as specified, because they forgot, or didn't know, that they needed to clone the repository first. Hope this modification clears things up.
2023-04-14 19:33:06 +03:00
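
In the spirit of that clarification, the quick start amounts to cloning before building (repository URL as published):

```
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make

# download a model, then transcribe one of the bundled samples
./models/download-ggml-model.sh base.en
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```
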
bocytko ccb47e7e10
readme : add shell command example for --print-colors (#710)
The section of the readme file explaining `--print-colors` includes only a screenshot with directories that are inconsistent with other examples. This commit adds an example shell command, consistent with the remaining examples.
2023-04-14 19:25:23 +03:00
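
An example invocation consistent with the shell command the commit adds; model and sample paths are illustrative.

```
# print per-token confidence colors in the transcript output
./main -m models/ggml-base.en.bin -f samples/jfk.wav --print-colors
```
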
Zigfrid Zvezdin 859ffc994e
misc : typo (#688) 2023-03-30 07:51:33 +03:00
Georgi Gerganov 82637b8e9f
readme : add talk-llama example to the table 2023-03-27 21:02:35 +03:00
jwijffels aec01bb337
Include link to R wrapper in README (#626) 2023-03-22 22:28:22 +02:00