Commit graph

11 commits

Author SHA1 Message Date
Nicholas Albion 5b9e59bc07
speak scripts for Windows 2023-06-01 22:45:00 +10:00
Georgi Gerganov 77eab3fbfe
talk-llama : sync latest llama.cpp (close #922, close #954) 2023-05-23 14:04:39 +03:00
Georgi Gerganov 0cb820e0f9
talk-llama : fix build + sync latest llama.cpp 2023-05-14 18:46:42 +03:00
Luis Herrera 0bf680fea2
talk-llama : fix session prompt load (#854) 2023-05-02 20:05:27 +03:00
Luis Herrera be5911a9f3
talk-llama : add --session support (#845)
* feat: adding session support

* readme: adding --session info in examples/talk-llama

* llama: adding session fixes

* readme: updating session doc

* talk-llama: update the value of need_to_save_session to true in order to save the session in the subsequent interaction

* talk-llama: adding missing function which updates session_tokens
2023-05-01 20:18:10 +03:00
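
The --session entry above is built on llama.cpp's session API (llama_load_session_file and llama_save_session_file). Below is a minimal sketch of how loading and saving the session token list can look; the helper names, path handling, and error handling are illustrative assumptions, not code lifted from talk-llama.cpp.

```cpp
// Sketch only: helper names and error handling are assumptions, not talk-llama.cpp code.
#include "llama.h"

#include <string>
#include <vector>

// Try to restore tokens saved by a previous run; returns false if no usable session file.
static bool load_session(llama_context * ctx, const std::string & path_session,
                         std::vector<llama_token> & session_tokens, int n_ctx) {
    session_tokens.resize(n_ctx);
    size_t n_token_count = 0;

    if (!llama_load_session_file(ctx, path_session.c_str(),
                                 session_tokens.data(), session_tokens.size(), &n_token_count)) {
        session_tokens.clear();
        return false;
    }

    session_tokens.resize(n_token_count);
    return true;
}

// Persist the tokens of the current interaction so the next run can resume from them.
static void save_session(llama_context * ctx, const std::string & path_session,
                         const std::vector<llama_token> & session_tokens) {
    llama_save_session_file(ctx, path_session.c_str(),
                            session_tokens.data(), session_tokens.size());
}
```

Saving after each reply is what the need_to_save_session flag mentioned in the commit above gates.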
Georgi Gerganov 794b162a46
whisper : add integer quantization support (#540)
* whisper : add integer quantization support

* examples : add common-ggml + prepare to add "quantize" tool

* whisper : quantization tool ready

* whisper : fix F32 support

* whisper : try to fix shared lib linkage

* wasm : update quantized models to Q5

* bench.wasm : remove "medium" button

* bench.wasm : fix custom model button

* ggml : add Q5_0 and Q5_1 WASM SIMD

* wasm : add quantized models to all WASM examples

* wasm : bump DB version number to 2

* talk-llama : update example to latest llama.cpp

* node : increase test timeout to 10s

* readme : add information for model quantization

* wasm : add links to other examples
2023-04-30 18:51:57 +03:00
Maciek 78548dc03f
talk-llama : correct default speak.sh path (#720)
There is a `speak.sh` file in `./examples/talk-llama`, as described in the README.
However, `talk-llama.cpp` used `./examples/talk/speak.sh`; this commit corrects that.
2023-04-14 19:36:09 +03:00
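
For context, the default script path lives in the example's parameter struct; the sketch below shows the shape of the fix, with the struct and field names assumed rather than quoted from talk-llama.cpp.

```cpp
// Sketch of the corrected default; struct and field names are assumed, not quoted.
#include <string>

struct whisper_params {
    // before #720 the default pointed at the sibling "talk" example:
    //   std::string speak = "./examples/talk/speak.sh";
    std::string speak = "./examples/talk-llama/speak.sh";
};
```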
Georgi Gerganov 114df388fe
talk-llama : increase context to 2048 2023-04-10 23:09:15 +03:00
InconsolableCellist 5e6e2187a3
talk-llama : fixing usage message for talk-llama (#687)
"-ml" instead of "-mg" for specifying the llama file
2023-03-30 00:10:20 +03:00
Evan Jones a47e812a54
talk-llama : add alpaca support (#668) 2023-03-29 23:01:14 +03:00
Georgi Gerganov 4a0deb8b1e
talk-llama : add new example + sync ggml from llama.cpp (#664)
* talk-llama : talk with LLaMA AI

* talk-llama : disable EOS token

* talk-llama : add README instructions

* ggml : fix build in debug
2023-03-27 21:00:32 +03:00