Commit graph

11 commits

Author  SHA1  Date

Matvey Soloviev  460c482540  2023-03-13 01:04:41 +01:00
    Fix token count accounting

Matvey Soloviev  404fac0d62  2023-03-13 00:07:34 +02:00
    Fix color getting reset before prompt output is done (#65)
    (cherry picked from commit 7eb2987619feee04c40eff69b604017d09919cb6)

Matvey Soloviev  96ea727f47  2023-03-12 23:13:28 +02:00
    Add interactive mode (#61)
    * Initial work on interactive mode.
    * Improve interactive mode; make the reverse prompt optional.
    * Update README to explain interactive mode.
    * Fix OS X build

beiller  02f0c6fe7f  2023-03-12 22:23:15 +02:00
    Add back top_k (#56)
    * Add back top_k
    * Update utils.cpp
    * Update utils.h
    Co-authored-by: Bill Hamilton <bill.hamilton@shopify.com>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

Sebastián A  eb062bb012  2023-03-12 22:15:00 +02:00
    Windows fixes (#31)
    * Apply fixes suggested to build on Windows
      (issue: https://github.com/ggerganov/llama.cpp/issues/22)
    * Remove unsupported VLAs
    * MSVC: remove features that are only available in C++20
    * Fix zero-initialization of the other fields
    * Replace stack allocations with std::vector

beiller  129c7d1ea8  2023-03-12 11:27:42 +02:00
    Add repetition penalty (#20)
    * Add repeat penalization
    * Update utils.h
    * Update utils.cpp
    * Numeric fix: still scale by temperature even when a token is penalized
    * Update comments; apply the penalty properly: logits can go negative,
      so apply the fix from a referenced commit
    * Minor formatting
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

Georgi Gerganov  7d9ed7b25f  2023-03-11 12:45:01 +02:00
    Bump memory buffer

Georgi Gerganov  007a8f6f45  2023-03-11 11:28:30 +02:00
    Support all LLaMA models + change Q4_0 quantization storage

Georgi Gerganov  70bc0b8b15  2023-03-10 23:46:57 +02:00
    Fix a bug in the rope calculation

Georgi Gerganov  319cdb3e1f  2023-03-10 21:50:46 +02:00
    Final touches

Georgi Gerganov  26c0846629  2023-03-10 20:56:40 +02:00
    Initial release