LLaVA

Currently, this implementation supports llava-v1.5 variants.

The pre-converted 7b and 13b models are available.

After the API is confirmed, more models will be supported / uploaded.

Usage

Build with cmake or run make llava-cli to build it.
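
For example, a typical CMake build that produces the llava-cli binary might look like the following sketch (standard llama.cpp build steps; adjust the options for your platform):

mkdir build && cd build
cmake ..
cmake --build . --config Release --target llava-cli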

After building, run ./llava-cli to see the usage. For example:

./llava-cli -m llava-v1.5-7b/ggml-model-q5_k.gguf --mmproj llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg

Note: a lower temperature such as 0.1 is recommended for better quality; add --temp 0.1 to the command to do so.
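
For instance, a full invocation with a lower temperature and an explicit prompt could look like the sketch below; the -p flag for passing a prompt follows the common llama.cpp CLI conventions and is an assumption here:

./llava-cli -m llava-v1.5-7b/ggml-model-q5_k.gguf --mmproj llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg --temp 0.1 -p "Describe the image in as much detail as possible."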

Model conversion

  1. Clone llava-v1.5-7b and clip-vit-large-patch14-336 locally:
git clone https://huggingface.co/liuhaotian/llava-v1.5-7b

git clone https://huggingface.co/openai/clip-vit-large-patch14-336
  2. Use llava-surgery.py to split the LLaVA model into its LLaMA and multimodal projector constituents:
python ./examples/llava/llava-surgery.py -m ../llava-v1.5-7b
  3. Use convert-image-encoder-to-gguf.py to convert the LLaVA image encoder to GGUF:
python ./examples/llava/convert-image-encoder-to-gguf.py -m ../clip-vit-large-patch14-336 --llava-projector ../llava-v1.5-7b/llava.projector --output-dir ../llava-v1.5-7b
  4. Use convert.py to convert the LLaMA part of LLaVA to GGUF:
python ./convert.py ../llava-v1.5-7b

Now both the LLaMA part and the image encoder are in the llava-v1.5-7b directory.
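
Note that the usage example above references a q5_k quantized model, while convert.py emits a full-precision GGUF. If you want a smaller file, you can optionally quantize the converted language model with llama.cpp's quantize tool; the input/output filenames and the q5_k type below are illustrative assumptions:

./quantize ../llava-v1.5-7b/ggml-model-f16.gguf ../llava-v1.5-7b/ggml-model-q5_k.gguf q5_k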

TODO

  • Support non-CPU backend for the image encoding part.
  • Support different sampling methods.
  • Support more model variants.