# Examples for feeding input embeddings directly

## Requirement

To build `libembdinput.so`, run the following command in the main directory (`../../`):

```bash
make
```
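
Once built, the example scripts load `libembdinput.so` through the `MyModel` wrapper in `embd_input.py`. A minimal text-only sketch of that usage (method names follow the example scripts; the model path is hypothetical):

```python
# Minimal sketch, assuming the MyModel wrapper from embd_input.py, which
# loads ./libembdinput.so via ctypes. The model path below is hypothetical.
from embd_input import MyModel

model = MyModel(["main", "--model", "./models/ggml-vicuna-13b-v0-q4_0.bin", "-c", "2048"])
model.eval_string("user: hello, what can you do?\nassistant: ")
model.generate_with_print()
```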

## LLaVA example (llava.py)

1. Obtain the LLaVA model (following https://github.com/haotian-liu/LLaVA/ , use https://huggingface.co/liuhaotian/LLaVA-13b-delta-v1-1/).
2. Convert it to ggml format.
3. Extract `llava_projection.pth` from `pytorch_model-00003-of-00003.bin`:

   ```python
   import torch

   bin_path = "../LLaVA-13b-delta-v1-1/pytorch_model-00003-of-00003.bin"
   pth_path = "./examples/embd-input/llava_projection.pth"

   # Keep only the multimodal projector weights used by llava.py.
   dic = torch.load(bin_path)
   used_key = ["model.mm_projector.weight", "model.mm_projector.bias"]
   torch.save({k: dic[k] for k in used_key}, pth_path)
   ```

4. Check the paths of the LLaVA model and `llava_projection.pth` in `llava.py`.
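
At runtime, `llava.py` loads these projection weights and feeds the projected image features to the model as input embeddings. A condensed sketch of that pattern, not the script itself (the 1024->5120 projector shape matches the 13B model, the random tensor stands in for real CLIP hidden states, and the model path is hypothetical):

```python
# Condensed sketch: MyModel comes from embd_input.py; the random tensor
# stands in for real CLIP image features (assumptions, not the full script).
import torch
from torch import nn
from embd_input import MyModel

mm_projector = nn.Linear(1024, 5120)  # LLaVA-13B projector shape
state = torch.load("./examples/embd-input/llava_projection.pth")
mm_projector.load_state_dict({
    "weight": state["model.mm_projector.weight"],
    "bias": state["model.mm_projector.bias"],
})

model = MyModel(["main", "--model", "./models/ggml-llava-13b.bin", "-c", "2048"])  # hypothetical path
model.eval_string("user: ")
with torch.no_grad():
    image_feature = torch.randn(1, 256, 1024)      # stand-in for CLIP features
    embd = mm_projector(image_feature).numpy()[0]  # (256, 5120)
model.eval_float(embd.T)  # inject the image embedding directly
model.eval_string("what is in the picture?\nassistant: ")
model.generate_with_print()
```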

## PandaGPT example (panda_gpt.py)

1. Obtain the PandaGPT LoRA model from https://github.com/yxuansu/PandaGPT. Rename the file to `adapter_model.bin`. Use `convert-lora-to-ggml.py` to convert it to ggml format. The required `adapter_config.json` is shown below (a snippet for writing it out follows this list):

   ```json
   {
     "peft_type": "LORA",
     "fan_in_fan_out": false,
     "bias": null,
     "modules_to_save": null,
     "r": 32,
     "lora_alpha": 32,
     "lora_dropout": 0.1,
     "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"]
   }
   ```

2. Prepare the vicuna v0 model.
3. Obtain the ImageBind model.
4. Clone the PandaGPT source:

   ```bash
   git clone https://github.com/yxuansu/PandaGPT
   ```

5. Install the requirements of PandaGPT.
6. Check the paths of the PandaGPT source, ImageBind model, LoRA model, and vicuna model in `panda_gpt.py`.
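
For step 1, a small sketch that writes the `adapter_config.json` shown above next to `adapter_model.bin` (the LoRA directory name is hypothetical):

```python
# Write the adapter_config.json expected by convert-lora-to-ggml.py;
# the target directory below is hypothetical.
import json

config = {
    "peft_type": "LORA",
    "fan_in_fan_out": False,
    "bias": None,
    "modules_to_save": None,
    "r": 32,
    "lora_alpha": 32,
    "lora_dropout": 0.1,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}
with open("../pandagpt-lora/adapter_config.json", "w") as f:
    json.dump(config, f, indent=2)
```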

## MiniGPT-4 example (minigpt4.py)

1. Obtain the MiniGPT-4 model from https://github.com/Vision-CAIR/MiniGPT-4/ and put it in `embd-input`.
2. Clone the MiniGPT-4 source:

   ```bash
   git clone https://github.com/Vision-CAIR/MiniGPT-4/
   ```

3. Install the requirements of MiniGPT-4.
4. Prepare the vicuna v0 model.
5. Check the paths of the MiniGPT-4 source, MiniGPT-4 model, and vicuna model in `minigpt4.py`.
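
Before running `minigpt4.py`, you can sanity-check that the downloaded checkpoint contains the projection tensors the example needs. A sketch under stated assumptions (the checkpoint filename and the `llama_proj.*` key names follow the usual MiniGPT-4 checkpoint layout; verify them against your file):

```python
# Sanity-check sketch: the filename and "llama_proj.*" key names are
# assumptions based on common MiniGPT-4 checkpoints; adjust to your file.
import torch

ckpt = torch.load("./examples/embd-input/pretrained_minigpt4.pth", map_location="cpu")
state = ckpt.get("model", ckpt)  # some checkpoints nest weights under "model"
for key in ("llama_proj.weight", "llama_proj.bias"):
    print(key, "->", tuple(state[key].shape) if key in state else "MISSING")
```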