Commit Graph

108 Commits (master)

Author SHA1 Message Date
Grauho 48bcce493f
fix: avoid double free and fix sdxl lora naming conversion
* Fixed a double free that occurred when running multiple backends on the CPU, e.g. CLIP
and the primary backend: the *_backend pointers ended up pointing to the same
backend instance, causing a segfault when the StableDiffusionGGML destructor
freed it twice (a guard along these lines is sketched below this entry).

* Improved logging to allow a color switch on the command line interface.
Changed the base log_printf function so it no longer bakes the log level into
the log buffer; that information is already passed to the logging function via
the level parameter, and it is easier to add it there than to strip it out.

* Added a fix for certain SDXL LoRAs that don't follow the expected naming
convention by converting the tensor names during LoRA model loading. Added
some logging of useful LoRA loading information. Had to increase the base
size of the GGML graph, as the existing size results in an insufficient graph
memory error when using SDXL LoRAs.

* small fixes

---------

Co-authored-by: leejet <leejet714@gmail.com>
2024-03-20 22:00:22 +08:00
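
A minimal sketch of the guard implied by the double free fix above, assuming two ggml_backend_t handles (the member names backend and clip_backend are illustrative) can alias the same CPU backend:

```cpp
#include "ggml-backend.h"

struct StableDiffusionGGML {
    ggml_backend_t backend      = nullptr;  // primary backend
    ggml_backend_t clip_backend = nullptr;  // may alias `backend` when CLIP also runs on the CPU

    ~StableDiffusionGGML() {
        // Free clip_backend only if it is a distinct instance; freeing the
        // same handle twice is the double free described above.
        if (clip_backend != nullptr && clip_backend != backend) {
            ggml_backend_free(clip_backend);
        }
        if (backend != nullptr) {
            ggml_backend_free(backend);
        }
    }
};
```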
bssrdf a469688e30
feat: add TencentARC PhotoMaker support (#179)
* first efforts at implementing photomaker; lots more to do

* added PhotoMakerIDEncoder model in SD

* fixed some bugs; now photomaker model weights can be loaded into their tensor buffers

* added input id image loading

* added preprocessing of input id images

* finished get_num_tensors

* fixed a bug in remove_duplicates

* add a get_learned_condition_with_trigger function to do photomaker stuff

* add a convert_token_to_id function for photomaker to extract the trigger word's token id (position lookup sketched below this entry)

* making progress; need to implement tokenizer decoder

* making more progress; finishing vision model forward

* debugging vision_model outputs

* corrected clip vision model output

* continue making progress in id fusion process

* finished stacked id embedding; to be tested

* remove garbage file

* debugging graph compute

* more progress; now failing at buffer allocation

* fixed wtype issue; only 1 input image can be used because of an issue with the transformer when batch size > 1 (to be investigated)

* added delayed subject conditioning; now photomaker runs and generates images

* fixed stat_merge_step

* added photomaker lora model (to be tested)

* reworked pmid lora

* finished applying pmid lora; to be tested

* finalized pmid lora

* add a few tensor prints; tweak in sample again

* small tweak; still not getting ID faces

* fixed a bug in FuseBlock forward; also removed the diag_mask op for the vision transformer; getting better results

* disable pmid lora apply for now; 1 input image seems working; > 1 not working

* turn pmid lora apply back on

* fixed a decode bug

* fixed a bug in ggml's conv_2d, and now > 1 input images working

* add style_ratio as a cli param; reworked encode with trigger for attention weights

* merge commit fixing lora free param buffer error

* change default style ratio to 10%

* added an option to offload vae decoder to CPU for mem-limited gpus

* removing the image normalization step seems to make ID fidelity much higher

* revert default style ratio back to 20%

* added an option for normalizing input ID images; cleaned up debugging code

* more clean up

* fixed bugs; now failing with a cuda error, likely out of memory on the GPU

* free pmid model params when required

* photomaker working properly now after merging and adapting to GGMLBlock API

* remove tensor renaming; fix names in the photomaker model file

* updated README.md to include instructions and notes for running PhotoMaker

* a bit of clean up

* remove -DGGML_CUDA_FORCE_MMQ; more clean up and README update

* add input image requirement in README

* bring back freeing the pmid lora params buffer; simplify pooled output of CLIPVision

* remove MultiheadAttention2; customized MultiheadAttention

* added a WIN32 get_files_from_dir; turn off PhotoMaker if no input images are provided

* update docs

* fix ci error

* make stable-diffusion.h a pure c header file

This reverts commit 27887b630d.

* fix ci error

* format code

* reuse get_learned_condition

* reuse pad_tokens

* reuse CLIPVisionModel

* reuse LoraModel

* add --clip-on-cpu

* fix lora name conversion for SDXL

---------

Co-authored-by: bssrdf <bssrdf@gmail.com>
Co-authored-by: leejet <leejet714@gmail.com>
2024-03-12 23:15:17 +08:00
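
A hedged sketch of the trigger-word handling mentioned above (convert_token_to_id / get_learned_condition_with_trigger): once the trigger word has been mapped to a token id, its positions in the tokenized prompt can be located so the stacked ID embedding can be merged in there. The function below is illustrative, not the project's actual code:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Find every position of the PhotoMaker trigger token (e.g. the id that a
// convert_token_to_id-style helper returns for "img") in the prompt tokens.
std::vector<size_t> find_trigger_positions(const std::vector<int32_t>& prompt_tokens,
                                           int32_t trigger_token_id) {
    std::vector<size_t> positions;
    for (size_t i = 0; i < prompt_tokens.size(); i++) {
        if (prompt_tokens[i] == trigger_token_id) {
            positions.push_back(i);
        }
    }
    return positions;
}
```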
leejet 61980171a1 sync: update ggml 2024-03-10 17:23:11 +08:00
Cyberhan123 583cc5bba2
docs: add binding (#189) 2024-03-03 13:27:07 +08:00
Phu Tran 1ce9470f27
fix: fix building shared library (#188) 2024-03-03 13:24:59 +08:00
leejet a65c410463 sync: update ggml 2024-03-02 19:49:41 +08:00
leejet a17ae7b7d2 sync: update ggml 2024-03-02 19:23:11 +08:00
leejet e1b37b4ef6 fix: update ggml submodule url 2024-03-02 17:34:08 +08:00
fszontagh 7be65faa7c
feat: add progress callback (#170) 2024-03-02 17:28:41 +08:00
Phu Tran d164236b2a
fix: fix metal build issues (#183) 2024-03-02 17:17:57 +08:00
leejet ef5c3f7401 feat: add support for prompt longer than 77 2024-03-02 17:13:18 +08:00
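A hedged sketch of the usual way the 77-token CLIP limit is worked around, assuming the common approach of splitting the prompt into 75-token chunks, wrapping each with BOS/EOS, and concatenating the per-chunk embeddings (token ids 49406/49407 are CLIP's standard BOS/EOS; padding with EOS is one common choice):

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Split prompt tokens into fixed 77-token chunks: BOS + up to 75 tokens + EOS,
// with the last chunk padded. Each chunk is embedded separately and the
// embeddings are concatenated along the sequence dimension.
std::vector<std::vector<int32_t>> chunk_tokens(const std::vector<int32_t>& tokens,
                                               int32_t bos_id = 49406,
                                               int32_t eos_id = 49407) {
    const size_t chunk_len = 75;  // 77 minus BOS and EOS
    std::vector<std::vector<int32_t>> chunks;
    for (size_t i = 0; i < tokens.size() || chunks.empty(); i += chunk_len) {
        size_t end = std::min(tokens.size(), i + chunk_len);
        std::vector<int32_t> chunk;
        chunk.push_back(bos_id);
        chunk.insert(chunk.end(), tokens.begin() + i, tokens.begin() + end);
        chunk.push_back(eos_id);
        chunk.resize(77, eos_id);  // pad the final chunk to the fixed length
        chunks.push_back(std::move(chunk));
    }
    return chunks;
}
```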
Cyberhan123 b7870a0f89
chore: improve ci (#150)
---------

Co-authored-by: leejet <leejet714@gmail.com>
2024-02-26 22:01:34 +08:00
leejet 4a8190405a fix: fix the issue with dynamic linking 2024-02-25 21:39:01 +08:00
leejet 730585d515
sync: update ggml (#180) 2024-02-25 21:11:01 +08:00
Sean Bailey 193fb620b1
feat: add capability to repeatedly run the upscaler in a row (#174)
* Add in upscale repeater logic (the repeat loop is sketched below this entry)

---------

Co-authored-by: leejet <leejet714@gmail.com>
2024-02-24 21:31:01 +08:00
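
The repeater is conceptually a loop that feeds each upscaled image back into the upscaler; a hedged, generic sketch (the Image type and single-pass upscaler are placeholders, not the project's API):

```cpp
#include <utility>

// Run a single-pass upscaler N times in a row, e.g. two passes of a 4x
// ESRGAN model give an overall 16x upscale.
template <typename Image, typename UpscaleFn>
Image upscale_repeat(Image img, int repeats, UpscaleFn upscale_once) {
    for (int i = 0; i < repeats; i++) {
        img = upscale_once(std::move(img));
    }
    return img;
}
```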
leejet b6368868d9
feat: introduce GGMLBlock and implement SVD(Broken) (#159)
* introduce GGMLBlock and implement SVD(Broken)

* add sdxl vae warning
2024-02-24 20:06:39 +08:00
leejet 349439f239 style: format code 2024-01-29 23:05:18 +08:00
Steward Garcia 36ec16ac99
feat: Control Net support + Textual Inversion (embeddings) (#131)
* add controlnet to pipeline

* add cli params

* control strength cli param (its application is sketched below this entry)

* cli param to keep controlnet on cpu

* add Textual Inversion

* add canny preprocessor

* refactor: change ggml_type_sizef to ggml_row_size

* process hint only once

* ignore the embedding name case

---------

Co-authored-by: leejet <leejet714@gmail.com>
2024-01-29 22:38:51 +08:00
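
A hedged sketch of how a control strength parameter is typically applied: the ControlNet residuals are scaled by the strength before being added to the UNet features (plain float buffers stand in for ggml tensors; this is an illustration, not the project's code):

```cpp
#include <vector>

// Scale each ControlNet residual by the control strength and add it to the
// corresponding UNet feature; both buffers are assumed to be the same length.
void apply_control(std::vector<float>& unet_features,
                   const std::vector<float>& control_residual,
                   float control_strength) {
    for (size_t i = 0; i < unet_features.size(); i++) {
        unet_features[i] += control_strength * control_residual[i];
    }
}
```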
旺旺碎冰冰 c6071fa82f
feat: add hipBlas support (#94) 2024-01-14 11:53:42 +08:00
leejet 5c614e4bc2
feat: add convert api (#142) 2024-01-14 11:43:24 +08:00
leejet 2b6ec97fe2
sync: update ggml (#134) 2024-01-05 23:18:41 +08:00
leejet db382348cc fix: change GGML_MAX_NAME to 128 2024-01-03 22:42:42 +08:00
leejet 7cb41b190f fix: avoid encountering 'std::set undefined' in some environments 2024-01-02 22:37:01 +08:00
leejet 7fb8a51318 chore: make SD_BUILD_DLL visible only to SD_LIB 2024-01-02 22:31:40 +08:00
leejet 2c5f3fc53a chore: add support for building shared library 2024-01-02 21:05:44 +08:00
Erik Scholz f2e4d9793b
fix: avoid some memory leaks (#136)
---------

Co-authored-by: leejet <leejet714@gmail.com>
2024-01-01 23:27:29 +08:00
Erik Scholz 4a5e7b58e2
fix: never use a log message as a format string (#135) 2024-01-01 20:43:47 +08:00
leejet 2e79a82f85
refactor: reorganize code and use c api (#133) 2024-01-01 16:22:18 +08:00
leejet b139434b57 docs: update README.md 2023-12-31 11:48:41 +08:00
leejet 14da17a923 fix: initialize some pointers to NULL 2023-12-30 14:24:45 +08:00
leejet 78ad76f3f4
feat: add SDXL support (#117)
* add SDXL support

* fix the issue with generating large images
2023-12-29 00:16:10 +08:00
Steward Garcia 004dfbef27
feat: implement ESRGAN upscaler + Metal Backend (#104)
* add esrgan upscaler

* add sd_tiling (the tile loop is sketched below this entry)

* support metal backend

* add clip_skip

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-12-28 23:46:48 +08:00
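
The sd_tiling idea above trades compute for peak memory by running the VAE over fixed-size, overlapping windows instead of the whole tensor at once; a hedged sketch of the tile loop (tile size, overlap, and the per-tile callback are illustrative):

```cpp
#include <algorithm>

// Visit overlapping tiles of a width x height plane; the caller processes
// each tile and blends results in the overlap region.
template <typename OnTile>
void for_each_tile(int width, int height, int tile_size, int overlap, OnTile on_tile) {
    int step = std::max(1, tile_size - overlap);  // guard against a non-positive step
    for (int y = 0; y < height; y += step) {
        for (int x = 0; x < width; x += step) {
            int tw = std::min(tile_size, width - x);
            int th = std::min(tile_size, height - y);
            on_tile(x, y, tw, th);
        }
    }
}
```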
旺旺碎冰冰 0e64238e4c
feat: implement the complete bpe function (#119)
* implement the complete bpe function (merge loop sketched below this entry)
---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-12-23 12:11:07 +08:00
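A hedged illustration of the core merge loop a complete BPE implementation needs: repeatedly merge the adjacent symbol pair with the lowest rank in the merge table until nothing merges (the rank table would come from the model's merges file; this is generic, not the project's code):

```cpp
#include <climits>
#include <string>
#include <unordered_map>
#include <vector>

// Apply BPE merges to one word, given symbol-pair ranks keyed as "left right".
std::vector<std::string> bpe(std::vector<std::string> symbols,
                             const std::unordered_map<std::string, int>& ranks) {
    while (symbols.size() > 1) {
        int best_rank = INT_MAX;
        size_t best_i = 0;
        for (size_t i = 0; i + 1 < symbols.size(); i++) {
            auto it = ranks.find(symbols[i] + " " + symbols[i + 1]);
            if (it != ranks.end() && it->second < best_rank) {
                best_rank = it->second;
                best_i = i;
            }
        }
        if (best_rank == INT_MAX) {
            break;  // no mergeable pair left
        }
        symbols[best_i] += symbols[best_i + 1];
        symbols.erase(symbols.begin() + best_i + 1);
    }
    return symbols;
}
```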
leejet 8f6b4a39d6
fix: enhance the tokenizer's handling of Unicode (#120) 2023-12-21 00:22:03 +08:00
Kreijstal 9842a3f819
fix: add support for int32_t on other compilers (#114) 2023-12-11 23:32:39 +08:00
leejet ac8f5a044c feat: add SD-Turbo support 2023-12-10 13:15:09 +08:00
Sam Jones ca33304318
fix: remove dangling pointer to work_output in CLIPTextModel (#111) 2023-12-10 10:05:02 +08:00
leejet 69efe3ce2b chore: make code cleaner 2023-12-09 17:35:10 +08:00
leejet 2eac844bbd fix: generate image correctly in img2img mode 2023-12-09 14:39:43 +08:00
leejet 968226abb2 docs: update v2-1_768-nonema-pruned.safetensors url 2023-12-05 22:52:19 +08:00
Steward Garcia 134883aec4
feat: add TAESD implementation - faster autoencoder (#88)
* add taesd implementation

* taesd gpu offloading

* show seed when generating image with -s -1

* less restrictive with larger images

* cuda: im2col speedup x2

* cuda: group norm speedup x90

* quantized models now work in cuda :)

* fix cal mem size

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-12-05 22:40:03 +08:00
leejet f99bcd1f76 fix: detect model format based on file content 2023-12-03 20:30:31 +08:00
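A hedged sketch of content-based detection like the fix above describes, using well-known magic bytes: GGUF files start with "GGUF", PyTorch .ckpt files are zip archives starting with "PK", and safetensors files begin with an 8-byte little-endian header length followed by a '{' JSON header:

```cpp
#include <cstdio>
#include <cstring>
#include <string>

std::string detect_format(const char* path) {
    unsigned char buf[9] = {0};
    FILE* f = std::fopen(path, "rb");
    if (!f) {
        return "unknown";
    }
    size_t n = std::fread(buf, 1, sizeof(buf), f);
    std::fclose(f);
    if (n >= 4 && std::memcmp(buf, "GGUF", 4) == 0) return "gguf";
    if (n >= 2 && buf[0] == 'P' && buf[1] == 'K')   return "ckpt (zip archive)";
    if (n >= 9 && buf[8] == '{')                    return "safetensors";
    return "unknown";
}
```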
leejet 8a87b273ad fix: allow model and vae to use different formats 2023-12-03 17:12:04 +08:00
leejet d7af2c2ba9
feat: load weights from safetensors and ckpt (#101) 2023-12-03 15:47:20 +08:00
旺旺碎冰冰 47dd704198
fix: avoid build fail on msvc (#93) 2023-11-28 20:49:11 +08:00
Erik Scholz f469b835a3
fix: reading memory of stack allocated object past its scope (#91) 2023-11-27 21:37:12 +08:00
Steward Garcia 8124588cf1
feat: ggml-alloc integration and gpu acceleration (#75)
* set ggml url to FSSRepo/ggml

* ggml-alloc integration

* offload all functions to gpu

* gguf format + native converter

* merge custom vae to a model

* full offload to gpu

* improve pretty progress

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-11-26 19:02:36 +08:00
Erik Scholz c874063408
fix: support bf16 lora weights (#82) 2023-11-20 22:34:17 +08:00
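bf16 support mostly comes down to one conversion: a bf16 value is the upper 16 bits of an IEEE-754 float32, so widening a bf16 LoRA weight to f32 is a 16-bit shift into a float's bit pattern:

```cpp
#include <cstdint>
#include <cstring>

float bf16_to_f32(uint16_t h) {
    uint32_t bits = (uint32_t)h << 16;  // bf16 is the high half of an f32
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}
```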
Urs Ganse ae1d5dcebb
feat: allow LoRAs with negative multiplier (#83)
* Allow LoRAs with negative weight, too.

There are a couple of LoRAs that serve to adjust certain concepts in
both positive and negative directions (like exposure, detail level, etc.).

The current code rejects them if loaded with a negative weight, but I
suggest that this check can simply be dropped (see the sketch below this
entry).

* ignore lora in the case of multiplier == 0.f

---------

Co-authored-by: Urs Ganse <urs@nerd2nerd.org>
Co-authored-by: leejet <leejet714@gmail.com>
2023-11-20 22:23:52 +08:00
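
A hedged sketch of why the negative multiplier is harmless: LoRA application is additive (W += multiplier * scale * delta), so a negative multiplier simply pushes the concept the other way, and a multiplier of exactly 0.f can be skipped. Flattened float buffers stand in for the real tensors:

```cpp
#include <cstddef>

// Add a LoRA delta to the base weights; `multiplier` may be negative, and
// a zero multiplier is a no-op.
void apply_lora(float* w, const float* lora_delta, size_t n,
                float multiplier, float scale) {
    if (multiplier == 0.0f) {
        return;  // ignore the lora entirely
    }
    for (size_t i = 0; i < n; i++) {
        w[i] += multiplier * scale * lora_delta[i];
    }
}
```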
leejet 51b53d4cb1 chore: typo remote => remove 2023-11-19 23:21:49 +08:00