Only quants of https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO

This is not an AIO: only the NSFW and Lightning LoRAs are baked in. Packaging the VAE and text_encoder into a GGUF is not possible at the moment, so you still need the text encoder and the VAE as separate downloads.
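Since the three files ship separately, they end up in three different model folders. A minimal sketch of the usual ComfyUI layout, assuming a default install (the snippet builds the tree in a scratch directory; exact quant filenames are up to you):

```shell
# Stand-in for your real ComfyUI install (scratch dir so this is self-contained)
ROOT="$(mktemp -d)/ComfyUI"

mkdir -p "$ROOT/models/unet"          # the Qwen-Image-Edit GGUF quant goes here
mkdir -p "$ROOT/models/text_encoders" # Qwen2.5-VL text encoder (+ mmproj) goes here
mkdir -p "$ROOT/models/vae"           # the VAE GGUF goes here

ls -R "$ROOT/models"
```

Loaders that read GGUF files (e.g. the ComfyUI-GGUF custom nodes) pick the files up from these folders.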

READ CAREFULLY!!!

I personally recommend Qwen2.5-VL-7B-Instruct-abliterated, but the default text_encoder also works fine. I have cloned the abliterated files into my repo and fixed the mmproj, ready to use.

HERE: https://huggingface.co/Phil2Sat/Qwen-Image-Edit-Rapid-AIO-GGUF/tree/main/Qwen2.5-VL-7B-Instruct-abliterated

!!! PLACE THE mmproj FILE NEXT TO THE Qwen2.5-VL-7B-Instruct-abliterated.xxx.gguf !!! If you don't, you will get the error "mat1 and mat2 shapes cannot be multiplied...".
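In other words, both files must live in the same directory. A self-contained sketch (scratch dir, stand-in files; the quant suffix and mmproj filename below are hypothetical examples, not the exact names in the repo):

```shell
# Scratch dir standing in for ComfyUI/models/text_encoders
DIR="$(mktemp -d)"
ENC="Qwen2.5-VL-7B-Instruct-abliterated.Q8_0.gguf"        # hypothetical quant name
MMPROJ="mmproj-Qwen2.5-VL-7B-Instruct-abliterated.gguf"   # hypothetical mmproj name

touch "$DIR/$ENC" "$DIR/$MMPROJ"   # stand-ins for the downloaded files

# Sanity check: both files sit side by side; if the mmproj is missing here,
# ComfyUI fails with "mat1 and mat2 shapes cannot be multiplied..."
[ -f "$DIR/$ENC" ] && [ -f "$DIR/$MMPROJ" ] && echo "mmproj is next to the encoder"
```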

VAE: https://huggingface.co/calcuis/pig-vae/blob/main/pig_qwen_image_vae_fp32-f16.gguf

Replace comfy_extras/nodes_qwen.py with https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO/blob/main/fixed-textencode-node/nodes_qwen.py
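Before overwriting the stock node file it is worth keeping a backup, since a ComfyUI update may restore or conflict with it. A sketch of the swap (run against your real ComfyUI root; here a scratch dir with dummy file contents keeps the snippet self-contained, and the download command is only shown as a comment):

```shell
# Stand-in for your ComfyUI root
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/comfy_extras"
echo "stock node" > "$ROOT/comfy_extras/nodes_qwen.py"   # pretend stock file

# 1) keep the original around
cp "$ROOT/comfy_extras/nodes_qwen.py" "$ROOT/comfy_extras/nodes_qwen.py.bak"

# 2) drop in the fixed file. In a real install, download it first, e.g.:
#    curl -L -o comfy_extras/nodes_qwen.py \
#      "https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO/resolve/main/fixed-textencode-node/nodes_qwen.py"
echo "fixed node" > "$ROOT/comfy_extras/nodes_qwen.py"   # stand-in for the download
```

Note the `resolve/main/...` form of the Hugging Face URL for direct downloads; the `blob/main/...` link in the text points at the HTML page.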

workflow text2image 4-step Q2_K: ComfyUI_00574_

workflow edit of this image 4-step Q2_K: ComfyUI_00601_

workflow edit of this image 4-step Q5_K_M: ComfyUI_00602_

GGUF: 8B params, qwen2vl architecture

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit

