Nice LoRA, and a suggestion for a Relight LoRA for Qwen Image Edit

#3 opened by YarvixPA

Hi, nice LoRA, it's very useful. I see in your HuggingFace profile you have a "Relight LoRA"; it would be a good idea to train one for Qwen Image Edit 2509 and add a control for the light, like IC-Light, which takes the image to relight plus a control mask to set the direction of the light (in this case it would be a black and white image).

Do you mean you want me to train a LoRA to control light through black and white images? I will keep it in mind.

Hi. Like IC-Light, which handled image fusion and also allowed lighting control.

I’m not sure if you’ve seen IC-Light, but it’s based on SD 1.5. I think Qwen Image Edit 2509 could potentially learn this kind of fusion and lighting control.

•	Image: The background image with the object to be composited (similar to a "fusion" LoRA).
•	Control Image: A black background with strokes that set the lighting direction. The stroke color could also encode the light color (useful for controlling both direction and hue). A rough sketch of such a control image follows below.
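
Just to make the control image concrete, here's a rough sketch of how one could be generated with PIL. The canvas size, stroke endpoints, width, and color are arbitrary placeholders, not anything IC-Light or Qwen Image Edit prescribes:

```python
from PIL import Image, ImageDraw

def make_light_control(size=(1024, 1024),
                       start=(120, 120), end=(900, 520),
                       color=(255, 200, 120),  # warm light, chosen arbitrarily
                       width=60):
    """Black canvas with a single colored stroke: the stroke's direction stands
    in for the light direction, its color for the light color."""
    canvas = Image.new("RGB", size, (0, 0, 0))
    draw = ImageDraw.Draw(canvas)
    draw.line([start, end], fill=color, width=width)
    # Round off both ends so the stroke reads as a brush mark rather than a bar.
    for (x, y) in (start, end):
        draw.ellipse([x - width // 2, y - width // 2,
                      x + width // 2, y + width // 2], fill=color)
    return canvas

make_light_control().save("control_light.png")
```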

I've used it before, but it wasn't very easy...

So let's discuss the control method. I remember that IC-Light uses a white brush directly on the image to serve as a reference for light.

Qwen Image Edit 2509 can train on multiple images. Now, if I want to train, should I create the dataset by painting directly on the image with a brush, or should I split it into two images: the original image and one with a white brush stroke on a black background?

You should have 2 images:

  • Image: The background with the object to fuse (like the dataset you have for this version of the fusion LoRA)
  • Control: An image with a black background and the stroke (like when masking in ComfyUI), but the stroke can be any color, not only white. That way the direction of the stroke controls the light direction and its color controls the color of the light in the output; a pairing sketch follows below.
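
To illustrate the pairing, here is a minimal sketch of how each training sample could be listed; the folder names, the JSONL layout, and the caption are assumptions, and the real format depends on whatever training script you use:

```python
import json
from pathlib import Path

def build_manifest(root="dataset"):
    """Write one JSONL line per sample, pairing the composite image, the stroke
    control image, and the relit target (directory names are placeholders)."""
    root = Path(root)
    with open(root / "manifest.jsonl", "w") as f:
        for image in sorted((root / "image").glob("*.png")):
            sample = {
                "image": str(image),                            # background + object to fuse
                "control": str(root / "control" / image.name),  # black bg + colored stroke
                "target": str(root / "target" / image.name),    # relit result to learn
                "caption": "relight the subject following the stroke direction and color",
            }
            f.write(json.dumps(sample) + "\n")

if __name__ == "__main__":
    build_manifest()
```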

Excellent suggestion, I'll try creating a dataset soon.
