---
license: apache-2.0
tags:
  - diffusion-single-file
  - comfyui
  - distillation
  - LoRA
  - video
  - video generation
pipeline_tags:
  - image-to-video
  - text-to-video
base_model:
  - Wan-AI/Wan2.2-I2V-A14B
library_name: diffusers
pipeline_tag: image-to-video
---
# Wan2.2 Distilled LoRA Models

**High-Performance Video Generation with 4-Step Inference Using LoRA**

LoRA weights extracted from the Wan2.2 distilled models, offering flexible deployment with excellent generation quality.
## What's Special?
## LoRA Model Catalog

### Available LoRA Models
| Task Type | Noise Level | Model File | Rank | Purpose |
|---|---|---|---|---|
| I2V | High Noise | `wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_xxx.safetensors` | 64 | More creative image-to-video |
| I2V | Low Noise | `wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_xxx.safetensors` | 64 | More stable image-to-video |
> **Note:**
> - `xxx` in the filenames represents a version number or timestamp; check the HuggingFace repository for the latest version.
> - These LoRAs must be used together with the Wan2.2 base models.
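Since the exact filenames change between releases, one way to find the current ones before downloading is to list the repository contents. The snippet below is a small sketch using the huggingface_hub Python API, not part of the official instructions:

```python
from huggingface_hub import list_repo_files

# List everything currently published in the LoRA repo and keep the
# 4-step, rank-64 I2V files (substrings taken from the catalog above)
files = list_repo_files("lightx2v/Wan2.2-Distill-Loras")
for name in files:
    if "i2v_A14b" in name and "rank64" in name and "4step" in name:
        print(name)
```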
## Usage

### Prerequisites

**Base Model**: You need the Wan2.2 I2V base model (the original, non-distilled model).

Download the base model (choose one):

**Method 1: From the LightX2V Official Repository (Recommended)**
```bash
# Download the high-noise base model
huggingface-cli download lightx2v/Wan2.2-Official-Models \
    wan2.2_i2v_A14b_high_noise_lightx2v.safetensors \
    --local-dir ./models/Wan2.2-Official-Models

# Download the low-noise base model
huggingface-cli download lightx2v/Wan2.2-Official-Models \
    wan2.2_i2v_A14b_low_noise_lightx2v.safetensors \
    --local-dir ./models/Wan2.2-Official-Models
```
**Method 2: From the Wan-AI Official Repository**

```bash
huggingface-cli download Wan-AI/Wan2.2-I2V-A14B \
    --local-dir ./models/Wan2.2-I2V-A14B
```
> **Note:** lightx2v/Wan2.2-Official-Models provides separate high-noise and low-noise base models; download whichever you need.
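If you choose Method 2 but only want the DiT weights for offline merging (the T5/CLIP/VAE components listed under Important Notes are still required at inference time), one option is to restrict the download. This sketch assumes the `high_noise_model/` and `low_noise_model/` subfolder names referenced in the converter comments below:

```python
from huggingface_hub import snapshot_download

# Fetch only the high-/low-noise DiT subfolders from the Wan-AI repo.
# Folder names are an assumption based on the --source comments below;
# verify them against the repository file listing.
snapshot_download(
    repo_id="Wan-AI/Wan2.2-I2V-A14B",
    allow_patterns=["high_noise_model/*", "low_noise_model/*"],
    local_dir="./models/Wan2.2-I2V-A14B",
)
```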
### Method 1: LightX2V - Offline LoRA Merging (Recommended)

Offline LoRA merging gives the best runtime performance and can be combined with quantization in the same conversion step.

#### 1.1 Download LoRA Models
```bash
# Download both LoRAs (high noise and low noise)
# Note: xxx represents the version number; check HuggingFace for the actual filenames
huggingface-cli download lightx2v/Wan2.2-Distill-Loras \
    wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_xxx.safetensors \
    wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_xxx.safetensors \
    --local-dir ./loras/
```
#### 1.2 Merge LoRA (Basic Merging)

Merge the LoRAs into the base models:
```bash
cd LightX2V/tools/convert

# For a directory-based base model, use: --source /path/to/Wan2.2-I2V-A14B/high_noise_model/
python converter.py \
    --source ./models/Wan2.2-Official-Models/wan2.2_i2v_A14b_high_noise_lightx2v.safetensors \
    --output /path/to/output/ \
    --output_ext .safetensors \
    --output_name wan2.2_i2v_A14b_high_noise_lightx2v_4step \
    --model_type wan_dit \
    --lora_path /path/to/loras/wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_xxx.safetensors \
    --lora_strength 1.0 \
    --single_file

# For a directory-based base model, use: --source /path/to/Wan2.2-I2V-A14B/low_noise_model/
python converter.py \
    --source ./models/Wan2.2-Official-Models/wan2.2_i2v_A14b_low_noise_lightx2v.safetensors \
    --output /path/to/output/ \
    --output_ext .safetensors \
    --output_name wan2.2_i2v_A14b_low_noise_lightx2v_4step \
    --model_type wan_dit \
    --lora_path /path/to/loras/wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_xxx.safetensors \
    --lora_strength 1.0 \
    --single_file
```
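After merging, a quick sanity check is to confirm the merged single-file checkpoint opens cleanly. This is a minimal sketch; the path below is hypothetical and should match your `--output` and `--output_name` values:

```python
from safetensors import safe_open

# Hypothetical output path: adjust to your --output / --output_name values
path = "/path/to/output/wan2.2_i2v_A14b_high_noise_lightx2v_4step.safetensors"
with safe_open(path, framework="pt") as f:
    keys = list(f.keys())
print(f"{len(keys)} tensors in {path}")
```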
#### 1.3 Merge LoRA + Quantization (Recommended)

**Merge LoRA + FP8 quantization:**
```bash
cd LightX2V/tools/convert

# For a directory-based base model, use: --source /path/to/Wan2.2-I2V-A14B/high_noise_model/
python converter.py \
    --source ./models/Wan2.2-Official-Models/wan2.2_i2v_A14b_high_noise_lightx2v.safetensors \
    --output /path/to/output/ \
    --output_ext .safetensors \
    --output_name wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step \
    --model_type wan_dit \
    --lora_path /path/to/loras/wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_xxx.safetensors \
    --lora_strength 1.0 \
    --quantized \
    --linear_dtype torch.float8_e4m3fn \
    --non_linear_dtype torch.bfloat16 \
    --single_file

# For a directory-based base model, use: --source /path/to/Wan2.2-I2V-A14B/low_noise_model/
python converter.py \
    --source ./models/Wan2.2-Official-Models/wan2.2_i2v_A14b_low_noise_lightx2v.safetensors \
    --output /path/to/output/ \
    --output_ext .safetensors \
    --output_name wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step \
    --model_type wan_dit \
    --lora_path /path/to/loras/wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_xxx.safetensors \
    --lora_strength 1.0 \
    --quantized \
    --linear_dtype torch.float8_e4m3fn \
    --non_linear_dtype torch.bfloat16 \
    --single_file
```
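To verify that the quantized checkpoint was actually written in FP8, you can inspect a few tensor dtypes. Again a sketch with a hypothetical path; the exact tensor names depend on the converter output:

```python
from safetensors import safe_open

# Hypothetical output path: adjust to your --output / --output_name values
path = "/path/to/output/wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors"
with safe_open(path, framework="pt") as f:
    # Print the first few tensors with their dtype and shape
    for name in sorted(f.keys())[:10]:
        tensor = f.get_tensor(name)
        print(name, tensor.dtype, tuple(tensor.shape))
```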
**Merge LoRA + ComfyUI FP8 format:**
```bash
cd LightX2V/tools/convert

# For a directory-based base model, use: --source /path/to/Wan2.2-I2V-A14B/high_noise_model/
python converter.py \
    --source ./models/Wan2.2-Official-Models/wan2.2_i2v_A14b_high_noise_lightx2v.safetensors \
    --output /path/to/output/ \
    --output_ext .safetensors \
    --output_name wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui \
    --model_type wan_dit \
    --lora_path /path/to/loras/wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_xxx.safetensors \
    --lora_strength 1.0 \
    --quantized \
    --linear_dtype torch.float8_e4m3fn \
    --non_linear_dtype torch.bfloat16 \
    --single_file \
    --comfyui_mode

# For a directory-based base model, use: --source /path/to/Wan2.2-I2V-A14B/low_noise_model/
python converter.py \
    --source ./models/Wan2.2-Official-Models/wan2.2_i2v_A14b_low_noise_lightx2v.safetensors \
    --output /path/to/output/ \
    --output_ext .safetensors \
    --output_name wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui \
    --model_type wan_dit \
    --lora_path /path/to/loras/wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_xxx.safetensors \
    --lora_strength 1.0 \
    --quantized \
    --linear_dtype torch.float8_e4m3fn \
    --non_linear_dtype torch.bfloat16 \
    --single_file \
    --comfyui_mode
```
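If the ComfyUI-format files are destined for a local ComfyUI install, they would typically go under ComfyUI's diffusion model folder. The destination path below is an assumption about a default ComfyUI layout, not something specified by this repository:

```python
import shutil

# Hypothetical paths: adjust the output directory and the ComfyUI install location
comfyui_dir = "/path/to/ComfyUI/models/diffusion_models/"
for name in (
    "wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors",
    "wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors",
):
    shutil.copy(f"/path/to/output/{name}", comfyui_dir)
```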
> **Reference Documentation:** For more merging options, see the LightX2V Model Conversion Documentation.

### Method 2: LightX2V - Online LoRA Loading

Online LoRA loading requires no pre-merging; the LoRAs are loaded dynamically at inference time, which is more flexible.

#### 2.1 Download LoRA Models
```bash
# Download both LoRAs (high noise and low noise)
# Note: xxx represents the version number; check HuggingFace for the actual filenames
huggingface-cli download lightx2v/Wan2.2-Distill-Loras \
    wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_xxx.safetensors \
    wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_xxx.safetensors \
    --local-dir ./loras/
```
#### 2.2 Use a Configuration File

Reference configuration file: `wan_moe_i2v_distil_with_lora.json`

LoRA configuration example from the config file:
```json
{
    "lora_configs": [
        {
            "name": "high_noise_model",
            "path": "/path/to/loras/wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_xxx.safetensors",
            "strength": 1.0
        },
        {
            "name": "low_noise_model",
            "path": "/path/to/loras/wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_xxx.safetensors",
            "strength": 1.0
        }
    ]
}
```
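For intuition, `strength` scales the low-rank update applied to each matching base weight when the LoRA is loaded. The snippet below is a generic illustration of that mechanic with toy shapes, not LightX2V's actual loader code:

```python
import torch

def apply_lora(base_weight, lora_down, lora_up, strength=1.0):
    """Return base_weight with a rank-r LoRA update applied.

    lora_down: (r, in_features), lora_up: (out_features, r); r is 64 for these LoRAs.
    """
    return base_weight + strength * (lora_up @ lora_down)

# Toy shapes only, to show the mechanics
w = torch.randn(128, 256)
down, up = torch.randn(64, 256), torch.randn(128, 64)
merged = apply_lora(w, down, up, strength=1.0)
print(merged.shape)  # torch.Size([128, 256])
```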
> **Tip:** Replace `xxx` with the actual version number (e.g., `1022`). Check the HuggingFace repository for the latest version.
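Rather than editing the JSON by hand, a small helper can substitute the version suffix and write a local copy. This is a sketch: the `lora_configs`/`path` keys are the ones shown above, `1022` stands in for whatever version you actually downloaded, and where `denoising_step_list` belongs in the config is an assumption to verify against the linked example config:

```python
import json

# Load the example config, point the LoRA paths at the downloaded files,
# and write a local copy for inference.
with open("wan_moe_i2v_distil_with_lora.json") as f:
    cfg = json.load(f)

for entry in cfg.get("lora_configs", []):
    entry["path"] = entry["path"].replace("xxx", "1022")  # real version suffix

# Recommended 4-step schedule (see Important Notes below); key placement is an assumption
cfg["denoising_step_list"] = [1000, 750, 500, 250]

with open("wan_moe_i2v_distil_with_lora_local.json", "w") as f:
    json.dump(cfg, f, indent=4)
```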
#### 2.3 Run Inference

Using I2V as an example:

```bash
cd scripts
bash wan22/run_wan22_moe_i2v_distill.sh
```
### Method 3: ComfyUI

Please refer to the provided ComfyUI workflow.
## Important Notes

- **Base Model Requirement**: These LoRAs must be used with the Wan2.2-I2V-A14B base model; they cannot be used standalone.
- **Other Components**: In addition to the DiT model and the LoRA, the following are also required at runtime:
  - T5 text encoder
  - CLIP vision encoder
  - VAE encoder/decoder
  - Tokenizer

  Please refer to the LightX2V Documentation for how to organize the complete model directory.
- **Inference Configuration**: When using 4-step inference, set `denoising_step_list` correctly; recommended: `[1000, 750, 500, 250]`.
## Related Resources

### Documentation Links
- LightX2V Quick Start: Quick Start Documentation
- Model Conversion Tool: Conversion Tool Documentation
- Online LoRA Loading: Configuration File Example
- Quantization Guide: Quantization Documentation
- Model Structure: Model Structure Documentation
### Related Models
- Distilled Full Models: Wan2.2-Distill-Models
- Wan2.2 Official Models: Wan2.2-Official-Models (contains the high-noise and low-noise base models)
- Base Model (Wan-AI): Wan2.2-I2V-A14B
## Community & Support
- GitHub Issues: https://github.com/ModelTC/LightX2V/issues
- HuggingFace: https://huggingface.co/lightx2v/Wan2.2-Distill-Loras
- LightX2V Homepage: https://github.com/ModelTC/LightX2V
If you find this project helpful, please give us a ⭐ on GitHub!

