---
title: Pixagram (stable)
emoji: 🎮
colorFrom: purple
colorTo: pink
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: true
license: mit
short_description: Transform any image, even portraits, into real pixel art!
disable_embedding: false
custom_headers:
  cross-origin-embedder-policy: cross-origin
  cross-origin-opener-policy: cross-origin
  cross-origin-resource-policy: cross-origin
---
# 🎮 Pixagram Converter
Convert any image into stunning retro game art using advanced AI models!
## Features
- Custom SDXL Checkpoint: Uses the "Horizon" model optimized for artistic generation
- Pixelate VAE: Custom VAE that creates an authentic 8x pixelated retro aesthetic
- RetroArt LORA: Style-specific LORA for enhanced retro game art look
- Face Preservation: Automatically detects and preserves facial features using InstantID with Antelopev2
- Depth-Aware: Uses ControlNet Zoe Depth to maintain realistic depth in the output
- Aspect Ratio Preservation: Maintains the original image proportions
## Models
All custom models are loaded from the HuggingFace Hub repository: primerz/pixagram
- `horizon.safetensors`: Custom SDXL checkpoint (~7 GB)
- `retroart.safetensors`: RetroArt LORA (~50 MB)
- `pixelate.safetensors`: Pixelate VAE (~200 MB)
Models are automatically downloaded on first use and cached for subsequent runs.
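For reference, a single `hf_hub_download` call from the `huggingface_hub` library is enough to fetch and cache each file; treat this as a minimal sketch, since the exact download code in `app.py` may differ:

```python
from huggingface_hub import hf_hub_download

# Each call stores the file in the local Hub cache (~/.cache/huggingface/hub by default),
# so later runs reuse the cached copy instead of downloading again.
checkpoint_path = hf_hub_download(repo_id="primerz/pixagram", filename="horizon.safetensors")
lora_path = hf_hub_download(repo_id="primerz/pixagram", filename="retroart.safetensors")
vae_path = hf_hub_download(repo_id="primerz/pixagram", filename="pixelate.safetensors")
```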
## Installation & Setup
### Quick Deployment
This Space automatically loads models from the HuggingFace Hub repository primerz/pixagram.
To deploy your own version:
1. **Create a new HuggingFace Space**
   - Go to https://huggingface.co/new-space
   - Choose the Gradio SDK
   - Select a GPU (T4 or better recommended)
2. **Clone and upload files**
   ```bash
   git clone https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
   cd YOUR_SPACE_NAME
   # Copy only these files:
   # - app.py
   # - requirements.txt
   # - README.md
   git add .
   git commit -m "Initial commit"
   git push
   ```
3. **Wait for the build**
   - The Space will automatically download models from primerz/pixagram
   - First build may take 10-15 minutes
   - Models are cached after first download
### Using Your Own Models
If you want to use your own custom models:
- Create a HuggingFace model repository
- Upload your `.safetensors` files:
  - `horizon.safetensors` (SDXL checkpoint)
  - `retroart.safetensors` (LORA)
  - `pixelate.safetensors` (VAE)
- Update `MODEL_REPO` in `app.py` to your repository name
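As an illustration, the three files can be pushed to your repository with `huggingface_hub`; the repository name below is a placeholder:

```python
from huggingface_hub import HfApi

api = HfApi()
repo_id = "YOUR_USERNAME/my-pixagram-models"  # placeholder: your own model repo

# Create the repository if it does not exist yet.
api.create_repo(repo_id, repo_type="model", exist_ok=True)

# Upload each weight file to the repository root.
for filename in ["horizon.safetensors", "retroart.safetensors", "pixelate.safetensors"]:
    api.upload_file(path_or_fileobj=filename, path_in_repo=filename, repo_id=repo_id)
```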
## Usage
### Web Interface
Simply upload an image and click "Generate Retro Art"! The model will:
- Detect faces (if any) and preserve facial features
- Analyze depth information from the image
- Apply the retro art style
- Maintain aspect ratio while optimizing resolution
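Aspect-ratio preservation usually amounts to snapping the input size to SDXL-friendly dimensions. The helper below is a hypothetical sketch of that idea (the function name and rounding rule are assumptions, not the code in `app.py`):

```python
def snap_to_sdxl_size(width: int, height: int, target_pixels: int = 1024 * 1024, multiple: int = 64):
    """Scale (width, height) toward ~1 megapixel while keeping the aspect ratio,
    rounding both sides to a multiple of 64 as SDXL expects."""
    scale = (target_pixels / (width * height)) ** 0.5
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h

# Example: a 3000x2000 photo maps to a 1280x832 generation size (ratio ~1.54 vs. 1.50).
print(snap_to_sdxl_size(3000, 2000))
```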
### API Usage
The Space exposes a full API. Here's how to use it:
```python
from gradio_client import Client, handle_file

client = Client("YOUR_USERNAME/YOUR_SPACE_NAME")

result = client.predict(
    image=handle_file("path/to/your/image.jpg"),  # wrap file inputs in handle_file()
    prompt="retro pixel art game, 16-bit style, vibrant colors",
    negative_prompt="blurry, low quality, modern",
    steps=30,
    guidance_scale=7.5,
    controlnet_scale=0.8,
    lora_scale=0.85,
    api_name="/predict",
)
print(result)  # path to the output image
```
### API with cURL
With Gradio 5, the REST API is a two-step call: POST the inputs to `/gradio_api/call/predict` to receive an `event_id`, then stream the result from the same path plus that id:

```bash
# Step 1: submit the job (image inputs are passed as an object, e.g. {"path": "<public URL>"})
curl -X POST "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space/gradio_api/call/predict" \
  -H "Content-Type: application/json" \
  -d '{
    "data": [
      {"path": "https://example.com/your-image.jpg"},
      "retro pixel art game, 16-bit style",
      "blurry, low quality",
      30,
      7.5,
      0.8,
      0.85
    ]
  }'
# Response: {"event_id": "..."}

# Step 2: stream the result
curl -N "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space/gradio_api/call/predict/EVENT_ID"
```
## ⚙️ Parameters
- Prompt: Describe the retro style you want
- Negative Prompt: What to avoid in the generation
- Inference Steps (20-50): More steps = better quality but slower
- Guidance Scale (1-15): How closely to follow the prompt
- ControlNet Scale (0-2): Strength of depth preservation
- LORA Scale (0-2): Strength of RetroArt style application
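As a rough guide to how these sliders typically map onto a diffusers call (the parameter names below are standard diffusers arguments and are an assumption about `app.py`, not a quote from it):

```python
def generate(pipe, depth_map, prompt, negative_prompt,
             steps=30, guidance_scale=7.5, controlnet_scale=0.8, lora_scale=0.85):
    """Map the UI sliders onto a standard SDXL + ControlNet diffusers call."""
    return pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        image=depth_map,                                 # ControlNet depth conditioning
        num_inference_steps=steps,                       # "Inference Steps"
        guidance_scale=guidance_scale,                   # "Guidance Scale"
        controlnet_conditioning_scale=controlnet_scale,  # "ControlNet Scale"
        cross_attention_kwargs={"scale": lora_scale},    # "LORA Scale"
    ).images[0]
```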
## 🎨 Tips for Best Results
- For Portraits: The system automatically detects faces and enhances preservation
- For Scenes: Use prompts like "retro game background, pixel art environment"
- For Characters: Try "16-bit game character, sprite art, detailed"
- Adjust LORA Scale: Lower (0.5-0.7) for subtle effect, higher (0.9-1.2) for strong retro look
## Technical Details
- Base Model: SDXL with custom "Horizon" checkpoint from primerz/pixagram
- Model Repository: primerz/pixagram
- Face Detection: Antelopev2 (InsightFace)
- Depth Estimation: DPT-Hybrid-MIDAS
- ControlNet: Zoe Depth SDXL
- VAE: Custom 8x pixelation VAE
- Optimization: xformers, model offloading, VAE slicing
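The sketch below shows how such a pipeline can be assembled with diffusers. It mirrors the components listed above but is not the literal contents of `app.py`; in particular it omits the InstantID face branch and the DPT depth preprocessing, which need additional code.

```python
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL
from huggingface_hub import hf_hub_download

MODEL_REPO = "primerz/pixagram"

# Download the custom weights from the Hub (cached after the first run).
checkpoint = hf_hub_download(MODEL_REPO, "horizon.safetensors")
vae_file = hf_hub_download(MODEL_REPO, "pixelate.safetensors")
lora_file = hf_hub_download(MODEL_REPO, "retroart.safetensors")

# Zoe-depth ControlNet for SDXL and the custom pixelation VAE.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-zoe-depth-sdxl-1.0", torch_dtype=torch.float16
)
vae = AutoencoderKL.from_single_file(vae_file, torch_dtype=torch.float16)

# Assemble the SDXL pipeline from the single-file "Horizon" checkpoint.
pipe = StableDiffusionXLControlNetPipeline.from_single_file(
    checkpoint, controlnet=controlnet, vae=vae, torch_dtype=torch.float16
)
pipe.load_lora_weights(lora_file)  # RetroArt LORA

# Memory optimizations mentioned above.
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()
pipe.enable_xformers_memory_efficient_attention()
```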
### Fallback Behavior
If models cannot be downloaded from the Hub:
- Checkpoint: Falls back to `stabilityai/stable-diffusion-xl-base-1.0`
- VAE: Falls back to `madebyollin/sdxl-vae-fp16-fix`
- LORA: Runs without LORA (style will be less retro)
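A typical implementation of this fallback is a try/except around the Hub download, along the lines of this sketch (VAE case shown; the checkpoint and LORA follow the same pattern):

```python
import torch
from diffusers import AutoencoderKL
from huggingface_hub import hf_hub_download

try:
    # Preferred path: the custom pixelation VAE from primerz/pixagram.
    vae_path = hf_hub_download(repo_id="primerz/pixagram", filename="pixelate.safetensors")
    vae = AutoencoderKL.from_single_file(vae_path, torch_dtype=torch.float16)
except Exception as err:
    # Fallback: the standard fp16-safe SDXL VAE.
    print(f"Custom VAE unavailable ({err}); falling back to madebyollin/sdxl-vae-fp16-fix")
    vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
```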
## Troubleshooting
### "Model download failed"
- Check internet connectivity in Space settings
- Verify the model repository (primerz/pixagram) is public
- Check Space logs for specific error messages
### Out of Memory
- Try reducing image resolution
- Lower inference steps
- Use a larger GPU (A10G or A100)
### Slow Generation
- First generation is always slower (model downloading + loading)
- Consider using a faster GPU tier
- Reduce inference steps to 20-25
### Models not loading
- Check Space logs for download errors
- Verify HuggingFace Hub access
- Ensure GPU is available
## License
MIT License - Feel free to use and modify!
## Credits
- SDXL by Stability AI
- ControlNet by Lvmin Zhang
- InsightFace for face analysis
- Diffusers library by HuggingFace
## Contributing
Issues and pull requests are welcome!
Note: This Space requires a GPU. The free tier may experience queuing during high usage.