import os
from dotenv import load_dotenv
import google.generativeai as genai

load_dotenv()


def generate_flux_optimized():
    api_key = os.getenv('GOOGLE_API_KEY')
    if not api_key:
        raise RuntimeError("GOOGLE_API_KEY is not set; add it to your environment or .env file")
    genai.configure(api_key=api_key)
    model = genai.GenerativeModel('gemini-2.5-flash-preview-05-20')

    prompt = """
    Generate optimized Python code for running the FLUX.1-schnell diffusion model on Apple Silicon (MPS) hardware.

    Requirements:
    - Use FluxPipeline from the diffusers library
    - Model: "black-forest-labs/FLUX.1-schnell"
    - Target device: MPS (Apple Silicon)
    - Image size: 768x1360
    - Inference steps: 4
    - Prompt: "A cat holding a sign that says hello world"

    Apply these Apple Silicon optimizations:
    1. Use torch.bfloat16 (better than float16 for MPS)
    2. Enable attention slicing and VAE slicing for memory efficiency
    3. Use guidance_scale=0.0 for FLUX.1-schnell
    4. Add max_sequence_length=256 for memory optimization
    5. Include proper error handling
    6. Add torch.inference_mode() for speed

    Generate ONLY Python code without markdown formatting.
    """

    try:
        response = model.generate_content(prompt)
        code = response.text.strip()

        # Strip any markdown code fences the model may still emit
        if code.startswith('```python'):
            code = code[len('```python'):]
        if code.endswith('```'):
            code = code[:-3]
        code = code.strip()

        print("FLUX-Optimized Code for Apple Silicon:")
        print("=" * 50)
        print(code)
        print("=" * 50)
    except Exception as e:
        print(f"Error: {e}")


if __name__ == "__main__":
    generate_flux_optimized()