
Deployment Guide - Participatory Planning Application

Prerequisites

  • Python 3.8+
  • 2-4GB RAM (for AI model)
  • ~2GB disk space (for model cache)
  • Internet connection (first run only)

Option 1: Quick Local Network Demo (5 minutes)

Perfect for: Testing with colleagues on the same Wi-Fi network

Steps:

  1. Start the server (already configured):

    cd /home/thadillo/MyProjects/participatory_planner
    source venv/bin/activate
    python run.py
    
  2. Find your IP address:

    # Linux
    ip addr show | grep "inet " | grep -v 127.0.0.1
    
    # macOS
    ifconfig | grep "inet " | grep -v 127.0.0.1
    
    # Or check the Flask startup message for the IP
    
  3. Access from other devices:

    • Open a browser on any device on the same Wi-Fi network
    • Go to: http://YOUR_IP:5000
    • Admin login: <see-startup-logs-or-set-ADMIN_TOKEN>
  4. Share registration link:

    • Give participants: http://YOUR_IP:5000/generate

Limitations:

  • Only works on the local network
  • Stops when you close the terminal
  • Debug mode enabled (slower)

Option 2: Production Server with Gunicorn (Recommended)

Perfect for: Real deployments, VPS/cloud hosting

Steps:

  1. Install Gunicorn:

    source venv/bin/activate
    pip install gunicorn==21.2.0
    
  2. Update environment variables (.env):

    # Example value only - generate your own unique key for production
    FLASK_SECRET_KEY=8606a4f67a03c5579a6e73f47195549d446d1e55c9d41d783b36762fc4cd9d75
    FLASK_ENV=production
    
  3. Run with Gunicorn:

    gunicorn --config gunicorn_config.py wsgi:app
    
  4. Access: http://YOUR_SERVER_IP:8000
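The gunicorn_config.py referenced above isn't reproduced in this guide. As a point of reference, a minimal sketch of such a file might look like this (values are illustrative, not necessarily the project's actual settings):

```python
# gunicorn_config.py - illustrative sketch, not the project's actual file
import multiprocessing

bind = "0.0.0.0:8000"                          # port used throughout this guide
workers = multiprocessing.cpu_count() * 2 + 1  # common sizing rule of thumb
timeout = 300                                  # generous timeout for long AI analysis requests
accesslog = "-"                                # access log to stdout
errorlog = "-"                                 # error log to stdout
```

Gunicorn reads these module-level names directly when started with `--config gunicorn_config.py`.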

Run in background with systemd:

Create /etc/systemd/system/participatory-planner.service:

[Unit]
Description=Participatory Planning Application
After=network.target

[Service]
User=YOUR_USERNAME
WorkingDirectory=/home/thadillo/MyProjects/participatory_planner
Environment="PATH=/home/thadillo/MyProjects/participatory_planner/venv/bin"
ExecStart=/home/thadillo/MyProjects/participatory_planner/venv/bin/gunicorn --config gunicorn_config.py wsgi:app
Restart=always

[Install]
WantedBy=multi-user.target

Then:

sudo systemctl daemon-reload
sudo systemctl enable participatory-planner
sudo systemctl start participatory-planner
sudo systemctl status participatory-planner

Option 3: Docker Deployment (Easiest Production)

Perfect for: Clean deployments, easy updates, cloud platforms

Steps:

  1. Install Docker (if not installed):

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh
    
  2. Build and run:

    cd /home/thadillo/MyProjects/participatory_planner
    docker-compose up -d
    
  3. Access: http://YOUR_SERVER_IP:8000

Docker commands:

# View logs
docker-compose logs -f

# Stop
docker-compose down

# Restart
docker-compose restart

# Update after code changes
docker-compose up -d --build

Data persistence: Database and AI model are stored in volumes (survive restarts)
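In docker-compose terms, that persistence is typically expressed as named volume mounts along these lines (service name, volume names, and container paths here are examples, not necessarily this project's actual compose file):

```yaml
# Illustrative docker-compose excerpt - names and paths are examples only
services:
  app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - app-data:/app/instance                 # SQLite database
      - model-cache:/root/.cache/huggingface   # downloaded AI models

volumes:
  app-data:
  model-cache:
```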


Option 4: Hugging Face Spaces (Recommended for Public Access)

Perfect for: Public demos, academic projects, community engagement, free hosting

Why Hugging Face Spaces?

  • ✅ Free hosting with generous limits (CPU, 16GB RAM, persistent storage)
  • ✅ Zero-config HTTPS - automatic SSL certificates
  • ✅ Docker support - already configured in this project
  • ✅ Persistent storage - /data directory survives rebuilds
  • ✅ Public URL - Share with stakeholders instantly
  • ✅ Git-based deployment - Push to deploy
  • ✅ Model caching - Hugging Face models download fast

Quick Deploy Steps

1. Create a Hugging Face Account (free) at huggingface.co

2. Create New Space

  1. Go to huggingface.co/spaces
  2. Click "Create new Space"
  3. Configure:
    • Space name: participatory-planner (or your choice)
    • License: MIT
    • SDK: Docker (important!)
    • Visibility: Public or Private
  4. Click "Create Space"

3. Deploy Your Code

Option A: Direct Git Push (Recommended)

cd /home/thadillo/MyProjects/participatory_planner

# Add Hugging Face remote (replace YOUR_USERNAME)
git remote add hf https://huggingface.co/spaces/YOUR_USERNAME/participatory-planner

# Push to deploy
git push hf main

Option B: Via Web Interface

  1. In your Space, click "Files" tab
  2. Upload all project files (drag and drop)
  3. Commit changes

4. Monitor Build

  • Click "Logs" tab to watch Docker build
  • First build takes ~5-10 minutes (downloads dependencies)
  • Status changes to "Running" when ready
  • Your app is live at: https://huggingface.co/spaces/YOUR_USERNAME/participatory-planner

5. First-Time Setup

  1. Access your Space URL
  2. Login with admin token: <see-startup-logs-or-set-ADMIN_TOKEN> (change this!)
  3. Go to Registration → Create participant tokens
  4. Share registration link with stakeholders
  5. First AI analysis downloads BART model (~1.6GB, cached permanently)

Files Already Configured

This project includes everything needed for HF Spaces:

  • ✅ Dockerfile - Docker configuration (port 7860, /data persistence)
  • ✅ app_hf.py - Flask entry point for HF Spaces
  • ✅ requirements.txt - Python dependencies
  • ✅ .dockerignore - Excludes local data/models
  • ✅ README.md - Displays on Space page

Environment Variables (Optional)

In your Space Settings tab, add:

SECRET_KEY=your-long-random-secret-key-here
FLASK_ENV=production

Generate secure key:

python -c "import secrets; print(secrets.token_hex(32))"

Data Persistence

Hugging Face Spaces provides /data directory:

  • ✅ Database: Stored at /data/app.db (survives rebuilds)
  • ✅ Model cache: Stored at /data/.cache/huggingface
  • ✅ Fine-tuned models: Stored at /data/models/finetuned

Backup/Restore:

  1. Use Admin β†’ Session Management
  2. Export session data as JSON
  3. Import to restore on any deployment

Training Models on HF Spaces

CPU Training (free tier):

  • Head-only training: Works well (<100 examples, 2-5 min)
  • LoRA training: Slower on CPU (>100 examples, 10-20 min)

GPU Training (paid tiers):

  • Upgrade Space to GPU for faster training
  • Or train locally and import model files

Updating Your Deployment

# Make changes locally
git add .
git commit -m "Update: description"
git push hf main

# HF automatically rebuilds and redeploys
# Database and models persist across updates

Troubleshooting HF Spaces

Build fails?

  • Check Logs tab for specific error
  • Verify Dockerfile syntax
  • Ensure all dependencies in requirements.txt

App won't start?

  • Port must be 7860 (already configured)
  • Check app_hf.py runs Flask on correct port
  • Review Python errors in Logs
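For reference, the essential shape of a Spaces-compatible entry point is just a Flask app bound to 0.0.0.0 on port 7860. A minimal sketch (illustrative, not the project's actual app_hf.py):

```python
# Minimal sketch of a Hugging Face Spaces entry point - illustrative only
from flask import Flask

PORT = 7860  # HF Spaces routes external traffic to this port

app = Flask(__name__)

@app.route("/")
def index():
    return "participatory-planner is running"

def serve():
    # Binding to 0.0.0.0 is required inside the Spaces container;
    # app_hf.py would call this (or Gunicorn would serve `app` directly)
    app.run(host="0.0.0.0", port=PORT)
```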

Database not persisting?

  • Verify /data directory created in Dockerfile
  • Check DATABASE_PATH environment variable
  • Ensure permissions (777) on /data
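The relevant Dockerfile lines for persistence would look roughly like this (paths taken from this guide; the exact lines in the project's Dockerfile may differ):

```dockerfile
# Illustrative Dockerfile excerpt for /data persistence on HF Spaces
RUN mkdir -p /data && chmod 777 /data
ENV DATABASE_PATH=/data/app.db
ENV HF_HOME=/data/.cache/huggingface
```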

Models not loading?

  • First download takes time (~5 min for BART)
  • Check HF_HOME environment variable
  • Verify cache directory permissions

Out of memory?

  • Reduce batch size in training config
  • Use smaller model (distilbart-mnli-12-1)
  • Consider GPU Space upgrade

Scaling on HF Spaces

Free Tier:

  • CPU only
  • ~16GB RAM
  • ~50GB persistent storage
  • Auto-sleep after inactivity (wakes on request)

Paid Tiers (for production):

  • GPU access (A10G, A100)
  • More RAM and storage
  • No auto-sleep
  • Custom domains

Security on HF Spaces

  1. Change admin token from <see-startup-logs-or-set-ADMIN_TOKEN>:

    # Create new admin token via Flask shell or UI
    
  2. Set strong secret key via environment variables

  3. HTTPS automatic - All HF Spaces use SSL by default

  4. Private Spaces - Restrict access to specific users
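For step 1, a strong replacement token can be generated with Python's standard secrets module (how the token is then stored depends on the app's admin UI or models, which aren't shown here):

```python
# Generate a URL-safe, high-entropy random token suitable for admin access
import secrets

token = secrets.token_urlsafe(24)  # 24 random bytes -> ~32 characters of text
print(token)
```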

Monitoring

  • Status: Space page shows Running/Building/Error
  • Logs: Real-time application logs
  • Analytics (public Spaces): View usage statistics
  • Database size: Monitor via session export size

Cost Comparison

| Platform | Cost | CPU | RAM | Storage | HTTPS | Setup Time |
|----------|------|-----|-----|---------|-------|------------|
| HF Spaces (Free) | $0 | ✅ | 16GB | 50GB | ✅ | 10 min |
| HF Spaces (GPU) | ~$1/hr | ✅ GPU | 32GB | 100GB | ✅ | 10 min |
| DigitalOcean | $12/mo | ✅ | 2GB | 50GB | ❌ | 30 min |
| AWS EC2 | ~$15/mo | ✅ | 2GB | 20GB | ❌ | 45 min |
| Heroku | $7/mo | ✅ | 512MB | 1GB | ✅ | 20 min |

Winner for demos/academic use: Hugging Face Spaces (Free)

Post-Deployment Checklist

  • Space builds successfully
  • App accessible via public URL
  • Admin login works (default token from startup logs or ADMIN_TOKEN env)
  • Changed default admin token
  • Participant registration works
  • Submission form functional
  • AI analysis runs (first time slow, then cached)
  • Database persists after rebuild
  • Session export/import tested
  • README displays on Space page
  • Shared URL with stakeholders

Example Deployment

Live Example: See participatory-planner (replace with your Space)


Option 5: Other Cloud Platforms

A) DigitalOcean App Platform

  1. Push code to GitHub/GitLab
  2. Create new App on DigitalOcean
  3. Connect repository
  4. Configure:
    • Run Command: gunicorn --config gunicorn_config.py wsgi:app
    • Environment: Set FLASK_SECRET_KEY
    • Resources: 2GB RAM minimum
  5. Deploy!

B) Heroku

Create Procfile:

web: gunicorn --config gunicorn_config.py wsgi:app

Deploy:

heroku create participatory-planner
heroku config:set FLASK_SECRET_KEY=$(python -c "import secrets; print(secrets.token_hex(32))")
git push heroku main

C) AWS EC2

  1. Launch Ubuntu instance (t3.medium or larger)
  2. SSH into server
  3. Clone repository
  4. Follow "Option 2: Gunicorn" steps above
  5. Configure security group: Allow port 8000

D) Google Cloud Run (Serverless)

gcloud run deploy participatory-planner \
  --source . \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --memory 2Gi

Adding HTTPS/SSL (Production Requirement)

Option A: Nginx Reverse Proxy

  1. Install Nginx:

    sudo apt install nginx certbot python3-certbot-nginx
    
  2. Configure Nginx (/etc/nginx/sites-available/participatory-planner):

    server {
        listen 80;
        server_name your-domain.com;
    
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
    
  3. Enable and get SSL:

    sudo ln -s /etc/nginx/sites-available/participatory-planner /etc/nginx/sites-enabled/
    sudo nginx -t
    sudo systemctl reload nginx
    sudo certbot --nginx -d your-domain.com
    

Option B: Cloudflare Tunnel (Free HTTPS, no open ports)

  1. Install cloudflared (download from Cloudflare's developer docs for your OS)
  2. Login: cloudflared tunnel login
  3. Create tunnel: cloudflared tunnel create participatory-planner
  4. Route: cloudflared tunnel route dns participatory-planner your-domain.com
  5. Run: cloudflared tunnel --url http://localhost:8000 run participatory-planner

Performance Optimization

For Large Sessions (100+ participants):

  1. Increase Gunicorn workers (in gunicorn_config.py):

    workers = 4  # Or more based on CPU cores
    
  2. Add Redis caching:

    pip install Flask-Caching redis
    
  3. Move AI analysis to background (Celery):

    pip install celery redis
    

Monitoring & Maintenance

View Application Logs:

# Gunicorn (stdout)
tail -f /var/log/participatory-planner.log

# Docker
docker-compose logs -f

# Systemd
sudo journalctl -u participatory-planner -f

Backup Data:

# Export via admin UI (recommended)
# Or copy database file
cp instance/app.db backups/app-$(date +%Y%m%d).db

Update Application:

# Pull latest code
git pull

# Install dependencies
source venv/bin/activate
pip install -r requirements.txt

# Restart
sudo systemctl restart participatory-planner  # systemd
# OR
docker-compose up -d --build  # Docker

Troubleshooting

Issue: AI model download fails

Solution: Ensure 2GB+ free disk space and internet connectivity

Issue: Port already in use

Solution: Change port in gunicorn_config.py or run.py

Issue: Workers timing out during analysis

Solution: Increase timeout in gunicorn_config.py:

timeout = 300  # 5 minutes

Issue: Out of memory

Solution: Reduce Gunicorn workers or upgrade RAM (need 2GB minimum)


Security Checklist

  • Secret key changed from default
  • Debug mode OFF in production (FLASK_ENV=production)
  • HTTPS enabled (SSL certificate)
  • Firewall configured (only ports 80, 443, 22 open)
  • Regular backups scheduled
  • Strong admin token (changed from the default)
  • Rate limiting added (optional, use Flask-Limiter)

Quick Reference

| Method | Best For | URL | Setup Time |
|--------|----------|-----|------------|
| Local Network | Testing/demo | http://LOCAL_IP:5000 | 1 min |
| Gunicorn | VPS/dedicated server | http://SERVER_IP:8000 | 10 min |
| Docker | Clean deployment | http://SERVER_IP:8000 | 5 min |
| Cloud Platform | Managed hosting | https://your-app.platform.com | 15 min |

Default Admin Token: <see-startup-logs-or-set-ADMIN_TOKEN> (⚠️ CHANGE IN PRODUCTION)

Support: Check logs first, then review error messages in browser console (F12)