Deployment Guide - Participatory Planning Application
Prerequisites
- Python 3.8+
- 2-4GB RAM (for AI model)
- ~2GB disk space (for model cache)
- Internet connection (first run only)
Option 1: Quick Local Network Demo (5 minutes)
Perfect for: Testing with colleagues on the same WiFi network
Steps:
1. Start the server (already configured):

```bash
cd /home/thadillo/MyProjects/participatory_planner
source venv/bin/activate
python run.py
```

2. Find your IP address:

```bash
# Linux/Mac
ip addr show | grep "inet " | grep -v 127.0.0.1
# Or check the Flask startup message for the IP
```

3. Access from other devices:
- Open a browser on any device on the same WiFi
- Go to: http://YOUR_IP:5000
- Admin login: <see-startup-logs-or-set-ADMIN_TOKEN>

4. Share the registration link:
- Give participants: http://YOUR_IP:5000/generate
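For the demo to be reachable from other devices, run.py must bind to all interfaces. A minimal sketch of what it likely does (the create_app factory name is an assumption about this repo's layout):

```python
# run.py - minimal sketch; create_app is an assumed factory name
from app import create_app

app = create_app()

if __name__ == "__main__":
    # 0.0.0.0 listens on all interfaces, so devices on the same
    # WiFi can reach http://YOUR_IP:5000
    app.run(host="0.0.0.0", port=5000, debug=True)
```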
Limitations:
- Only works on local network
- Stops when you close terminal
- Debug mode enabled (slower)
Option 2: Production Server with Gunicorn (Recommended)
Perfect for: Real deployments, VPS/cloud hosting
Steps:
1. Install Gunicorn:

```bash
source venv/bin/activate
pip install gunicorn==21.2.0
```

2. Update environment variables (.env):

```bash
# Already set with a secure key
FLASK_SECRET_KEY=8606a4f67a03c5579a6e73f47195549d446d1e55c9d41d783b36762fc4cd9d75
FLASK_ENV=production
```

3. Run with Gunicorn:

```bash
gunicorn --config gunicorn_config.py wsgi:app
```

4. Access: http://YOUR_SERVER_IP:8000
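The repo ships a gunicorn_config.py; for orientation, a config consistent with the port and timeout values mentioned elsewhere in this guide might look like this (values illustrative, not the repo's actual file):

```python
# gunicorn_config.py - illustrative sketch
import multiprocessing

bind = "0.0.0.0:8000"                         # matches http://YOUR_SERVER_IP:8000
workers = multiprocessing.cpu_count() * 2 + 1
timeout = 300                                 # generous timeout for slow AI analysis
accesslog = "-"                               # request logs to stdout
errorlog = "-"
```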
Run in background with systemd:
Create /etc/systemd/system/participatory-planner.service:
```ini
[Unit]
Description=Participatory Planning Application
After=network.target

[Service]
User=YOUR_USERNAME
WorkingDirectory=/home/thadillo/MyProjects/participatory_planner
Environment="PATH=/home/thadillo/MyProjects/participatory_planner/venv/bin"
ExecStart=/home/thadillo/MyProjects/participatory_planner/venv/bin/gunicorn --config gunicorn_config.py wsgi:app
Restart=always

[Install]
WantedBy=multi-user.target
```
Then:
```bash
sudo systemctl daemon-reload
sudo systemctl enable participatory-planner
sudo systemctl start participatory-planner
sudo systemctl status participatory-planner
```
Option 3: Docker Deployment (Easiest Production)
Perfect for: Clean deployments, easy updates, cloud platforms
Steps:
1. Install Docker (if not installed):

```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```

2. Build and run:

```bash
cd /home/thadillo/MyProjects/participatory_planner
docker-compose up -d
```

3. Access: http://YOUR_SERVER_IP:8000
Docker commands:
```bash
# View logs
docker-compose logs -f

# Stop
docker-compose down

# Restart
docker-compose restart

# Update after code changes
docker-compose up -d --build
```
Data persistence: The database and AI model are stored in Docker volumes, so they survive restarts
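For reference, a docker-compose.yml consistent with the commands above might look like the sketch below (service names, volume names, and mount paths are assumptions; check the repo's actual file):

```yaml
# docker-compose.yml - illustrative sketch; check the repo's actual file
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - FLASK_ENV=production
    volumes:
      - app_data:/app/instance                   # SQLite database
      - model_cache:/root/.cache/huggingface     # downloaded AI models

volumes:
  app_data:
  model_cache:
```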
Option 4: Hugging Face Spaces (Recommended for Public Access)
Perfect for: Public demos, academic projects, community engagement, free hosting
Why Hugging Face Spaces?
- ✅ Free hosting with generous limits (CPU, 16GB RAM, persistent storage)
- ✅ Zero-config HTTPS - automatic SSL certificates
- ✅ Docker support - already configured in this project
- ✅ Persistent storage - /data directory survives rebuilds
- ✅ Public URL - Share with stakeholders instantly
- ✅ Git-based deployment - Push to deploy
- ✅ Model caching - Hugging Face models download fast
Quick Deploy Steps
1. Create Hugging Face Account
- Go to huggingface.co and sign up (free)
- Verify your email
2. Create New Space
- Go to huggingface.co/spaces
- Click "Create new Space"
- Configure:
  - Space name: participatory-planner (or your choice)
  - License: MIT
  - SDK: Docker (important!)
  - Visibility: Public or Private
- Click "Create Space"
3. Deploy Your Code
Option A: Direct Git Push (Recommended)
```bash
cd /home/thadillo/MyProjects/participatory_planner

# Add Hugging Face remote (replace YOUR_USERNAME)
git remote add hf https://huggingface.co/spaces/YOUR_USERNAME/participatory-planner

# Push to deploy
git push hf main
```
Option B: Via Web Interface
- In your Space, click "Files" tab
- Upload all project files (drag and drop)
- Commit changes
4. Monitor Build
- Click "Logs" tab to watch Docker build
- First build takes ~5-10 minutes (downloads dependencies)
- Status changes to "Running" when ready
- Your app is live at:
https://huggingface.co/spaces/YOUR_USERNAME/participatory-planner
5. First-Time Setup
- Access your Space URL
- Login with admin token: <see-startup-logs-or-set-ADMIN_TOKEN> (change this!)
- Go to Registration → Create participant tokens
- Share registration link with stakeholders
- First AI analysis downloads BART model (~1.6GB, cached permanently)
Files Already Configured
This project includes everything needed for HF Spaces:
- ✅ Dockerfile - Docker configuration (port 7860, /data persistence)
- ✅ app_hf.py - Flask entry point for HF Spaces
- ✅ requirements.txt - Python dependencies
- ✅ .dockerignore - Excludes local data/models
- ✅ README.md - Displays on Space page
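For orientation, a Dockerfile matching the description above (port 7860, /data persistence) might look roughly like this sketch; the repo's actual file may differ:

```dockerfile
# Dockerfile - illustrative sketch; the repo's actual file may differ
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# /data persists across Space rebuilds; open permissions because
# HF Spaces runs the container as a non-root user
RUN mkdir -p /data && chmod 777 /data
ENV DATABASE_PATH=/data/app.db \
    HF_HOME=/data/.cache/huggingface

EXPOSE 7860
CMD ["python", "app_hf.py"]
```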
Environment Variables (Optional)
In your Space Settings tab, add:
```bash
SECRET_KEY=your-long-random-secret-key-here
FLASK_ENV=production
```
Generate secure key:
```bash
python -c "import secrets; print(secrets.token_hex(32))"
```
Data Persistence
Hugging Face Spaces provides /data directory:
- ✅ Database: Stored at /data/app.db (survives rebuilds)
- ✅ Model cache: Stored at /data/.cache/huggingface
- ✅ Fine-tuned models: Stored at /data/models/finetuned
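A sketch of how these paths are typically wired through environment variables (DATABASE_PATH and HF_HOME appear in the troubleshooting notes below; the exact config code is an assumption):

```python
# config sketch - uses the DATABASE_PATH and HF_HOME variables
# referenced elsewhere in this guide
import os

DB_PATH = os.environ.get("DATABASE_PATH", "instance/app.db")
SQLALCHEMY_DATABASE_URI = f"sqlite:///{DB_PATH}"

# Hugging Face libraries read HF_HOME to decide where to cache models
os.environ.setdefault("HF_HOME", "/data/.cache/huggingface")
```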
Backup/Restore:
- Use Admin → Session Management
- Export session data as JSON
- Import to restore on any deployment
Training Models on HF Spaces
CPU Training (free tier):
- Head-only training: Works well (<100 examples, 2-5 min)
- LoRA training: Slower on CPU (>100 examples, 10-20 min)
GPU Training (paid tiers):
- Upgrade Space to GPU for faster training
- Or train locally and import model files
Updating Your Deployment
```bash
# Make changes locally
git add .
git commit -m "Update: description"
git push hf main

# HF automatically rebuilds and redeploys
# Database and models persist across updates
```
Troubleshooting HF Spaces
Build fails?
- Check Logs tab for specific error
- Verify Dockerfile syntax
- Ensure all dependencies in requirements.txt
App won't start?
- Port must be 7860 (already configured)
- Check app_hf.py runs Flask on correct port
- Review Python errors in Logs
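For comparison, a minimal app_hf.py that satisfies these requirements might look like this (the create_app factory name is an assumption):

```python
# app_hf.py - minimal sketch; create_app is an assumed factory name
import os
from app import create_app

app = create_app()

if __name__ == "__main__":
    # HF Spaces routes traffic to port 7860
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 7860)))
```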
Database not persisting?
- Verify the /data directory is created in the Dockerfile
- Check the DATABASE_PATH environment variable
- Ensure permissions (777) on /data
Models not loading?
- First download takes time (~5 min for BART)
- Check HF_HOME environment variable
- Verify cache directory permissions
Out of memory?
- Reduce batch size in training config
- Use smaller model (distilbart-mnli-12-1)
- Consider GPU Space upgrade
Scaling on HF Spaces
Free Tier:
- CPU only
- ~16GB RAM
- ~50GB persistent storage
- Auto-sleep after inactivity (wakes on request)
Paid Tiers (for production):
- GPU access (A10G, A100)
- More RAM and storage
- No auto-sleep
- Custom domains
Security on HF Spaces
- Change the admin token from <see-startup-logs-or-set-ADMIN_TOKEN>: create a new admin token via the Flask shell or the UI (see the sketch after this list)
- Set a strong secret key via environment variables
- HTTPS is automatic - all HF Spaces use SSL by default
- Private Spaces - restrict access to specific users
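A hedged sketch of rotating the admin token from flask shell; the Token model and its fields are assumptions about this app's schema, so check app/models.py first:

```python
# flask shell - sketch only; Token model and fields are assumed
import secrets
from app import db
from app.models import Token  # hypothetical model name

new_token = secrets.token_urlsafe(16)
admin = Token.query.filter_by(role="admin").first()  # hypothetical field
admin.value = new_token                              # hypothetical column
db.session.commit()
print(f"New admin token: {new_token}")
```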
Monitoring
- Status: Space page shows Running/Building/Error
- Logs: Real-time application logs
- Analytics (public Spaces): View usage statistics
- Database size: Monitor via session export size
Cost Comparison
| Platform | Cost | CPU | RAM | Storage | HTTPS | Setup Time |
|---|---|---|---|---|---|---|
| HF Spaces (Free) | $0 | ✅ | 16GB | 50GB | ✅ | 10 min |
| HF Spaces (GPU) | ~$1/hr | ✅ GPU | 32GB | 100GB | ✅ | 10 min |
| DigitalOcean | $12/mo | ✅ | 2GB | 50GB | ✅ | 30 min |
| AWS EC2 | ~$15/mo | ✅ | 2GB | 20GB | ✅ | 45 min |
| Heroku | $7/mo | ✅ | 512MB | 1GB | ✅ | 20 min |
Winner for demos/academic use: Hugging Face Spaces (Free)
Post-Deployment Checklist
- Space builds successfully
- App accessible via public URL
- Admin login works (token: <see-startup-logs-or-set-ADMIN_TOKEN>)
- Changed default admin token
- Participant registration works
- Submission form functional
- AI analysis runs (first time slow, then cached)
- Database persists after rebuild
- Session export/import tested
- README displays on Space page
- Shared URL with stakeholders
Example Deployment
Live Example: See participatory-planner (replace with your Space)
Option 5: Other Cloud Platforms
A) DigitalOcean App Platform
- Push code to GitHub/GitLab
- Create new App on DigitalOcean
- Connect repository
- Configure:
  - Run Command: gunicorn --config gunicorn_config.py wsgi:app
  - Environment: Set FLASK_SECRET_KEY
  - Resources: 2GB RAM minimum
- Deploy!
B) Heroku
Create Procfile:
```
web: gunicorn --config gunicorn_config.py wsgi:app
```
Deploy:
```bash
heroku create participatory-planner
heroku config:set FLASK_SECRET_KEY=8606a4f67a03c5579a6e73f47195549d446d1e55c9d41d783b36762fc4cd9d75
git push heroku main
```
C) AWS EC2
- Launch Ubuntu instance (t3.medium or larger)
- SSH into server
- Clone repository
- Follow "Option 2: Gunicorn" steps above
- Configure security group: Allow port 8000
D) Google Cloud Run (Serverless)
```bash
gcloud run deploy participatory-planner \
  --source . \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --memory 2Gi
```
Adding HTTPS/SSL (Production Requirement)
Option A: Nginx Reverse Proxy
1. Install Nginx:

```bash
sudo apt install nginx certbot python3-certbot-nginx
```

2. Configure Nginx (/etc/nginx/sites-available/participatory-planner):

```nginx
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

3. Enable the site and get SSL:

```bash
sudo ln -s /etc/nginx/sites-available/participatory-planner /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
sudo certbot --nginx -d your-domain.com
```
Option B: Cloudflare Tunnel (Free HTTPS, no open ports)
- Install the cloudflared binary (see Cloudflare's installation docs for your OS)
- Login: cloudflared tunnel login
- Create tunnel: cloudflared tunnel create participatory-planner
- Route DNS: cloudflared tunnel route dns participatory-planner your-domain.com
- Run: cloudflared tunnel --url http://localhost:8000 run participatory-planner
Performance Optimization
For Large Sessions (100+ participants):
1. Increase Gunicorn workers (in gunicorn_config.py):

```python
workers = 4  # Or more, based on CPU cores
```

2. Add Redis caching:

```bash
pip install Flask-Caching redis
```

3. Move AI analysis to a background worker with Celery (sketched below):

```bash
pip install celery redis
```
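A minimal sketch of the Celery hand-off (module, task, and helper names are illustrative, not this repo's actual code):

```python
# tasks.py - illustrative sketch; names are not this repo's actual code
from celery import Celery

celery = Celery("planner",
                broker="redis://localhost:6379/0",
                backend="redis://localhost:6379/0")

@celery.task
def analyze_submissions(session_id):
    # Run the slow AI classification outside the web request
    from app.analysis import run_analysis  # hypothetical helper
    return run_analysis(session_id)

# In a view: analyze_submissions.delay(session_id) returns immediately,
# keeping Gunicorn workers free to serve requests.
```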
Monitoring & Maintenance
View Application Logs:
```bash
# Gunicorn (stdout)
tail -f /var/log/participatory-planner.log

# Docker
docker-compose logs -f

# Systemd
sudo journalctl -u participatory-planner -f
```
Backup Data:
```bash
# Export via admin UI (recommended)
# Or copy the database file
cp instance/app.db backups/app-$(date +%Y%m%d).db
```
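To automate this, a cron entry along these lines runs a nightly copy (paths assumed; note that % must be escaped in crontab):

```bash
# crontab -e - nightly 2 AM backup; adjust paths to your install
0 2 * * * cp /home/thadillo/MyProjects/participatory_planner/instance/app.db /home/thadillo/MyProjects/participatory_planner/backups/app-$(date +\%Y\%m\%d).db
```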
Update Application:
```bash
# Pull latest code
git pull

# Install dependencies
source venv/bin/activate
pip install -r requirements.txt

# Restart
sudo systemctl restart participatory-planner   # systemd
# OR
docker-compose up -d --build                   # Docker
```
Troubleshooting
Issue: AI model download fails
Solution: Ensure 2GB+ free disk space and internet connectivity
Issue: Port already in use
Solution: Change port in gunicorn_config.py or run.py
Issue: Workers timing out during analysis
Solution: Increase timeout in gunicorn_config.py:
```python
timeout = 300  # 5 minutes
```
Issue: Out of memory
Solution: Reduce Gunicorn workers or upgrade RAM (need 2GB minimum)
Security Checklist
- Secret key changed from default
- Debug mode OFF in production (FLASK_ENV=production)
- HTTPS enabled (SSL certificate)
- Firewall configured (only ports 80, 443, 22 open)
- Regular backups scheduled
- Strong admin token (changed from the default)
- Rate limiting added (optional, use Flask-Limiter; see the sketch below)
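For the optional rate limiting item, a minimal Flask-Limiter sketch (the /generate route matches this app's registration link; the wiring is illustrative):

```python
# Rate limiting sketch - assumes a Flask app object named `app`
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(get_remote_address, app=app,
                  default_limits=["200 per hour"])

@app.route("/generate")
@limiter.limit("10 per minute")   # throttle registration requests
def generate():
    return "registration page"
```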
Quick Reference
| Method | Best For | URL | Setup Time |
|---|---|---|---|
| Local Network | Testing/demo | http://LOCAL_IP:5000 | 1 min |
| Gunicorn | VPS/dedicated server | http://SERVER_IP:8000 | 10 min |
| Docker | Clean deployment | http://SERVER_IP:8000 | 5 min |
| Cloud Platform | Managed hosting | https://your-app.platform.com | 15 min |
Default Admin Token: <see-startup-logs-or-set-ADMIN_TOKEN> (β οΈ CHANGE IN PRODUCTION)
Support: Check logs first, then review error messages in browser console (F12)