Deployment
Deploy your FastLaunchAPI application to production on your own VPS with Dokku, on serverless platforms like Vercel, or in containers with Docker Compose, with optional CI/CD via GitHub Actions.
Prerequisites
Before deploying, ensure you have a configured FastLaunchAPI application with environment variables properly set, database migrations ready, and all dependencies listed in requirements.txt.
Deployment Options
🖥️ VPS with Dokku
Deploy to your own VPS using Dokku for full control and scalability
☁️ Vercel Serverless
Deploy to Vercel for serverless FastAPI with automatic scaling
🔧 GitHub Actions
Automated deployment with CI/CD pipeline
🐳 Docker Compose
Container-based deployment for consistent environments
Dokku Deployment (VPS)
Dokku is a Docker-powered PaaS that helps you build and manage the lifecycle of applications on your VPS with git-based deployments.
Server Setup
Install Dokku
Install Dokku on your Ubuntu 22.04 VPS. For detailed installation instructions, see the official Dokku installation guide:
# Download and install Dokku
wget -NP . https://dokku.com/bootstrap.sh
sudo DOKKU_TAG=v0.34.8 bash bootstrap.sh
After installation, complete the setup from the command line by adding your SSH key and configuring your global domain; recent Dokku releases no longer ship the web installer.
Configure SSH Access
Add your public SSH key to Dokku for secure deployments. Learn more about SSH key management in Dokku:
# Add your public key to dokku
cat ~/.ssh/id_rsa.pub | ssh root@your-server "dokku ssh-keys:add admin"
You can add or remove keys at any time with the dokku ssh-keys commands.
Create Application
Create your FastLaunchAPI application on the server. See Dokku application management for more details:
# On your server
dokku apps:create your-app-name
Database and Services Setup
PostgreSQL Database
Install and configure PostgreSQL for your application. For advanced PostgreSQL configuration, refer to the Dokku PostgreSQL plugin documentation:
# Install PostgreSQL plugin
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git
# Create database
dokku postgres:create your-app-db
dokku postgres:link your-app-db your-app-name
The database URL will be automatically added to your application's environment variables.
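Once linked, your application can read that variable at startup. A minimal sketch of creating the connection from it, assuming a SQLAlchemy-based setup (module and variable names here are illustrative, not the template's actual database module):
# database.py (illustrative sketch)
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# DATABASE_URL is injected by `dokku postgres:link`
DATABASE_URL = os.environ["DATABASE_URL"]

engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(bind=engine, autoflush=False)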
Redis Cache
Set up Redis for caching and Celery task queue. Learn more about Redis configuration in Dokku:
# Install Redis plugin
sudo dokku plugin:install https://github.com/dokku/dokku-redis.git
# Create Redis instance
dokku redis:create your-app-redis
dokku redis:link your-app-redis your-app-name
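Linking Redis exposes a REDIS_URL environment variable to your application. A minimal sketch of how a Celery module such as the template's celery_setup might consume it (names and defaults are illustrative; the template's module may differ):
# celery_setup.py (illustrative sketch)
import os

from celery import Celery

# REDIS_URL is injected by `dokku redis:link`
REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")

celery_app = Celery(
    "worker",
    broker=REDIS_URL,   # task queue broker
    backend=REDIS_URL,  # result storage
)
celery_app.conf.task_default_queue = "default"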
Environment Variables
Configure all required environment variables. For comprehensive environment variable management, see Dokku configuration management:
# Set your environment variables
dokku config:set your-app-name SECRET_KEY="your-secret-key"
dokku config:set your-app-name DEBUG="false"
dokku config:set your-app-name CORS_ORIGINS="https://yourdomain.com"
dokku config:set your-app-name SENDGRID_API_KEY="your-sendgrid-key"
dokku config:set your-app-name STRIPE_SECRET_KEY="your-stripe-key"
dokku config:set your-app-name STRIPE_WEBHOOK_SECRET="your-webhook-secret"
dokku config:set your-app-name GROQ_API_KEY="your-groq-key"
Deployment Configuration Files
Your FastLaunchAPI template includes these essential deployment files:
Procfile - Defines process types:
web: uvicorn main:app --host 0.0.0.0 --port 5000
worker: celery -A celery_setup worker --pool=solo --loglevel=info --queues=default
beat: celery -A celery_setup.celery_app beat --loglevel=info
runtime.txt - Specifies Python version:
python-3.12.6
requirements.txt - All your Python dependencies including FastAPI, SQLAlchemy, Celery, and more
Deploy Your Application
# Add Dokku remote
git remote add dokku dokku@your-server:your-app-name
# Deploy to Dokku
git push dokku main:main
The template also includes automated deployment via GitHub Actions. Push to the production branch to trigger it:
# Push to production branch
git checkout production
git merge main
git push origin production
Post-Deployment Setup
Database Migrations
Run your database migrations to set up the database schema. For more about database migrations in Dokku, see running one-off commands:
dokku run your-app-name alembic upgrade head
Scale Workers
Configure Celery workers and beat scheduler for background tasks. Learn more about process scaling in Dokku:
dokku ps:scale your-app-name worker=1 beat=1
You can adjust the number of workers based on your application's needs.
Domain Configuration
Set up your custom domain for production access. For comprehensive domain management, see Dokku domain configuration:
dokku domains:add your-app-name yourdomain.com
SSL Certificate
Enable SSL with Let's Encrypt for secure HTTPS connections. For detailed SSL setup, refer to the Let's Encrypt plugin documentation:
# Install Let's Encrypt plugin
sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
# Configure SSL
dokku letsencrypt:enable your-app-name
The certificate will automatically renew before expiration.
GitHub Actions Deployment
The template includes a pre-configured GitHub Actions workflow for automated deployments to your Dokku server.
Workflow Configuration
GitHub Actions Workflow:
name: "deploy template.backend"
on:
push:
branches:
- production
jobs:
deploy:
runs-on: ubuntu-22.04
environment: production
steps:
- name: Cloning repo
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up SSH key
run: |
mkdir -p ~/.ssh
echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh-keyscan -t rsa your-server-ip >> ~/.ssh/known_hosts
- name: Push to dokku
uses: dokku/github-action@master
with:
git_remote_url: "ssh://dokku@your-server/your-app-name"
ssh_private_key: ${{ secrets.SSH_PRIVATE_KEY }}
branch: "production"
Setup Instructions
Repository Secrets
Configure the following secrets in your GitHub repository settings. Navigate to Settings > Secrets and variables > Actions:
SSH_PRIVATE_KEY: Your private SSH key for server access
For more details on GitHub Actions secrets, see GitHub's documentation on encrypted secrets.
Update Workflow
Update the workflow file with your server details in .github/workflows/deploy.yaml:
# Replace with your server IP
ssh-keyscan -t rsa YOUR_SERVER_IP >> ~/.ssh/known_hosts
# Replace with your app name
git_remote_url: "ssh://dokku@YOUR_SERVER/YOUR_APP_NAME"
Deploy
Push to the production branch to trigger automatic deployment:
git checkout production
git merge main
git push origin production
Vercel Deployment
While Vercel is primarily designed for frontend applications, you can deploy FastAPI applications using serverless functions with some limitations.
Configuration Setup
Install Vercel CLI
Install the Vercel CLI globally to manage deployments:
npm i -g vercel
For more CLI options, see the Vercel CLI documentation.
Create Vercel Configuration
Create a vercel.json file in your project root to configure the deployment:
{
  "version": 2,
  "builds": [
    {
      "src": "main.py",
      "use": "@vercel/python"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "main.py"
    }
  ]
}
Learn more about Vercel configuration.
Create API Handler
Create an api/index.py file for Vercel's serverless function structure:
from main import app
# Vercel expects the ASGI app to be named 'app'
app = app
Update Requirements
Ensure your requirements.txt is compatible with Vercel's Python runtime. Check Vercel's Python runtime documentation for supported packages.
Deploy to Vercel
Set Environment Variables
Configure your environment variables in Vercel. You can also manage these through the Vercel dashboard:
vercel env add DATABASE_URL
vercel env add SECRET_KEY
vercel env add REDIS_URL
vercel env add SENDGRID_API_KEY
vercel env add STRIPE_SECRET_KEY
# Add all other required environment variables
Vercel Limitations
Important Limitations:
- No persistent storage
- Function execution time limits (10 seconds on hobby, 15 minutes on pro)
- No background tasks (Celery workers won't work)
- Cold starts may affect performance
- Limited to stateless operations
Docker Compose Deployment
Docker Compose provides a complete containerized environment with all services including PostgreSQL, Redis, Celery workers, and monitoring tools.
Container Architecture
Your FastLaunchAPI Docker setup includes the following services:
🐘 PostgreSQL Database
Primary database with pgAdmin for management
🔴 Redis Cache
Cache and message broker for Celery tasks
⚙️ Celery Workers
Background task processing with beat scheduler
🌸 Flower Monitor
Web-based monitoring for Celery tasks
💳 Stripe CLI
Local webhook testing for payments
Prerequisites
Install Docker & Docker Compose
Ensure Docker and Docker Compose are installed on your system:
# Check Docker installation
docker --version
docker-compose --version
# Or using Docker Compose V2
docker compose version
For installation instructions, visit the official Docker documentation.
Project Structure
Your project should have the following structure:
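The exact tree depends on your template version; a representative layout, inferred from the configuration files referenced in this guide (Procfile and runtime.txt placement may vary):
.
├── docker-compose.yml
├── Dockerfile
├── Procfile
├── runtime.txt
├── vercel.json
├── .env
├── .github/
│   └── workflows/
│       └── deploy.yaml
└── backend/
    ├── main.py
    ├── celery_setup.py
    ├── alembic/
    └── requirements.txt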
Configuration Files
Docker Compose Configuration
The docker-compose.yml defines all your application services:
########################################################
# FastLaunchAPI Docker Compose Configuration
# Services: PostgreSQL, Redis, Celery, Flower, Stripe CLI
########################################################
services:
  postgres:
    image: postgres:15
    container_name: template-postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: template-db
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 10s
      retries: 3

  pgadmin:
    image: dpage/pgadmin4
    container_name: template-pgadmin
    depends_on:
      - postgres
    ports:
      - "5555:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: test@gmail.com
      PGADMIN_DEFAULT_PASSWORD: admin
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    container_name: template-redis
    restart: unless-stopped
    ports:
      - "6379:6379"
    command: redis-server --requirepass yourpassword --save 20 1
    volumes:
      - redis:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3

  app:
    container_name: template-app
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /app/backend/
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    depends_on:
      - redis
      - postgres
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@postgres:5432/template-db
      - REDIS_URL=redis://:yourpassword@redis:6379
      - SECRET_KEY=your-secret-key-change-in-production
      - DEBUG=true
    restart: unless-stopped

  celery_worker:
    container_name: template-celery-worker
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /app/backend/
    command: celery -A celery_setup worker --pool=solo --loglevel=info
    volumes:
      - .:/app
    depends_on:
      - redis
      - postgres
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@postgres:5432/template-db
      - REDIS_URL=redis://:yourpassword@redis:6379
    restart: unless-stopped

  celery_beat:
    container_name: template-celery-beat
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /app/backend/
    command: celery -A celery_setup.celery_app beat --loglevel=info
    volumes:
      - .:/app
    depends_on:
      - redis
      - postgres
      - celery_worker
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@postgres:5432/template-db
      - REDIS_URL=redis://:yourpassword@redis:6379
    restart: unless-stopped

  flower:
    image: mher/flower
    container_name: template-flower
    ports:
      - "5556:5556"
    environment:
      - CELERY_BROKER_URL=redis://:yourpassword@redis:6379
      - FLOWER_PORT=5556
    depends_on:
      - postgres
      - redis
      - celery_worker
    restart: unless-stopped

  stripe-cli:
    image: stripe/stripe-cli
    container_name: template-stripe-cli
    command: listen --api-key YOUR_STRIPE_SECRET_KEY --forward-to app:8000/payments/webhook
    depends_on:
      - app
    restart: unless-stopped

volumes:
  postgres-data:
  redis:
Dockerfile Configuration
The Dockerfile builds your FastAPI application:
FROM python:3.12.4-slim
# Set working directory
WORKDIR /app
# Install system dependencies (curl is required by the HEALTHCHECK below)
RUN apt-get update && apt-get install -y \
    gcc \
    curl \
    && rm -rf /var/lib/apt/lists/*
# Copy requirements and install Python dependencies
COPY backend/requirements.txt /app/backend/requirements.txt
RUN pip install --upgrade pip && \
pip install -r /app/backend/requirements.txt
# Copy application code
COPY . /app
# Create non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8000/health || exit 1
# Default command
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Deployment Process
Environment Setup
Create a .env file for environment variables (optional but recommended):
# Database Configuration
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=template-db
# Redis Configuration
REDIS_PASSWORD=yourpassword
# Application Configuration
SECRET_KEY=your-secret-key-change-in-production
DEBUG=true
DATABASE_URL=postgresql://postgres:postgres@postgres:5432/template-db
REDIS_URL=redis://:yourpassword@redis:6379
# External Services
SENDGRID_API_KEY=your-sendgrid-key
STRIPE_SECRET_KEY=your-stripe-secret-key
STRIPE_WEBHOOK_SECRET=your-webhook-secret
GROQ_API_KEY=your-groq-key
Build and Start Services
Build and start all services with Docker Compose:
# Build and start all services
docker-compose up --build -d
# Or using Docker Compose V2
docker compose up --build -d
This will start all services in the background (the -d flag enables detached mode).
Run Database Migrations
Execute database migrations once the containers are running:
# Run migrations
docker-compose exec app alembic upgrade head
# Or create initial migration if needed
docker-compose exec app alembic revision --autogenerate -m "Initial migration"
Verify Services
Check that all services are running correctly:
# Check service status
docker-compose ps
# View logs for all services
docker-compose logs
# View logs for specific service
docker-compose logs app
docker-compose logs postgres
docker-compose logs redis
Service Access Points
Once deployed, you can access the following services:
🚀 FastAPI Application
http://localhost:8000 - Main application API
📚 API Documentation
http://localhost:8000/docs - Interactive API docs
🐘 pgAdmin
http://localhost:5555 - Database management interface
🌸 Flower
http://localhost:5556 - Celery task monitoring
Management Commands
# Start services
docker-compose up -d
# Stop services
docker-compose down
# Restart specific service
docker-compose restart app
# View service logs
docker-compose logs -f app
# Run database migrations
docker-compose exec app alembic upgrade head
# Access application shell
docker-compose exec app python
# Access database shell
docker-compose exec postgres psql -U postgres -d template-db
# Access Redis CLI
docker-compose exec redis redis-cli -a yourpassword
# Scale Celery workers
docker-compose up -d --scale celery_worker=3
# View resource usage
docker stats
# Remove unused containers and volumes
docker system prune -a
Production Considerations
For production deployment, make these important changes to your Docker Compose configuration:
Security Configuration
Update security settings for production:
# Remove or change default passwords
environment:
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  REDIS_PASSWORD: ${REDIS_PASSWORD}
  SECRET_KEY: ${SECRET_KEY}
  DEBUG: false

# Use Docker secrets for sensitive data
secrets:
  postgres_password:
    file: ./secrets/postgres_password.txt
Performance Optimization
Optimize for production performance:
# Add resource limits
deploy:
  resources:
    limits:
      cpus: "2.0"
      memory: 2G
    reservations:
      cpus: "1.0"
      memory: 1G

# Enable production settings
environment:
  WORKERS: 4
  MAX_CONNECTIONS: 100
External Services
For production, consider using managed services:
# Use external database URL
environment:
  DATABASE_URL: ${DATABASE_URL} # Managed PostgreSQL
  REDIS_URL: ${REDIS_URL} # Managed Redis
Troubleshooting Docker Issues
Common Docker Issues:
- Port conflicts (change exposed ports)
- Volume permission issues (check user permissions)
- Network connectivity problems
- Out of disk space (clean unused containers/images)
- Memory constraints (increase Docker memory limits)
# Check specific service logs
docker-compose logs app
docker-compose logs postgres
docker-compose logs redis
# Follow logs in real-time
docker-compose logs -f --tail=100 app
# Check Docker system information
docker system df
docker system events
# Stop and remove all containers
docker-compose down --volumes --rmi all
# Clean up Docker system
docker system prune -a --volumes
# Rebuild from scratch
docker-compose build --no-cache
docker-compose up -d
# Check Docker networks
docker network ls
# Inspect Docker Compose network
docker network inspect $(docker-compose ps -q | head -1 | xargs docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}')
# Test service connectivity
docker-compose exec app ping postgres
docker-compose exec app ping redis
Docker Compose Benefits
🔄 Consistency
Identical environments across development, staging, and production
🚀 Quick Setup
One command to start your entire application stack
🔧 Easy Scaling
Scale individual services independently
🔒 Isolation
Services run in isolated containers with controlled networking
Environment Variables
Ensure all required environment variables are configured in your deployment platform.
Required Variables
# Database
DATABASE_URL=postgresql://user:password@host:port/database
# Redis
REDIS_URL=redis://host:port/db
# Application
SECRET_KEY=your-secret-key
DEBUG=false
CORS_ORIGINS=https://yourdomain.com
# Email (SendGrid)
SENDGRID_API_KEY=your-sendgrid-api-key
# Stripe Payments
STRIPE_SECRET_KEY=your-stripe-secret-key
STRIPE_WEBHOOK_SECRET=your-webhook-secret
# AI Integration
GROQ_API_KEY=your-groq-api-key
# OAuth
GOOGLE_CLIENT_ID=your-google-client-id
GOOGLE_CLIENT_SECRET=your-google-client-secret
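How these variables are loaded depends on the template's configuration module; a minimal sketch using pydantic-settings with the variable names above (field and module names here are illustrative, not the template's actual config):
# config.py (illustrative sketch, assuming pydantic-settings is installed)
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # .env is an optional local fallback; real deployments set env vars directly
    model_config = SettingsConfigDict(env_file=".env")

    # field names map to the environment variables above (case-insensitive)
    database_url: str
    redis_url: str
    secret_key: str
    debug: bool = False
    cors_origins: str = ""
    sendgrid_api_key: str = ""
    stripe_secret_key: str = ""
    stripe_webhook_secret: str = ""
    groq_api_key: str = ""
    google_client_id: str = ""
    google_client_secret: str = ""

settings = Settings()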
Monitoring and Maintenance
Health Checks
Your application includes built-in health check endpoints for monitoring.
@app.get("/health")
async def health_check():
return {"status": "healthy", "timestamp": datetime.utcnow()}
Logging Configuration
Configure proper logging for production:
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
Database Migrations
# Run migrations on Dokku
dokku run your-app-name alembic upgrade head
# Run migrations with Docker Compose
docker-compose exec app alembic upgrade head
# Run migrations manually
alembic upgrade head
Troubleshooting
Common Issues
Database Connection Errors:
- Verify DATABASE_URL is correctly set
- Ensure the database server is accessible
- Check firewall settings and network connectivity

Missing Dependencies:
- Verify all packages are in requirements.txt
- Check for version conflicts
- Ensure Python version compatibility

Environment Variable Issues:
- Ensure all required variables are set
- Check that variable names match exactly
- Verify sensitive values are not exposed
Debug Commands
# Check application logs
dokku logs your-app-name
# Check configuration
dokku config your-app-name
# Restart application
dokku ps:restart your-app-name
# Check running processes
dokku ps:report your-app-name
# Check deployment logs
docker-compose logs app
# Check environment variables
docker-compose exec app env
# Redeploy
docker-compose up --build -d
# Check deployment logs
vercel logs your-deployment-url
# Check environment variables
vercel env ls
# Redeploy
vercel --prod --force
Security Considerations
Production Security Checklist:
- Use HTTPS in production
- Set a strong SECRET_KEY
- Configure CORS properly
- Use environment variables for sensitive data
- Apply regular security updates
- Monitor application logs
- Implement rate limiting
- Use database connection pooling
Security Best Practices
HTTPS Configuration
Always use HTTPS in production. For Dokku deployments, Let's Encrypt provides free SSL certificates:
# For Dokku with Let's Encrypt
dokku letsencrypt:enable your-app-name
Learn more about SSL/TLS configuration in Dokku.
Environment Security
Never commit sensitive data to version control. Use environment variables for all sensitive configuration:
# Always use environment variables
SECRET_KEY=your-secret-key
STRIPE_SECRET_KEY=your-stripe-key
Database Security
Use proper database security practices, including secure connection strings and connection pooling. Note that pool sizing is configured on the database engine rather than in the URL (see the sketch at the end of this step):
# Require TLS for database connections
DATABASE_URL=postgresql://user:password@host:port/database?sslmode=require
For production database security, consider using connection pooling with PgBouncer.
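With SQLAlchemy, pool settings are typically passed to the engine rather than embedded in the URL. A hedged sketch, with illustrative starting values:
# database.py (illustrative pooling sketch)
import os

from sqlalchemy import create_engine

engine = create_engine(
    os.environ["DATABASE_URL"],
    pool_size=20,        # persistent connections kept in the pool
    max_overflow=10,     # extra connections allowed under burst load
    pool_pre_ping=True,  # detect and replace stale connections
    pool_recycle=1800,   # recycle connections every 30 minutes
)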
Performance Optimization
Performance Tips:
- Use Redis for caching
- Configure database connection pooling
- Implement proper error handling
- Use a CDN for static assets
- Monitor application performance
- Enable compression
- Optimize database queries
Optimization Strategies
🔄 Redis Caching
Implement Redis for session storage and API response caching (see the sketch after this list)
🗄️ Database Optimization
Use connection pooling and optimize SQL queries
📊 Monitoring
Set up application monitoring and performance tracking
⚡ CDN Integration
Use CDN for static assets and media files
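A minimal cache-aside sketch with redis-py, assuming the Redis instance from the Docker Compose setup above (the key naming and the loader function are hypothetical):
# caching.py (illustrative cache-aside sketch using redis-py)
import json

import redis

cache = redis.Redis.from_url(
    "redis://:yourpassword@localhost:6379", decode_responses=True
)

def load_profile_from_db(user_id: int) -> dict:
    # hypothetical stand-in for a real database query
    return {"id": user_id, "name": "example"}

def get_profile(user_id: int) -> dict:
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit
    profile = load_profile_from_db(user_id)
    cache.set(key, json.dumps(profile), ex=300)  # cache for 5 minutes
    return profile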
This deployment guide provides multiple options for deploying your FastLaunchAPI application. Choose the deployment method that best fits your needs, infrastructure requirements, and scaling goals.