Database Management
Learn how to work with SQLAlchemy models, run migrations, and extend the database schema in FastLaunchAPI
Overview
The database architecture in FastLaunchAPI follows a modular, feature-driven approach where each component of your application manages its own database models. This design promotes maintainability, scalability, and makes it easy to add new features without disrupting existing functionality.
Key Features
SQLAlchemy ORM
Modern Python ORM with full async support and type hints
Alembic Migrations
Automatic schema migrations with version control
Modular Design
Models organized by feature for better maintainability
Connection Pooling
Optimized database connections for production use
Type Safety
Full type annotations with FastAPI dependency injection
Architecture Structure
backend/
├── app/
│   ├── db/
│   │   ├── database.py          # Core database configuration
│   │   └── __init__.py
│   └── routers/
│       ├── auth/
│       │   ├── models.py        # User authentication models
│       │   └── ...
│       ├── payments/
│       │   ├── models.py        # Payment-related models
│       │   └── ...
│       └── your_feature/
│           ├── models.py        # Your custom models
│           └── ...
├── alembic/
│   ├── env.py                   # Alembic configuration
│   ├── versions/                # Migration files
│   └── alembic.ini              # Alembic settings
└── docker-compose.yml           # Database container setup
Database Configuration
The core database configuration is centralized in `app/db/database.py`. Let's break down each component:
import os
from dotenv import load_dotenv
from fastapi import Depends
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from typing import Annotated
from sqlalchemy.orm import Session
# Load environment variables from .env file
load_dotenv()
# Get database URL from environment
SQLALCHEMY_DATABASE_URL = os.getenv("DATABASE_URL")
# Create SQLAlchemy engine
engine = create_engine(SQLALCHEMY_DATABASE_URL)
# Create SessionLocal class for database sessions
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# Create Base class for declarative models
Base = declarative_base()
def get_db():
    """
    Database dependency function that provides database sessions.
    Automatically handles session creation and cleanup.
    """
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
# Type annotation for FastAPI dependency injection
db_dependency = Annotated[Session, Depends(get_db)]
Component Explanations
- `load_dotenv()`: Loads environment variables from your `.env` file, making database credentials available to your application
- `SQLALCHEMY_DATABASE_URL`: Database connection string retrieved from environment variables for security
- `engine`: SQLAlchemy engine that manages database connections and connection pooling
- `SessionLocal`: Session factory that creates database sessions with specific configurations:
  - `autocommit=False`: Transactions must be explicitly committed
  - `autoflush=False`: Changes aren't automatically flushed to the database
  - `bind=engine`: Links sessions to our database engine
- `Base`: Declarative base class that all your models inherit from
- `get_db()`: Dependency function that provides database sessions to FastAPI routes with automatic cleanup
- `db_dependency`: Type annotation that combines the `Session` type with dependency injection for clean route definitions
Setting Up Your Database
Configure Environment Variables
Create a `.env` file in your project root with your database connection details:
# PostgreSQL (recommended for production)
DATABASE_URL=postgresql://username:password@localhost:5432/your_database
# SQLite (good for development and testing)
DATABASE_URL=sqlite:///./app.db
# MySQL (if you prefer MySQL)
DATABASE_URL=mysql://username:password@localhost:3306/your_database
Database URL Format: The URL follows the format `dialect://username:password@host:port/database_name`. For SQLite, the format is `sqlite:///path/to/database.db`.
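Hand-concatenating the URL breaks as soon as a password contains characters like `@` or `:`. If that bites you, SQLAlchemy's `URL.create` can build the string safely — a small sketch (the credential values here are illustrative):

```python
from sqlalchemy.engine import URL

# Build the connection URL programmatically instead of string concatenation;
# special characters in the password are percent-escaped when rendered.
url = URL.create(
    drivername="postgresql",
    username="username",
    password="p@ssword",  # illustrative value containing a special character
    host="localhost",
    port=5432,
    database="your_database",
)

print(url.render_as_string(hide_password=False))
```

The rendered string can be passed straight to `create_engine`, or exported as `DATABASE_URL`.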
Start Database Server (Docker)
If you're using the included Docker setup, start your PostgreSQL database:
# Start PostgreSQL container
docker-compose up -d postgres
# Verify the container is running
docker ps
The Docker configuration in `docker-compose.yml` sets up a PostgreSQL instance:
services:
  postgres:
    image: postgres                     # Official PostgreSQL image
    container_name: template-postgres   # Custom container name
    ports:
      - 5432:5432                       # Map port 5432 to host
    environment:
      POSTGRES_USER: postgres           # Database username
      POSTGRES_PASSWORD: postgres       # Database password
      POSTGRES_DB: template-db          # Initial database name
    volumes:
      - postgres-data:/var/lib/postgresql/data # Persist data
    restart: unless-stopped             # Auto-restart policy
Initialize Database Tables
The database tables are automatically created when you run your FastAPI application. In `main.py`, you'll see:
from app.routers.auth import models as auth_models
from app.routers.payments import models as payments_models
# Create all database tables
auth_models.Base.metadata.create_all(bind=engine)
payments_models.Base.metadata.create_all(bind=engine)
This ensures all your model tables exist when the application starts.
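If you want to confirm which tables `create_all` actually produced, SQLAlchemy's inspector reports what exists in the database. A self-contained sketch using an in-memory SQLite database and a minimal stand-in model (not the template's real `User`):

```python
from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    # Minimal stand-in for the template's real User model
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    username = Column(String, unique=True)

# In-memory SQLite keeps the example self-contained
engine = create_engine("sqlite://")
Base.metadata.create_all(bind=engine)

# The inspector reflects what actually exists in the database
print(inspect(engine).get_table_names())  # ['users']
```

The same `inspect(engine)` call works against your real engine, which is handy when debugging why a table seems to be missing.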
Working with Models
Understanding the User Model
Let's examine the built-in User model to understand the patterns and conventions used:
from sqlalchemy.orm import relationship
from ...db.database import Base
from sqlalchemy import Column, Integer, String, Boolean
class User(Base):
    """User model for authentication"""

    __tablename__ = "users"  # Explicit table name (plural convention)

    # Primary key with auto-increment
    id = Column(Integer, primary_key=True, index=True, autoincrement=True)

    # Unique user identifiers
    username = Column(String, unique=True)
    email = Column(String, unique=True)

    # Password storage (always hashed, never plain text)
    hashed_password = Column(String)

    # Email verification status
    is_verified = Column(Boolean, default=False)

    # OAuth integration fields
    is_oauth = Column(Boolean, default=False)
    google_sub = Column(String(100), nullable=True, unique=True, index=True)

    # Email verification and password reset tokens
    verification_token = Column(String, nullable=True)
    reset_token = Column(String, nullable=True)

    # Stripe payment integration
    customer_id = Column(String(255), nullable=True)
    plan_id = Column(Integer, nullable=True)
    subscription_id = Column(String(255), nullable=True)
    subscription_status = Column(String(64), nullable=True)
    subscription_last_renew = Column(String, nullable=True)
    subscription_next_renew = Column(String, nullable=True)
Model Best Practices Explained
- `__tablename__`: Explicitly define table names using plural nouns (`users`, `products`, `orders`)
- Primary Keys: Always use `id` as the primary key with auto-increment
- Indexes: Add `index=True` to frequently queried columns for performance
- Unique Constraints: Use `unique=True` for fields that must be unique across all records
- Nullable Fields: Use `nullable=True` for optional fields, `nullable=False` (the default) for required fields
- Default Values: Set sensible defaults with `default=value`
- String Lengths: Specify maximum lengths for VARCHAR columns: `String(100)`
Creating New Models
Create the Model File
Create a new model in your feature directory, for example `app/routers/blog/models.py`:
from sqlalchemy import Column, Integer, String, Boolean, DateTime, ForeignKey, Text
from sqlalchemy.orm import relationship
from datetime import datetime
from app.db.database import Base
class BlogPost(Base):
    """Blog post model for content management"""

    __tablename__ = "blog_posts"

    # Primary key
    id = Column(Integer, primary_key=True, index=True, autoincrement=True)

    # Content fields
    title = Column(String(200), nullable=False, index=True)
    slug = Column(String(200), nullable=False, unique=True, index=True)
    content = Column(Text, nullable=False)
    excerpt = Column(String(500), nullable=True)

    # Status and visibility
    is_published = Column(Boolean, default=False)
    is_featured = Column(Boolean, default=False)

    # Timestamps
    created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
    published_at = Column(DateTime, nullable=True)

    # Foreign key to User model
    author_id = Column(Integer, ForeignKey("users.id"), nullable=False)

    # Relationship to User model
    author = relationship("User", back_populates="blog_posts")

class BlogCategory(Base):
    """Blog category model for organizing posts"""

    __tablename__ = "blog_categories"

    id = Column(Integer, primary_key=True, index=True, autoincrement=True)
    name = Column(String(100), nullable=False, unique=True)
    slug = Column(String(100), nullable=False, unique=True, index=True)
    description = Column(Text, nullable=True)
    is_active = Column(Boolean, default=True)

    # Timestamps
    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
Key Points Explained:
- `Text`: Use for large text content (blog posts, descriptions)
- `DateTime`: Include timestamps for audit trails and sorting
- `ForeignKey`: Links to other tables using the `"table_name.column_name"` format
- `relationship`: Defines SQLAlchemy relationships for easy data access
- `onupdate`: Automatically updates the timestamp when a record is modified
Update the User Model
Because `BlogPost.author` declares `back_populates="blog_posts"`, SQLAlchemy requires the matching relationship to exist on the User model; add it there:
# In app/routers/auth/models.py
class User(Base):
    # ... existing fields ...

    # Add relationship back to blog posts
    blog_posts = relationship("BlogPost", back_populates="author")
This creates a bidirectional relationship where you can access `user.blog_posts` and `blog_post.author`.
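The round trip can be seen end to end in a self-contained sketch. These are minimal stand-ins for the template's models, run against in-memory SQLite:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    username = Column(String, unique=True)
    blog_posts = relationship("BlogPost", back_populates="author")

class BlogPost(Base):
    __tablename__ = "blog_posts"
    id = Column(Integer, primary_key=True)
    title = Column(String(200), nullable=False)
    author_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    author = relationship("User", back_populates="blog_posts")

engine = create_engine("sqlite://")
Base.metadata.create_all(bind=engine)
db = sessionmaker(bind=engine)()

alice = User(username="alice")
db.add(alice)
db.add(BlogPost(title="Hello", author=alice))
db.commit()

# Traverse the relationship from both sides
post = db.query(BlogPost).first()
print(post.author.username)                 # alice
print([p.title for p in alice.blog_posts])  # ['Hello']
```

Note that assigning `author=alice` was enough to populate `author_id`; SQLAlchemy keeps both sides of a `back_populates` pair in sync.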
Register Models in Main Application
Update your `main.py` to include the new models:
from app.routers.auth import models as auth_models
from app.routers.payments import models as payments_models
from app.routers.blog import models as blog_models # Add this import
# Create database tables for all models
auth_models.Base.metadata.create_all(bind=engine)
payments_models.Base.metadata.create_all(bind=engine)
blog_models.Base.metadata.create_all(bind=engine) # Add this line
Update Alembic Configuration
Modify `alembic/env.py` to include your new models in migrations:
# Import all model modules
from app.routers.auth import models as auth_models
from app.routers.payments import models as payments_models
from app.routers.blog import models as blog_models # Add this import
# All models share the same Base from app.db.database, so importing the
# modules above is enough to register their tables on the shared metadata:
target_metadata = [
    auth_models.Base.metadata,
]
Important: You must add every new model module to both `main.py` and `alembic/env.py` for proper table creation and migration support.
Database Migrations with Alembic
Alembic handles database schema changes through versioned migrations. Here's how the system works and how to use it effectively.
Understanding Alembic Configuration
The `alembic/env.py` file configures how Alembic discovers and manages your models:
import os
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context

# Alembic configuration object
config = context.config

# Setup logging from alembic.ini
if config.config_file_name is not None:
    fileConfig(config.config_file_name)

# Import all your model modules
from app.routers.auth import models as auth_models
from app.routers.payments import models as payments_models

# This tells Alembic about all your models
target_metadata = [auth_models.Base.metadata]

def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode - generates SQL without a database connection"""
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )
    with context.begin_transaction():
        context.run_migrations()

def run_migrations_online():
    """Run migrations in 'online' mode - executes against the live database"""
    # Get database URL from environment
    db_url = os.getenv('DATABASE_URL')
    if db_url:
        config.set_main_option("sqlalchemy.url", db_url)

    # Create engine with connection pooling
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
        )
        with context.begin_transaction():
            context.run_migrations()

# Run the appropriate migration mode
if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
Creating and Running Migrations
Generate Migration from Model Changes
After creating or modifying models, generate a migration file:
# Auto-generate migration with descriptive message
alembic revision --autogenerate -m "Add blog post and category models"
# For specific changes
alembic revision --autogenerate -m "Add email verification to users"
alembic revision --autogenerate -m "Add indexes to blog posts"
What `--autogenerate` does:
- Compares your current models with the database schema
- Generates the necessary SQL to sync the database with your models
- Creates a new migration file with `upgrade()` and `downgrade()` functions
Review Generated Migration
Always review the generated migration file in `alembic/versions/` before applying it. Here's an example:
"""Add blog post and category models
Revision ID: abc123def456
Revises: previous_revision
Create Date: 2024-01-15 10:30:00.000000
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic
revision = 'abc123def456'
down_revision = 'previous_revision'
branch_labels = None
depends_on = None
def upgrade():
    # Create blog_categories table
    op.create_table('blog_categories',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.String(length=100), nullable=False),
        sa.Column('slug', sa.String(length=100), nullable=False),
        sa.Column('description', sa.Text(), nullable=True),
        sa.Column('is_active', sa.Boolean(), nullable=True),
        sa.Column('created_at', sa.DateTime(), nullable=True),
        sa.Column('updated_at', sa.DateTime(), nullable=True),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('name'),
        sa.UniqueConstraint('slug')
    )
    # Create indexes
    op.create_index('ix_blog_categories_id', 'blog_categories', ['id'])
    op.create_index('ix_blog_categories_slug', 'blog_categories', ['slug'])

    # Create blog_posts table
    op.create_table('blog_posts',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('title', sa.String(length=200), nullable=False),
        sa.Column('slug', sa.String(length=200), nullable=False),
        sa.Column('content', sa.Text(), nullable=False),
        sa.Column('excerpt', sa.String(length=500), nullable=True),
        sa.Column('is_published', sa.Boolean(), nullable=True),
        sa.Column('is_featured', sa.Boolean(), nullable=True),
        sa.Column('created_at', sa.DateTime(), nullable=False),
        sa.Column('updated_at', sa.DateTime(), nullable=True),
        sa.Column('published_at', sa.DateTime(), nullable=True),
        sa.Column('author_id', sa.Integer(), nullable=False),
        sa.ForeignKeyConstraint(['author_id'], ['users.id']),
        sa.PrimaryKeyConstraint('id'),
        sa.UniqueConstraint('slug')
    )
    # Create indexes
    op.create_index('ix_blog_posts_id', 'blog_posts', ['id'])
    op.create_index('ix_blog_posts_slug', 'blog_posts', ['slug'])
    op.create_index('ix_blog_posts_title', 'blog_posts', ['title'])

def downgrade():
    # Drop tables in reverse order (due to foreign key constraints)
    op.drop_index('ix_blog_posts_title', table_name='blog_posts')
    op.drop_index('ix_blog_posts_slug', table_name='blog_posts')
    op.drop_index('ix_blog_posts_id', table_name='blog_posts')
    op.drop_table('blog_posts')
    op.drop_index('ix_blog_categories_slug', table_name='blog_categories')
    op.drop_index('ix_blog_categories_id', table_name='blog_categories')
    op.drop_table('blog_categories')
Review Checklist:
- Are all table names correct?
- Are column types appropriate for your data?
- Are indexes created for frequently queried columns?
- Are foreign key constraints properly defined?
- Does the downgrade function properly reverse all changes?
Apply Migration to Database
# Apply all pending migrations
alembic upgrade head
# Apply to specific revision
alembic upgrade abc123def456
# Apply one migration at a time
alembic upgrade +1
Migration Commands Explained:
- `head`: Latest migration (most common)
- `revision_id`: Specific migration by ID
- `+1`: One migration forward from current
- `-1`: One migration backward from current
Verify Migration Success
Check that your migration was applied successfully:
# Show current migration status
alembic current
# Show migration history
alembic history --verbose
# Show details of the head revision
alembic show head
Advanced Migration Operations
Manual Migration Creation
Sometimes you need to create migrations manually for complex changes:
# Create empty migration file
alembic revision -m "Add custom indexes and constraints"
Then edit the generated file to add your custom changes:
def upgrade():
    # Add a partial index (the postgresql_where clause is PostgreSQL-specific)
    op.create_index(
        'ix_blog_posts_published_date',
        'blog_posts',
        ['published_at', 'is_published'],
        postgresql_where=sa.text('is_published = true'),
    )

    # Add a check constraint
    op.create_check_constraint(
        'ck_blog_posts_published_date',
        'blog_posts',
        'published_at IS NULL OR published_at <= NOW()',
    )

def downgrade():
    op.drop_constraint('ck_blog_posts_published_date', 'blog_posts')
    op.drop_index('ix_blog_posts_published_date', table_name='blog_posts')
Rolling Back Migrations
# Rollback to previous migration
alembic downgrade -1
# Rollback to specific revision
alembic downgrade abc123def456
# Rollback all migrations (dangerous!)
alembic downgrade base
Production Warning: Always test rollbacks on a copy of your production data before applying them to live systems.
Using Database in Your Application
Database Operations in Routes
Here's how to use the database in your FastAPI routes:
from fastapi import Depends, HTTPException
from app.db.database import db_dependency
from app.routers.blog.models import BlogPost, BlogCategory
# You will also need your Pydantic schemas (BlogPostCreate, BlogPostUpdate),
# the User model, and your get_current_user dependency from your auth router.

@app.get("/blog/posts")
async def get_blog_posts(
    db: db_dependency,
    skip: int = 0,
    limit: int = 10,
    published_only: bool = True,
):
    """Get blog posts with pagination and filtering"""
    query = db.query(BlogPost)
    if published_only:
        query = query.filter(BlogPost.is_published == True)
    posts = query.offset(skip).limit(limit).all()
    return posts

@app.get("/blog/posts/{post_id}")
async def get_blog_post(post_id: int, db: db_dependency):
    """Get a single blog post with author information"""
    post = db.query(BlogPost).filter(BlogPost.id == post_id).first()
    if not post:
        raise HTTPException(status_code=404, detail="Post not found")
    return post

@app.post("/blog/posts")
async def create_blog_post(
    post_data: BlogPostCreate,
    db: db_dependency,
    current_user: User = Depends(get_current_user),
):
    """Create a new blog post"""
    db_post = BlogPost(
        title=post_data.title,
        slug=post_data.slug,
        content=post_data.content,
        excerpt=post_data.excerpt,
        author_id=current_user.id,
    )
    # Add to database
    db.add(db_post)
    db.commit()
    db.refresh(db_post)  # Refresh to get the generated ID
    return db_post

@app.put("/blog/posts/{post_id}")
async def update_blog_post(
    post_id: int,
    post_data: BlogPostUpdate,
    db: db_dependency,
    current_user: User = Depends(get_current_user),
):
    """Update an existing blog post"""
    post = db.query(BlogPost).filter(BlogPost.id == post_id).first()
    if not post:
        raise HTTPException(status_code=404, detail="Post not found")
    # Check ownership
    if post.author_id != current_user.id:
        raise HTTPException(status_code=403, detail="Not authorized")
    # Update only the fields that were provided
    for field, value in post_data.dict(exclude_unset=True).items():
        setattr(post, field, value)
    db.commit()
    db.refresh(post)
    return post

@app.delete("/blog/posts/{post_id}")
async def delete_blog_post(
    post_id: int,
    db: db_dependency,
    current_user: User = Depends(get_current_user),
):
    """Delete a blog post"""
    post = db.query(BlogPost).filter(BlogPost.id == post_id).first()
    if not post:
        raise HTTPException(status_code=404, detail="Post not found")
    if post.author_id != current_user.id:
        raise HTTPException(status_code=403, detail="Not authorized")
    db.delete(post)
    db.commit()
    return {"message": "Post deleted successfully"}
Advanced Query Examples
from sqlalchemy import func, and_, or_
from datetime import datetime, timedelta

# Count queries
post_count = db.query(func.count(BlogPost.id)).scalar()

# Joins with relationships
posts_with_authors = db.query(BlogPost).join(User).all()

# Complex filtering
recent_posts = db.query(BlogPost).filter(
    and_(
        BlogPost.is_published == True,
        BlogPost.created_at >= datetime.utcnow() - timedelta(days=7),
    )
).all()

# Ordering and limiting
popular_posts = db.query(BlogPost).filter(
    BlogPost.is_published == True
).order_by(BlogPost.created_at.desc()).limit(5).all()

# Aggregations
posts_by_author = db.query(
    User.username,
    func.count(BlogPost.id).label('post_count'),
).join(BlogPost).group_by(User.username).all()
Best Practices and Performance
Database Performance Tips
- Use Indexes Wisely

  # Add indexes to frequently queried columns
  slug = Column(String(100), index=True)
  created_at = Column(DateTime, index=True)

- Optimize Queries

  # Use specific columns instead of SELECT *
  usernames = db.query(User.username).all()

  # Use pagination for large datasets
  posts = db.query(BlogPost).offset(skip).limit(limit).all()

- Leverage Relationships

  # Use SQLAlchemy relationships for cleaner code
  # (joinedload comes from sqlalchemy.orm)
  post = db.query(BlogPost).options(joinedload(BlogPost.author)).first()
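The payoff of eager loading is avoiding the N+1 query pattern: with `joinedload`, the authors arrive in the same SELECT as the posts instead of one query per post. A self-contained sketch with minimal stand-in models on in-memory SQLite:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (
    declarative_base, joinedload, relationship, sessionmaker,
)

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    username = Column(String)
    blog_posts = relationship("BlogPost", back_populates="author")

class BlogPost(Base):
    __tablename__ = "blog_posts"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    author_id = Column(Integer, ForeignKey("users.id"))
    author = relationship("User", back_populates="blog_posts")

engine = create_engine("sqlite://")
Base.metadata.create_all(bind=engine)
db = sessionmaker(bind=engine)()

user = User(username="alice")
db.add_all([user, BlogPost(title="One", author=user),
            BlogPost(title="Two", author=user)])
db.commit()

# One JOINed SELECT loads posts and their authors together
posts = db.query(BlogPost).options(joinedload(BlogPost.author)).all()
authors = [p.author.username for p in posts]  # no extra queries issued here
```

Set `echo=True` on the engine to watch the difference: without `joinedload`, each `p.author` access triggers its own SELECT.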
Security Considerations
- Input Validation: Always validate user input before database operations
- SQL Injection Prevention: Use parameterized queries (SQLAlchemy handles this automatically)
- Access Control: Implement proper authorization checks
- Data Encryption: Store sensitive data encrypted when necessary
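ORM queries like `db.query(...).filter(...)` are parameterized automatically. The place injection creeps in is hand-written SQL; when you do drop to raw SQL, use `text()` with bound parameters rather than string interpolation. A self-contained sketch against in-memory SQLite:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

with engine.connect() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER, username TEXT)"))
    conn.execute(text("INSERT INTO users VALUES (1, 'alice')"))

    # NEVER build SQL like f"... WHERE username = '{user_input}'".
    # Bound parameters keep user input out of the SQL string entirely:
    user_input = "alice' OR '1'='1"  # a classic injection attempt
    row = conn.execute(
        text("SELECT id FROM users WHERE username = :name"),
        {"name": user_input},
    ).first()
    print(row)  # None - the attempted injection is treated as a literal string
```

The database receives `:name` as a parameter placeholder, so the malicious value can never change the query's structure.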
Development vs Production
Development:
- Use SQLite for quick local development
- Enable SQL logging for debugging
- Use smaller connection pools
Production:
- Use PostgreSQL or MySQL
- Configure connection pooling
- Enable query monitoring
- Set up database backups
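Connection pooling for production is configured on `create_engine`. A minimal sketch of a sensible starting point: the pool numbers are illustrative (tune them against your real concurrency), and SQLite stands in for your actual DATABASE_URL here.

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    "sqlite://",         # stand-in; use your real DATABASE_URL
    poolclass=QueuePool,
    pool_size=5,         # persistent connections kept open
    max_overflow=10,     # extra connections allowed under burst load
    pool_pre_ping=True,  # validate connections before use, dropping stale ones
    pool_recycle=1800,   # recycle connections older than 30 minutes
    echo=False,          # set True in development to log every SQL statement
)

with engine.connect() as conn:
    pass  # connection is checked out from the pool and returned on exit
```

`pool_pre_ping` and `pool_recycle` matter most with managed PostgreSQL/MySQL, where idle connections get dropped by the server or a load balancer.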
Performance Monitoring: Consider using tools like SQLAlchemy's built-in logging or database-specific monitoring tools to track query performance in production.
This comprehensive database system provides everything you need to build scalable, maintainable data layers while following modern Python and FastAPI best practices.