Testing
Comprehensive testing guide for FastAPI applications with pytest, fixtures, and test organization
The FastLaunchAPI template includes a comprehensive testing framework built with pytest and FastAPI's testing utilities. This guide covers everything from basic test setup to advanced testing patterns for authentication, database operations, and API endpoints.
Overview
Testing is a critical component of any production-ready application. The FastLaunchAPI template provides a robust testing infrastructure that ensures your application remains reliable, maintainable, and bug-free as it grows.
What You'll Learn
This documentation will guide you through:
- Understanding the testing architecture and how different components work together
- Setting up and configuring the testing environment for your specific needs
- Writing comprehensive tests for authentication, database operations, and API endpoints
- Organizing tests effectively to maintain clarity and reduce maintenance overhead
- Adding tests to new routers you create as your application expands
- Using advanced testing patterns like mocking, fixtures, and database isolation
- Running and debugging tests with various configuration options
- Maintaining high code quality through coverage reports and best practices
Why This Testing Framework?
The testing setup in FastLaunchAPI is designed with real-world application needs in mind:
- 🔒 Isolation: Each test runs in complete isolation with its own database transaction
- ⚡ Speed: Optimized for fast execution with minimal setup overhead
- 🎯 Reliability: Consistent results with proper mocking of external dependencies
- 📈 Scalability: Easy to extend as your application grows
- 🛠️ Developer-Friendly: Clear patterns and comprehensive fixtures for common scenarios
Testing Philosophy
The template follows these core testing principles:
- Test Early, Test Often: Catch issues before they reach production
- Comprehensive Coverage: Test happy paths, edge cases, and error conditions
- Maintainable Tests: Clear, readable tests that serve as documentation
- Realistic Scenarios: Tests that reflect real-world usage patterns
- Fast Feedback: Quick test execution to support rapid development cycles
Testing Architecture
The testing system is designed with several key principles:
- Isolated Tests: Each test runs in its own database transaction that's rolled back after completion
- Fixtures: Reusable test data and configurations using pytest fixtures
- Mocked External Services: Email sending and other external dependencies are mocked
- Database Independence: Tests use an isolated test database to avoid conflicts
Core Testing Setup
Configuration (conftest.py)
The conftest.py file contains the foundational testing setup, including database configuration, fixtures, and global mocks:
import pytest
import asyncio
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import StaticPool
from unittest.mock import patch
import os
import sys
from pathlib import Path
# Add the project root to Python path
project_root = Path(__file__).parent.parent.parent
sys.path.insert(0, str(project_root))
from main import app
from app.db.database import get_db, Base
from app.routers.auth.models import User
from app.routers.auth.services import pwd_context
# Create test database
SQLALCHEMY_DATABASE_URL = os.getenv("TEST_DATABASE_URL", "sqlite:///./test.db")
engine = create_engine(
    SQLALCHEMY_DATABASE_URL,
    connect_args={"check_same_thread": False},  # applies to the default SQLite URL
    poolclass=StaticPool,
)
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
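The conftest is also where the global mocks live. A minimal sketch of an autouse fixture that mocks email sending for every test; it reuses the app.routers.auth.services.send_email target shown in the troubleshooting section below, so adjust it if your email helper lives elsewhere:
@pytest.fixture(autouse=True)
def mock_email_sending():
    """Mock outbound email for every test so no real mail is sent."""
    # Patch target assumed -- point it at wherever send_email is actually used
    with patch("app.routers.auth.services.send_email") as mock_send:
        mock_send.return_value = True
        yield mock_send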
Key Testing Components
🗄️ Database Isolation
Each test runs in its own transaction, ensuring complete isolation
🔧 Test Fixtures
Reusable test data and configuration for consistent testing
🎭 Mocked Services
External dependencies like email are mocked for reliable testing
📊 Test Client
FastAPI TestClient for making HTTP requests in tests
Essential Fixtures
Database Fixtures
@pytest.fixture(scope="session")
def test_db():
"""Create test database tables"""
Base.metadata.create_all(bind=engine)
yield
Base.metadata.drop_all(bind=engine)
@pytest.fixture
def db_session(test_db):
"""Create a fresh database session for each test"""
connection = engine.connect()
transaction = connection.begin()
session = TestingSessionLocal(bind=connection)
yield session
session.close()
transaction.rollback()
connection.close()
@pytest.fixture
def client(db_session):
"""Create test client with dependency override"""
def override_get_db():
try:
yield db_session
finally:
pass
app.dependency_overrides[get_db] = override_get_db
try:
with TestClient(app) as test_client:
yield test_client
finally:
app.dependency_overrides.clear()
The db_session fixture creates a fresh database session for each test, wrapped in a transaction that's rolled back after the test completes. This ensures complete isolation between tests.
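To see the rollback in action, consider this illustrative pair of tests (a sketch; the username is arbitrary). The row committed in the first test never leaks into the second:
def test_isolation_demo_commit(db_session):
    """Commit a row inside this test's transaction."""
    user = User(
        username="isolated",
        email="isolated@example.com",
        hashed_password=pwd_context.hash("password123"),
        is_verified=True,
    )
    db_session.add(user)
    db_session.commit()
    assert db_session.query(User).filter_by(username="isolated").first() is not None

def test_isolation_demo_clean_slate(db_session):
    """The commit above was rolled back, so the table is clean again."""
    assert db_session.query(User).filter_by(username="isolated").first() is None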
User Fixtures
The template includes several user fixtures for different testing scenarios:
@pytest.fixture
def test_user(db_session):
    """Create a verified test user"""
    hashed_password = pwd_context.hash("testpassword123")
    user = User(
        username="testuser",
        email="test@example.com",
        hashed_password=hashed_password,
        is_verified=True
    )
    db_session.add(user)
    db_session.commit()
    db_session.refresh(user)
    return user

@pytest.fixture
def unverified_user(db_session):
    """Create an unverified test user"""
    hashed_password = pwd_context.hash("testpassword123")
    user = User(
        username="unverified",
        email="unverified@example.com",
        hashed_password=hashed_password,
        is_verified=False
    )
    db_session.add(user)
    db_session.commit()
    db_session.refresh(user)
    return user

@pytest.fixture
def admin_user(db_session):
    """Create an admin user for testing"""
    hashed_password = pwd_context.hash("adminpassword123")
    user = User(
        username="admin",
        email="admin@example.com",
        hashed_password=hashed_password,
        is_verified=True,
        is_admin=True
    )
    db_session.add(user)
    db_session.commit()
    db_session.refresh(user)
    return user
Authentication Fixtures
@pytest.fixture
def auth_headers(client, test_user):
    """Get authentication headers for test user"""
    response = client.post(
        "/auth/token",
        data={
            "username": test_user.username,
            "password": "testpassword123"
        }
    )
    token = response.json()["access_token"]
    return {"Authorization": f"Bearer {token}"}
The auth_headers fixture automatically logs in the test user and returns the authorization headers needed for authenticated requests.
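Any test that needs an authenticated request simply accepts the fixture and forwards the headers, for example against the /auth/get-user endpoint used later in this guide:
def test_authenticated_request(client, auth_headers):
    """Pass auth_headers to any endpoint that requires a logged-in user."""
    response = client.get("/auth/get-user", headers=auth_headers)
    assert response.status_code == 200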
Test Organization
Router-Specific Tests
Tests are organized by router/feature, with comprehensive coverage for each endpoint:
class TestUserRegistration:
    """Test user registration endpoints"""

    def test_create_user_success(self, client):
        """Test successful user creation"""
        user_data = {
            "username": "newuser",
            "email": "newuser@example.com",
            "password": "securepassword123"
        }
        response = client.post("/auth/create-user", json=user_data)
        assert response.status_code == 201
        assert "User created" in response.json()["message"]

    def test_create_user_duplicate_username(self, client, test_user):
        """Test user creation with duplicate username"""
        user_data = {
            "username": test_user.username,
            "email": "different@example.com",
            "password": "securepassword123"
        }
        response = client.post("/auth/create-user", json=user_data)
        assert response.status_code == 400
        assert "Username already taken" in response.json()["detail"]
Test Categories
Unit Tests
Purpose: Test individual functions and methods in isolation
def test_password_hashing():
    """Test password hashing function"""
    password = "testpassword123"
    hashed = pwd_context.hash(password)
    assert pwd_context.verify(password, hashed)
    assert not pwd_context.verify("wrongpassword", hashed)
Integration Tests
Purpose: Test how different components work together
def test_user_registration_flow(client, db_session):
    """Test complete user registration flow"""
    # Create user
    user_data = {
        "username": "newuser",
        "email": "newuser@example.com",
        "password": "securepassword123"
    }
    response = client.post("/auth/create-user", json=user_data)
    assert response.status_code == 201

    # Verify user in database
    user = db_session.query(User).filter_by(username="newuser").first()
    assert user is not None
    assert user.email == "newuser@example.com"
    assert not user.is_verified  # Should be unverified initially
API Tests
Purpose: Test API endpoints and HTTP responses
def test_get_user_profile(client, auth_headers, test_user):
    """Test getting user profile"""
    response = client.get("/auth/get-user", headers=auth_headers)
    assert response.status_code == 200
    data = response.json()
    assert data["username"] == test_user.username
    assert data["email"] == test_user.email
Writing Tests for New Routers
When you add new routers to your FastLaunchAPI application, you'll need to create comprehensive tests to ensure they work correctly. This section walks you through the process of setting up tests for a new router.
Create Test Structure
First, create the directory structure and test file for your new router:
mkdir -p tests/routers/your_router
touch tests/routers/your_router/test_your_router.py
The test file should follow the naming convention test_[router_name].py to be automatically discovered by pytest.
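The commands above produce this layout (one possible arrangement; your project may nest things differently):
tests/
└── routers/
    └── your_router/
        └── test_your_router.py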
Set Up Test Dependencies
Add the necessary imports and dependencies to your test file:
import pytest
from fastapi import status
from unittest.mock import patch, Mock
# Import your router models and services
from app.routers.your_router.models import YourModel
from app.routers.your_router.services import your_service
from app.routers.your_router.schemas import YourSchema
Import only what you need for your tests. This keeps the test file clean and makes dependencies clear.
Create Router-Specific Fixtures
Define fixtures that create test data specific to your router:
@pytest.fixture
def sample_data(db_session):
    """Create sample data for your router tests"""
    data = YourModel(
        field1="value1",
        field2="value2",
        user_id=1,  # Link to test user if needed
        # ... other fields
    )
    db_session.add(data)
    db_session.commit()
    db_session.refresh(data)
    return data

@pytest.fixture
def multiple_items(db_session, test_user):
    """Create multiple test items for list/pagination tests"""
    items = []
    for i in range(5):
        item = YourModel(
            field1=f"value{i}",
            field2=f"description{i}",
            user_id=test_user.id
        )
        db_session.add(item)
        items.append(item)
    db_session.commit()
    for item in items:
        db_session.refresh(item)
    return items

@pytest.fixture
def mock_external_service():
    """Mock external service calls"""
    with patch('app.routers.your_router.services.external_service') as mock:
        mock.return_value = {"status": "success", "data": "test"}
        yield mock
Write Test Classes
Organize your tests into logical classes based on functionality:
class TestYourRouterCRUD:
    """Test CRUD operations for your router"""

    def test_create_item_success(self, client, auth_headers):
        """Test successful item creation"""
        data = {
            "field1": "value1",
            "field2": "value2"
        }
        response = client.post("/your-router/items", json=data, headers=auth_headers)
        assert response.status_code == status.HTTP_201_CREATED
        response_data = response.json()
        assert response_data["field1"] == "value1"
        assert response_data["field2"] == "value2"
        assert "id" in response_data

    def test_create_item_validation_error(self, client, auth_headers):
        """Test item creation with invalid data"""
        data = {
            "field1": "",  # Invalid empty field
            "field2": "value2"
        }
        response = client.post("/your-router/items", json=data, headers=auth_headers)
        assert response.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY

    def test_get_item_success(self, client, auth_headers, sample_data):
        """Test retrieving a specific item"""
        response = client.get(f"/your-router/items/{sample_data.id}", headers=auth_headers)
        assert response.status_code == status.HTTP_200_OK
        response_data = response.json()
        assert response_data["id"] == sample_data.id
        assert response_data["field1"] == sample_data.field1

    def test_get_item_not_found(self, client, auth_headers):
        """Test retrieving a non-existent item"""
        response = client.get("/your-router/items/999999", headers=auth_headers)
        assert response.status_code == status.HTTP_404_NOT_FOUND

    def test_update_item_success(self, client, auth_headers, sample_data):
        """Test updating an existing item"""
        update_data = {
            "field1": "updated_value"
        }
        response = client.patch(
            f"/your-router/items/{sample_data.id}",
            json=update_data,
            headers=auth_headers
        )
        assert response.status_code == status.HTTP_200_OK
        response_data = response.json()
        assert response_data["field1"] == "updated_value"

    def test_delete_item_success(self, client, auth_headers, sample_data):
        """Test deleting an item"""
        response = client.delete(
            f"/your-router/items/{sample_data.id}",
            headers=auth_headers
        )
        assert response.status_code == status.HTTP_204_NO_CONTENT

        # Verify item was deleted
        get_response = client.get(
            f"/your-router/items/{sample_data.id}",
            headers=auth_headers
        )
        assert get_response.status_code == status.HTTP_404_NOT_FOUND
class TestYourRouterAuth:
    """Test authentication and authorization requirements"""

    def test_unauthorized_access(self, client):
        """Test that endpoints require authentication"""
        response = client.get("/your-router/items")
        assert response.status_code == status.HTTP_401_UNAUTHORIZED

    def test_invalid_token(self, client):
        """Test access with invalid token"""
        headers = {"Authorization": "Bearer invalid_token"}
        response = client.get("/your-router/items", headers=headers)
        assert response.status_code == status.HTTP_401_UNAUTHORIZED

    def test_user_can_only_access_own_items(self, client, auth_headers, sample_data, db_session):
        """Test that users can only access their own items"""
        # Create another user's item
        from app.routers.auth.models import User
        other_user = User(
            username="otheruser",
            email="other@example.com",
            hashed_password="hashed",
            is_verified=True
        )
        db_session.add(other_user)
        db_session.commit()

        other_item = YourModel(
            field1="other_value",
            field2="other_description",
            user_id=other_user.id
        )
        db_session.add(other_item)
        db_session.commit()

        # Try to access other user's item
        response = client.get(f"/your-router/items/{other_item.id}", headers=auth_headers)
        assert response.status_code == status.HTTP_404_NOT_FOUND
class TestYourRouterList:
    """Test list endpoints with pagination and filtering"""

    def test_list_items_success(self, client, auth_headers, multiple_items):
        """Test listing items"""
        response = client.get("/your-router/items", headers=auth_headers)
        assert response.status_code == status.HTTP_200_OK
        response_data = response.json()
        assert len(response_data) == 5

    def test_list_items_pagination(self, client, auth_headers, multiple_items):
        """Test pagination in list endpoint"""
        response = client.get("/your-router/items?page=1&size=2", headers=auth_headers)
        assert response.status_code == status.HTTP_200_OK
        response_data = response.json()
        assert len(response_data) == 2

    def test_list_items_filtering(self, client, auth_headers, multiple_items):
        """Test filtering in list endpoint"""
        response = client.get("/your-router/items?field1=value1", headers=auth_headers)
        assert response.status_code == status.HTTP_200_OK
        response_data = response.json()
        assert len(response_data) == 1
        assert response_data[0]["field1"] == "value1"
class TestYourRouterExternalServices:
    """Test integration with external services"""

    def test_external_service_success(self, client, auth_headers, mock_external_service):
        """Test successful external service integration"""
        data = {"field1": "value1"}
        response = client.post("/your-router/external-action", json=data, headers=auth_headers)
        assert response.status_code == status.HTTP_200_OK
        mock_external_service.assert_called_once()

    def test_external_service_failure(self, client, auth_headers):
        """Test handling of external service failures"""
        with patch('app.routers.your_router.services.external_service') as mock:
            mock.side_effect = Exception("Service unavailable")
            data = {"field1": "value1"}
            response = client.post("/your-router/external-action", json=data, headers=auth_headers)
            assert response.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
Add Edge Case Tests
Include tests for edge cases and error scenarios:
class TestYourRouterEdgeCases:
    """Test edge cases and error scenarios"""

    def test_create_item_with_special_characters(self, client, auth_headers):
        """Test creating items with special characters"""
        data = {
            "field1": "Special: !@#$%^&*()",
            "field2": "Unicode: 你好世界"
        }
        response = client.post("/your-router/items", json=data, headers=auth_headers)
        assert response.status_code == status.HTTP_201_CREATED
        response_data = response.json()
        assert response_data["field1"] == "Special: !@#$%^&*()"
        assert response_data["field2"] == "Unicode: 你好世界"

    def test_create_item_with_max_length(self, client, auth_headers):
        """Test creating items with maximum allowed length"""
        max_length_value = "a" * 255  # Assuming max length is 255
        data = {
            "field1": max_length_value,
            "field2": "description"
        }
        response = client.post("/your-router/items", json=data, headers=auth_headers)
        assert response.status_code == status.HTTP_201_CREATED

    def test_create_item_exceeding_max_length(self, client, auth_headers):
        """Test creating items exceeding maximum length"""
        too_long_value = "a" * 256  # Exceeding max length
        data = {
            "field1": too_long_value,
            "field2": "description"
        }
        response = client.post("/your-router/items", json=data, headers=auth_headers)
        assert response.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY
Update Test Configuration
If your router requires specific test configuration, add it to your test file:
# At the top of your test file
pytestmark = pytest.mark.asyncio  # If using async tests

# Or add router-specific markers
pytestmark = [
    pytest.mark.asyncio,
    pytest.mark.slow,         # If tests are slow
    pytest.mark.integration,  # If tests require external services
]
After following these steps, you'll have comprehensive test coverage for your new router. Remember to run the tests frequently during development to catch issues early.
Always test both success and failure scenarios. Error handling is just as important as happy path functionality.
Testing Patterns
Database Testing
Always use the db_session fixture for database operations in tests. This ensures proper transaction isolation and cleanup.
def test_database_operation(db_session, test_user):
    """Test database operations with proper isolation"""
    # Create test data
    item = YourModel(user_id=test_user.id, name="Test Item")
    db_session.add(item)
    db_session.commit()

    # Test the operation
    retrieved_item = db_session.query(YourModel).filter_by(name="Test Item").first()
    assert retrieved_item is not None
    assert retrieved_item.user_id == test_user.id

    # No cleanup needed - transaction is rolled back automatically
Mocking External Services
def test_external_service_integration(client, auth_headers):
    """Test integration with external services"""
    with patch('app.routers.your_router.services.external_api_call') as mock_api:
        mock_api.return_value = {"status": "success", "data": "test"}
        response = client.post("/your-router/external-action", headers=auth_headers)
        assert response.status_code == 200
        mock_api.assert_called_once()
Error Handling Tests
def test_error_handling(client, auth_headers):
    """Test proper error handling"""
    # Test validation error
    invalid_data = {"field1": ""}  # Invalid empty value for a required field
    response = client.post("/your-router/items", json=invalid_data, headers=auth_headers)
    assert response.status_code == 422

    # Test not found error
    response = client.get("/your-router/items/999999", headers=auth_headers)
    assert response.status_code == 404
Running Tests
Basic Test Commands
# Run all tests
pytest
# Run with verbose output
pytest -v
# Run with coverage report
pytest --cov=app
# Run specific test file
pytest tests/test_auth.py
# Run specific test class
pytest tests/test_auth.py::TestUserRegistration
# Run specific test method
pytest tests/test_auth.py::TestUserRegistration::test_create_user_success
# Run tests matching a pattern
pytest -k "test_create"
# Run tests with specific markers
pytest -m "slow"
# Skip certain tests
pytest -k "not slow"
Test Configuration
Configure pytest behavior in pytest.ini:
[tool:pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts = -v --tb=short
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
    unit: marks tests as unit tests
Best Practices
Test Writing Guidelines
🎯 Test One Thing
Each test should focus on testing one specific behavior or scenario
📝 Descriptive Names
Use clear, descriptive test names that explain what is being tested
🔄 Arrange-Act-Assert
Structure tests with clear setup, action, and assertion phases
🚫 No Test Dependencies
Tests should be independent and not rely on the order of execution
Common Testing Patterns
def test_user_can_update_profile(client, auth_headers, test_user):
    """Test that authenticated users can update their profile"""
    # Arrange
    update_data = {
        "username": "updated_username",
        "email": "updated@example.com"
    }

    # Act
    response = client.patch("/auth/update-user", json=update_data, headers=auth_headers)

    # Assert
    assert response.status_code == 200
    updated_user = response.json()
    assert updated_user["username"] == "updated_username"
    assert updated_user["email"] == "updated@example.com"
Testing Edge Cases
def test_create_user_with_edge_cases(client):
    """Test user creation with various edge cases"""
    # Test with minimum valid data
    minimal_data = {
        "username": "a",
        "email": "a@b.co",
        "password": "password123"
    }
    response = client.post("/auth/create-user", json=minimal_data)
    assert response.status_code == 201

    # Test with maximum length data
    long_username = "a" * 50
    long_data = {
        "username": long_username,
        "email": "test@example.com",
        "password": "password123"
    }
    response = client.post("/auth/create-user", json=long_data)
    # Assert based on your validation rules
Troubleshooting Common Issues
Database Connection Issues
Problem: Tests fail with database connection errors
Solution: Ensure your test database URL is correctly configured:
# In conftest.py
SQLALCHEMY_DATABASE_URL = os.getenv("TEST_DATABASE_URL", "sqlite:///./test.db")
# Or use a separate test database
SQLALCHEMY_DATABASE_URL = "postgresql://user:pass@localhost/test_db"
Test Isolation Issues
Problem: Tests are affecting each other or failing inconsistently
Solution: Ensure proper use of database transactions:
@pytest.fixture
def db_session(test_db):
    """Properly isolated database session"""
    connection = engine.connect()
    transaction = connection.begin()
    session = TestingSessionLocal(bind=connection)
    yield session
    # Cleanup is crucial
    session.close()
    transaction.rollback()
    connection.close()
Mock Issues
Problem: Patched functions are ignored and the real implementation still runs
Solution: Use the correct patch target:
# Mock at the point of use, not where it's defined
with patch('app.routers.auth.services.send_email') as mock_send:
    # Not patch('app.email.sender.send_email')
    pass
Test Coverage and Quality
Coverage Reports
Generate detailed coverage reports:
# Generate coverage report
pytest --cov=app --cov-report=html
# View coverage in browser
open htmlcov/index.html
# Generate terminal coverage report
pytest --cov=app --cov-report=term-missing
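The --cov options come from the pytest-cov plugin; if pytest doesn't recognize them, install it with pip install pytest-cov.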
Coverage Goals
Aim for 80-90% code coverage for critical paths like authentication, payments, and core business logic. 100% coverage isn't always necessary or practical.
The testing framework in FastLaunchAPI provides a solid foundation for maintaining high code quality and catching issues early. By following these patterns and best practices, you can ensure your application remains reliable as it grows and evolves.