Introduction

pgschema differentiates itself from other declarative tools by:
- Comprehensive Postgres Support: Handles virtually all schema-level database objects - tables, indexes, views, functions, procedures, triggers, policies, types, and more. Thoroughly tested against Postgres versions 14 through 17.
- Schema-Level Focus: Designed for real-world Postgres usage patterns, from single-schema applications to multi-tenant architectures.
- Terraform-Like Plan-Review-Apply Workflow: Generate a detailed migration plan before execution, review changes in human-readable, SQL, or JSON formats, then apply with confidence. No surprises in production.
- No Migration Table Needed: As a state-based tool, it determines what to change by comparing your schema files with the actual database state - no need to track migration history in a special table.
- Concurrent Change Detection: Built-in fingerprinting ensures the database hasn’t changed between planning and execution.
- Online DDL Support: Automatically uses PostgreSQL’s non-blocking strategies to minimize downtime during schema changes.
- Adaptive Transaction: Intelligently wraps migrations in transactions when possible, with automatic handling of operations like concurrent index creation that require special treatment.
- Modular Schema Organization: Supports multi-file schema definitions for better team collaboration and ownership.
- Dependency Management: Automatically resolves complex dependencies between database objects using topological sorting, ensuring operations execute in the correct order.
- No Shadow Database Required: Unlike other declarative tools, it works directly with your schema files and target database - no temporary databases, no extra infrastructure.
Comprehensive Postgres Support
pgschema supports a wide range of Postgres features, including:
- Tables, columns, and constraints (primary keys, foreign keys, unique constraints, check constraints)
- Indexes (including partial and expression indexes, and concurrent index creation)
- Views and materialized views
- Functions and stored procedures
- Custom types and domains
- Schemas and permissions
- Row-level security (RLS) policies
- Triggers and sequences
- Comments on database objects
Schema-Level Migration
Unlike many migration tools that operate at the database level, pgschema is designed to work at the schema level.
Why Schema-Level?
Single Schema Simplicity: Most Postgres applications use only the default public schema. For these cases, schema-level operations eliminate unnecessary complexity while providing all the functionality needed for effective schema management.
Multi-Tenant Architecture Support: For applications using multiple schemas, the predominant pattern is schema-per-tenant architecture - where each customer or tenant gets their own schema within the same database. This schema-level approach enables tenant schema reconciliation - ensuring all tenant schemas stay in sync with the canonical schema definition.
Schema-Agnostic Migrations
The tool intelligently handles schema qualifiers to create portable, schema-agnostic dumps and migrations:
- Strips schema qualifiers from table and index names (notice `Schema: -` in comments)
- Removes schema prefixes from object references (`users` instead of `tenant_123.users`)
- Preserves cross-schema type references where needed (`public.user_role` for shared types)
- Creates portable DDL that can be applied to any target schema
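For example, a dumped table might look like this sketch (the table and columns are hypothetical); note the unqualified table name and the schema-qualified shared type:

```sql
--
-- Name: users; Type: TABLE; Schema: -
--
CREATE TABLE users (
    id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    role  public.user_role NOT NULL,
    email text NOT NULL
);
```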
Declarative Workflow
1. Dump: Extract Current Schema
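A typical dump invocation might look like this sketch (connection flags are illustrative assumptions; consult `pgschema --help` for the exact options):

```bash
pgschema dump --host localhost --db myapp --user postgres > schema.sql
```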
2. Edit: Define Desired State
Edit the dumped schema file to reflect your desired changes. Instead of writing `ALTER TABLE` statements, you write `CREATE TABLE` definitions to specify the desired end state.
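For instance, to add a column you extend the table definition itself (hypothetical table):

```sql
CREATE TABLE users (
    id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email text NOT NULL,
    -- desired change: declare the new column instead of writing ALTER TABLE
    created_at timestamptz NOT NULL DEFAULT now()
);
```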
3. Plan: Preview Changes
Review the plan in any of three formats:
- Human Format
- JSON Format
- SQL Format
The human-readable format is perfect for reviewing changes during development.
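Generating a plan might look like this sketch (flag names are illustrative assumptions; see the CLI reference for exact options):

```bash
pgschema plan --host localhost --db myapp --user postgres --file schema.sql
```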
4. Apply: Execute Changes
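Applying might look like this sketch (flag names are again assumptions; the point is that apply executes a plan you already reviewed):

```bash
pgschema apply --host localhost --db myapp --user postgres --plan plan.json
```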
No Migration Table Needed
Traditional imperative migration tools maintain a migration history table (often called `migrations`, `schema_migrations`, or similar) to track which migration scripts have been executed. This approach has several limitations:
- History Tracking Overhead: Requires maintaining and syncing a special table across all environments
- Script Ordering Dependencies: Migration files must be carefully numbered and never modified once applied
- Drift Detection Gaps: Can’t detect manual schema changes made outside the migration tool
- State Reconstruction Complexity: Understanding current schema requires replaying all historical migrations
State-Based Approach
pgschema eliminates these complexities through a state-based approach. Instead of tracking which changes were applied, it determines what needs to change by:
- Reading Desired State: Parse your schema files to understand what you want
- Inspecting Current State: Query the database to understand what currently exists
- Computing Differences: Compare the two states to determine necessary changes
This yields several benefits:
- No special tables required - your database schema speaks for itself
- Manual changes detected - any drift from desired state is automatically identified
- Schema files are authoritative - they represent the complete truth, not just incremental changes
- Environment agnostic - the same schema files work regardless of how the database reached its current state
How It Handles Real-World Scenarios
Scenario 1: Emergency Hotfix. A fix applied manually to production is simply part of the database's current state: the next plan detects the drift, and you update the schema files (or re-plan) to reconcile - there is no migration history to repair.
Concurrent Change Detection
The separation between plan and apply phases creates a critical safety feature, but also introduces a potential risk: what if the database changes between planning and execution? This time window could lead to applying outdated or conflicting migrations.
The tool solves this with fingerprinting - a cryptographic mechanism that ensures the exact database state you planned against is the same state you’re applying changes to.
How Fingerprinting Works
During Plan Generation: When you run `pgschema plan`, it calculates a cryptographic fingerprint of the current database schema state and embeds it in the plan file.
During Apply: `pgschema apply` recalculates the current database fingerprint and compares it with the stored fingerprint from the plan. If the two differ, the apply is rejected.
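An illustrative sketch of a plan file with an embedded fingerprint (the field names and hash format are assumptions, not the exact layout):

```json
{
  "fingerprint": "9f2c4e8a1b7d3c5e...",
  "diffs": []
}
```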
Safety in Action
This fingerprinting prevents dangerous scenarios, such as applying a stale plan after a colleague's migration or an emergency hotfix has already changed the database.
Recover Fingerprint Mismatch
When a fingerprint mismatch occurs, the right approach is to reestablish the baseline with the current database state: re-run the plan against the database as it exists now, review the fresh plan, and apply that instead.
Online DDL Operations
The tool automatically uses PostgreSQL's non-blocking features to minimize downtime during schema changes, including concurrent index creation, NOT VALID constraint patterns, and safe NOT NULL additions. Example - Concurrent Index Creation: index operations automatically use `CREATE INDEX CONCURRENTLY` to avoid blocking table writes.
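The generated DDL might look like this sketch (index and table names are hypothetical, and the exact directive syntax may differ):

```sql
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
-- pgschema:wait
```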
The `pgschema:wait` directive blocks migration execution, polls the database to monitor progress, and automatically continues when operations complete.
For comprehensive details, see the Online DDL documentation.
Adaptive Transaction
The system automatically determines transaction boundaries based on the SQL operations in your migration plan. Each diff in the plan contains a `can_run_in_transaction` field that indicates whether that particular SQL can run inside a transaction.
A plan.json snippet might look like this sketch (the `can_run_in_transaction` field is as described above; the surrounding layout is an assumption):
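```json
{
  "diffs": [
    {
      "sql": "CREATE INDEX CONCURRENTLY idx_users_email ON users (email);",
      "can_run_in_transaction": false
    },
    {
      "sql": "ALTER TABLE users ADD COLUMN created_at timestamptz;",
      "can_run_in_transaction": true
    }
  ]
}
```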
Smart Transaction Wrapping
- If all diffs can run in a transaction: The entire plan is wrapped in a single transaction for maximum atomicity
- If any diff cannot run in a transaction: Each diff runs in its own transaction to maintain isolation
For example, `CREATE INDEX CONCURRENTLY` cannot run inside a transaction.
Modular Schema Organization
For large applications and teams, managing your database schema as a single monolithic file becomes unwieldy. The modular approach transforms your schema into an organized structure that enables better collaboration, clearer ownership, and easier maintenance.
Breaking Down the Monolith
Instead of a single large schema file, the tool can organize your database objects into logical, manageable files with the `--multi-file` option, as sketched below.
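A sketch of such a dump (`--multi-file` is the relevant option; the connection flags are assumptions):

```bash
pgschema dump --host localhost --db myapp --user postgres --multi-file
```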
The main.sql file serves as the entry point, containing Postgres `\i` (include) directives that reference individual component files.
For example, a dump might produce main.sql alongside component directories such as functions/ (e.g., functions/update_timestamp.sql), with an entry point like the sketch below (file names other than functions/update_timestamp.sql are hypothetical):
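```sql
-- main.sql (illustrative; a real dump lists one \i per object file)
\i types/custom.sql
\i functions/update_timestamp.sql
\i tables/users.sql
\i triggers/audit.sql
```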
Team Collaboration Benefits
This modular approach enables powerful team collaboration patterns. Granular Ownership: use GitHub's CODEOWNERS to assign different teams to different parts of your schema, as sketched below.
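A hypothetical CODEOWNERS sketch (paths and team names are assumptions):

```
/schema/tables/     @backend-team
/schema/functions/  @platform-team
/schema/policies/   @security-team
```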
Same Workflow, Better Organization
The declarative workflow remains identical: you're still defining desired state and letting the tool generate the migration plan.
Dependency Management
As schemas grow in complexity, manually ordering DDL statements becomes increasingly error-prone. A single schema file with hundreds of objects becomes unmanageable: you're constantly scrolling and reordering statements to satisfy dependencies. The modular schema approach introduces even greater challenges. With schema files spread across directories, tracking dependencies between objects becomes nearly impossible. A trigger in triggers/audit.sql might depend on a function in functions/timestamps.sql, which itself references types defined in types/custom.sql.
Database objects often depend on each other—triggers need functions, views reference tables, foreign keys require referenced tables to exist first. The tool automatically handles these dependencies using topological sorting to ensure operations execute in the correct order, regardless of how your schema files are organized.
Automatic Dependency Resolution
Consider a modular schema example where you're adding a table with a trigger that depends on a function (a sketch follows the list below). Regardless of how the files are organized, the generated migration ensures that:
- Tables are created before their indexes, constraints, and triggers
- Functions are created before triggers that use them
- Referenced tables exist before foreign key constraints
- Views are created after all their dependent tables and functions
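A minimal sketch of such a layout (file, table, and trigger names are hypothetical); even if the trigger file is read first, the function and table are created before the trigger that needs them:

```sql
-- functions/update_timestamp.sql
CREATE OR REPLACE FUNCTION update_timestamp()
RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- tables/projects.sql
CREATE TABLE projects (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name       text NOT NULL,
    updated_at timestamptz NOT NULL DEFAULT now()
);

-- triggers/projects.sql
CREATE TRIGGER projects_update_timestamp
    BEFORE UPDATE ON projects
    FOR EACH ROW EXECUTE FUNCTION update_timestamp();
```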
No Shadow Database Required
Unlike many declarative schema migration tools, this approach doesn't require a separate "shadow" or "dev" database to compute schema differences. Other tools typically create a temporary database, apply your desired schema to it, then compare this shadow database with your target database to generate migrations. This shadow database approach has several drawbacks:
- Counterintuitive Configuration: Most developers stumble on the additional shadow database flags and need to read documentation to understand what they're used for
- Complex Setup: Requires permissions to create temporary databases, plus cleanup once they're no longer needed
- Additional Infrastructure: Demands extra database resources, maintenance, and can cause inconsistencies with different Postgres versions or extensions
IR Normalization
This tool eliminates these complexities by working directly with your schema files and target database through an Intermediate Representation (IR) system:
1. Schema Parsing: Your desired schema files are parsed into a normalized IR.
2. Database Introspection: The current database state is read from the system catalogs (`pg_class`, `pg_attribute`, `pg_constraint`, etc.) and converted into the same normalized IR format.
3. Direct Comparison: The diff engine compares these two normalized IR structures directly, identifying what changes are needed without requiring any temporary databases.
4. Migration Generation: The differences are converted into properly ordered DDL statements that transform the current state into the desired state.
The IR normalization ensures that regardless of whether schema information comes from SQL parsing or database introspection, it’s represented consistently for accurate comparison - enabling precise migrations without the overhead and complexity of shadow databases.
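To make the idea concrete, here is a deliberately tiny, self-contained Go sketch of state-based diffing over a normalized representation. It is not pgschema's actual IR or API; real diffing covers every object type, column-level changes, and dependency ordering:

```go
package main

import "fmt"

// IR is a toy normalized representation: table name -> column names.
// pgschema's real IR is far richer (constraints, functions, policies, ...).
type IR map[string][]string

// diffIR compares current and desired state and emits migration steps.
func diffIR(current, desired IR) []string {
	var steps []string
	for table, cols := range desired {
		if _, exists := current[table]; !exists {
			steps = append(steps, fmt.Sprintf("CREATE TABLE %s (%d columns)", table, len(cols)))
		}
		// column-level diffs, ALTERs, and topological ordering elided
	}
	for table := range current {
		if _, wanted := desired[table]; !wanted {
			steps = append(steps, "DROP TABLE "+table)
		}
	}
	return steps
}

func main() {
	current := IR{"users": {"id", "email"}}                             // from introspection
	desired := IR{"users": {"id", "email"}, "projects": {"id", "name"}} // from schema files
	for _, step := range diffIR(current, desired) {
		fmt.Println(step)
	}
}
```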
Acknowledgements
pgschema wouldn't exist without these amazing open source projects:
- sqlc - For generating code to extract schema from target databases
- pg_query_go - Go bindings for libpg_query, enabling reliable Postgres SQL parsing. In fact, sqlc also uses pg_query_go under the hood! It’s the secret weapon in the Postgres ecosystem that allows tools to build near-perfect Postgres compatibility.
- testcontainers - Enabling comprehensive integration testing with real Postgres instances
- Terraform - For figuring out the right Plan-Review-Apply workflow that developers love
- pg-schema-diff - For its GitHub issues, which revealed real-world use cases, and its codebase, which rewarded study. For example, I knew sqlc beforehand, but seeing pg-schema-diff use sqlc to fetch schema information made everything click.
Ready to give it a try?
- Installation
- Join Discord or open GitHub Issues