# Direct CLI demo with Manus and asciinema Source: https://www.pgschema.com/blog/demo-with-manus-and-asciinema Using Manus AI agent to script and record a polished CLI demo with asciinema, complete with human-like typing and proper pauses. ## Background When demonstrating CLI tools like pgschema, [asciinema](https://asciinema.org) is a good way to record the demo. I started by creating the recording manually. However, even though I had prepared and practiced, the manual recording still included typos and pauses. Not a smooth experience. Then it occurred to me that I could use AI to script the demo. ## Process [Manus](https://manus.im) is a general AI agent. I have been using it for various research tasks. I watch it spin up browsers, click links, and extract information. Scripting a terminal demo is a natural extension of its capabilities. ### 1st Recording: Too fast I started by giving Manus this instruction. ```text theme={null} Read https://www.pgschema.com/. Use asciinema to record a cast to demonstrate. Dump, edit, plan, apply workflow ``` Since pgschema is new, I needed to explicitly provide the URL. On the other hand, I assumed asciinema is well-known and doesn't need further introduction. It was doing the right thing, but too fast—the demo finished in the blink of an eye. Well, it's AI after all. ### 2nd Recording: Add delays I then gently asked Manus to simulate human typing behavior: ```text theme={null} You are doing it too fast. Please simulate human typing behavior. This way the cast is more realistic and people can follow along. ``` There were some problems along the way—for example, at one point it didn't generate the cast file as instructed at all. After some additional tweaking, it was finally on track. ### 3rd Recording: Polish There were still some rough edges. 
In particular, the demo added a `CREATE TABLE` change, but to showcase the declarative approach, it would be better to demonstrate an `ALTER TABLE` change—essentially comparing two `CREATE TABLE` statements and generating the `ALTER TABLE` migration. This is my prompt: ```text theme={null} This is too short, we still need to: 1. After dumping, show the initial schema 2. Show exactly what we change in the schema 3. Apply, then confirm with a small pause so it's visible to the viewer 4. Dump and show the schema after the change Also, to show the declarative approach, we should add a column instead of creating a table. Creating a table can't demonstrate the difference of the declarative approach. ``` ### Found a Bug Now Manus generated a near-perfect demo with: 1. Clear workflow with all requested steps 2. Proper pauses 3. Color highlighting 4. Readable text The imperfect part wasn't caused by Manus but by my own pgschema. ![\_](https://raw.githubusercontent.com/pgplex/pgschema/main/docs/images/blog/demo-with-manus-and-asciinema/bad-newline.webp) The generated dump didn't end with a newline. This was obvious from the playback. ### Final Recording After I fixed the bug, I asked Manus to generate the final version. ```text theme={null} This pgschema version has a bug where the dump doesn't end with a newline. I have fixed the issue. Please force pull the latest pgschema and rerun the same script. ``` This time, Manus delivered a clean demo as shown below. ## Reflection I have made the entire Manus session public, and you can check it [here](https://manus.im/share/8fEln1OzxpnsRSU1PnHweG?replay=1). Manus didn't just execute commands—it interpreted my intent, made mistakes, course-corrected, and even helped me discover a bug in my own software.
Fittingly, the first feature request for pgschema was to make it more AI coding friendly: ![\_](https://raw.githubusercontent.com/pgplex/pgschema/main/docs/images/blog/demo-with-manus-and-asciinema/ai-feature-request.webp) An AI-built codebase, demoed by an AI agent, for an audience that will likely include developers working alongside AI. Perhaps this is just a glimpse of how software development is evolving. # pgschema: Postgres Declarative Schema Migration, like Terraform Source: https://www.pgschema.com/blog/pgschema-postgres-declarative-schema-migration-like-terraform Introducing pgschema, a declarative schema migration tool for PostgreSQL with Terraform-style plan-review-apply workflow, no migration table, and comprehensive support for all schema objects. ## Introduction pgschema logo While database migrations have traditionally been imperative (writing step-by-step migration scripts), declarative solutions are gaining traction. `pgschema` differentiates itself from other declarative tools by: * **Comprehensive Postgres Support**: Handles virtually all schema-level database objects - tables, indexes, views, functions, procedures, triggers, policies, types, and more. Thoroughly tested against Postgres versions 14 through 17. * **Schema-Level Focus**: Designed for real-world Postgres usage patterns, from single-schema applications to multi-tenant architectures. * **Terraform-Like Plan-Review-Apply Workflow**: Generate a detailed migration plan before execution, review changes in human-readable, SQL, or JSON formats, then apply with confidence. No surprises in production. * **No Migration Table Needed**: As a state-based tool, it determines what to change by comparing your schema files with the actual database state - no need to track migration history in a special table. * **Concurrent Change Detection**: Built-in fingerprinting ensures the database hasn't changed between planning and execution. 
* **Online DDL Support**: Automatically uses PostgreSQL's non-blocking strategies to minimize downtime during schema changes. * **Adaptive Transaction**: Intelligently wraps migrations in transactions when possible, with automatic handling of operations like concurrent index creation that require special treatment. * **Modular Schema Organization**: Supports multi-file schema definitions for better team collaboration and ownership. * **Dependency Management**: Automatically resolves complex dependencies between database objects using topological sorting, ensuring operations execute in the correct order. * **No Shadow Database Required**: Unlike other declarative tools, it works directly with your schema files and target database - no temporary databases, no extra infrastructure. ## Comprehensive Postgres Support `pgschema` supports a wide range of [Postgres features](/syntax/create_table), including: * Tables, columns, and constraints (primary keys, foreign keys, unique constraints, check constraints) * Indexes (including partial, functional, and concurrent indexes) * Views and materialized views * Functions and stored procedures * Custom types and domains * Schemas and permissions * Row-level security (RLS) policies * Triggers and sequences * Comments on database objects The tool is thoroughly tested against [Postgres versions 14 through 17](https://github.com/pgplex/pgschema/blob/a41ffe8616f430ba36e6b39982c4455632a0dfa7/.github/workflows/release.yml#L44-L47) with [extensive test suites](https://github.com/pgplex/pgschema/tree/main/testdata). ## Schema-Level Migration Unlike many migration tools that operate at the database level, `pgschema` is designed to work at the **schema level**. ### Why Schema-Level? **Single Schema Simplicity**: Most Postgres applications use only the default `public` schema. For these cases, schema-level operations eliminate unnecessary complexity while providing all the functionality needed for effective schema management. 
**Multi-Tenant Architecture Support**: For applications using multiple schemas, the predominant pattern is **schema-per-tenant architecture** - where each customer or tenant gets their own schema within the same database. This schema-level approach enables [tenant schema reconciliation](/workflow/tenant-schema) - ensuring all tenant schemas stay in sync with the canonical schema definition. ### Schema-Agnostic Migrations The tool intelligently handles schema qualifiers to create portable, schema-agnostic dumps and migrations: ```bash theme={null} # Dump from 'tenant_123' schema pgschema dump \ --host localhost \ --user postgres \ --db myapp \ --schema tenant_123 \ > schema.sql # Apply the same schema definition to 'tenant_456' pgschema plan \ --host localhost \ --user postgres \ --db myapp \ --schema tenant_456 \ --file schema.sql ``` Here's what a schema-level dump looks like - notice how it creates **schema-agnostic SQL**: ```sql theme={null} -- -- pgschema database dump -- -- -- Name: users; Type: TABLE; Schema: -; Owner: - -- CREATE TABLE IF NOT EXISTS users ( id SERIAL PRIMARY KEY, username varchar(100) NOT NULL, email varchar(100) NOT NULL, role public.user_role DEFAULT 'user', status public.status DEFAULT 'active', created_at timestamp DEFAULT now() ); -- -- Name: posts; Type: TABLE; Schema: -; Owner: - -- CREATE TABLE IF NOT EXISTS posts ( id SERIAL PRIMARY KEY, user_id integer REFERENCES users(id), title varchar(200) NOT NULL, content text, created_at timestamp DEFAULT now() ); ``` The tool automatically: * **Strips schema qualifiers** from table and index names (notice `Schema: -` in comments) * **Removes schema prefixes** from object references (`users` instead of `tenant_123.users`) * **Preserves cross-schema type references** where needed (`public.user_role` for shared types) * **Creates portable DDL** that can be applied to any target schema ## Declarative Workflow This workflow mirrors Terraform's approach and fits [GitOps](/workflow/gitops) nicely: 
### 1. Dump: Extract Current Schema ```bash theme={null} pgschema dump \ --host localhost \ --user postgres \ --db myapp \ > schema.sql ``` This extracts your current database schema into a clean, readable SQL file that represents your current state. ### 2. Edit: Define Desired State Edit the dumped schema file to reflect your desired changes. Instead of writing `ALTER TABLE` statements, you write `CREATE TABLE` definitions to specify the desired end state. ### 3. Plan: Preview Changes ```bash theme={null} pgschema plan \ --host localhost \ --user postgres \ --db myapp \ --file schema.sql \ --output-human \ --output-json plan.json \ --output-sql plan.sql ``` This compares your desired schema (from the file) with the current database state and generates a migration plan. Here's where the tool differentiates from Terraform—the plan supports multiple output formats: The human-readable format is perfect for reviewing changes during development ``` Plan: 1 to alter. Summary by type: tables: 1 to alter Tables: ~ users Transaction: true DDL to be executed: -------------------------------------------------- ALTER TABLE users ADD COLUMN name varchar(100); ``` The JSON format enables easy integration with CI/CD pipelines and approval workflows ```json theme={null} { "version": "1.0.0", "pgschema_version": "1.0.0", "created_at": "2025-08-13T10:30:15+08:00", "source_fingerprint": { "hash": "7a8b9c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b" }, "diffs": [ { "sql": "ALTER TABLE users ADD COLUMN name varchar(100);", "type": "table", "operation": "alter", "path": "public.users", "source": { "schema": "public", "name": "users", "type": "BASE_TABLE", "columns": [ { "name": "id", "position": 1, "data_type": "integer", "is_nullable": false }, { "name": "email", "position": 2, "data_type": "character varying", "character_maximum_length": 255, "is_nullable": false }, { "name": "created_at", "position": 3, "data_type": "timestamp without time zone", "is_nullable": true, 
"column_default": "now()" }, { "name": "name", "position": 4, "data_type": "character varying", "character_maximum_length": 100, "is_nullable": true } ], "can_run_in_transaction": true } } ] } ``` The SQL format shows the exact SQL to be executed, enabling custom tooling to inspect and analyze the changes ```sql theme={null} ALTER TABLE users ADD COLUMN name varchar(100); ``` ### 4. Apply: Execute Changes ```bash theme={null} pgschema apply \ --plan plan.json \ --host localhost \ --user postgres \ --db myapp ``` This executes the planned changes safely, with built-in protections against concurrent schema modifications and proper transaction handling. ## No Migration Table Needed Traditional imperative migration tools maintain a migration history table (often called `migrations`, `schema_migrations`, or similar) to track which migration scripts have been executed. This approach has several limitations: * **History Tracking Overhead**: Requires maintaining and syncing a special table across all environments * **Script Ordering Dependencies**: Migration files must be carefully numbered and never modified once applied * **Drift Detection Gaps**: Can't detect manual schema changes made outside the migration tool * **State Reconstruction Complexity**: Understanding current schema requires replaying all historical migrations ### State-Based Approach `pgschema` eliminates these complexities through a **state-based** approach. Instead of tracking which changes were applied, it determines what needs to change by: 1. **Reading Desired State**: Parse your schema files to understand what you want 2. **Inspecting Current State**: Query the database to understand what currently exists 3. 
**Computing Differences**: Compare the two states to determine necessary changes This means: * **No special tables required** - your database schema speaks for itself * **Manual changes detected** - any drift from desired state is automatically identified * **Schema files are authoritative** - they represent the complete truth, not just incremental changes * **Environment agnostic** - the same schema files work regardless of how the database reached its current state ### How It Handles Real-World Scenarios **Scenario 1: Emergency Hotfix** ```bash theme={null} # DBA makes emergency manual change in production psql -c "ALTER TABLE users ADD COLUMN emergency_flag boolean DEFAULT false;" # Later, developer updates schema file and plans migration # Tool detects the column already exists and adjusts the plan accordingly pgschema plan --file schema.sql --output-json plan.json # Plan shows: No changes needed (current state matches desired state) ``` **Scenario 2: Environment Sync** ```bash theme={null} # Development database has experimental columns # Production database is several versions behind # The same schema file works for both: # For dev: generates migration to remove experimental columns pgschema plan --host dev.db --file schema.sql # For prod: generates migration to add missing features pgschema plan --host prod.db --file schema.sql ``` This state-based approach provides the flexibility and reliability that modern database operations demand, without the overhead of migration history tracking. ## Concurrent Change Detection The separation between `plan` and `apply` phases creates a critical safety feature but also introduces a potential risk: **what if the database changes between planning and execution?** This time window could lead to applying outdated or conflicting migrations. The tool solves this with **fingerprinting** - a cryptographic mechanism that ensures the exact database state you planned against is the same state you're applying changes to. 
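Conceptually, the mechanism boils down to comparing two hashes of the schema state. Here is a minimal shell sketch of the principle — the `fingerprint` helper is hypothetical, and pgschema's actual fingerprint is computed internally from its normalized schema representation, not by hashing raw dump text:

```shell
# Conceptual sketch only -- pgschema computes its fingerprint internally;
# this hypothetical helper just illustrates the drift-detection idea.
fingerprint() { sha256sum | awk '{print $1}'; }

planned=$(printf 'CREATE TABLE users (id int);\n' | fingerprint)
# ...an emergency change lands between plan and apply...
current=$(printf 'CREATE TABLE users (id int, emergency_flag boolean);\n' | fingerprint)

if [ "$planned" = "$current" ]; then
  echo "fingerprints match - safe to apply"
else
  echo "fingerprint mismatch - regenerate the plan"
fi
```

Because any schema change produces a different hash, comparing the hash stored in the plan against a freshly computed one is enough to detect concurrent modifications.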
### How Fingerprinting Works **During Plan Generation**: When you run `pgschema plan`, it calculates a cryptographic fingerprint of the current database schema state and embeds it in the plan file: ```json theme={null} { "version": "1.0.0", "pgschema_version": "1.0.0", "created_at": "2025-08-12T17:44:43+08:00", "source_fingerprint": { "hash": "965b1131737c955e24c7f827c55bd78e4cb49a75adfd04229e0ba297376f5085" }, "diffs": [...] } ``` **During Apply Execution**: Before executing any changes, `pgschema apply` recalculates the current database fingerprint and compares it with the stored fingerprint from the plan. ### Safety in Action This fingerprinting prevents dangerous scenarios: ```bash theme={null} # Team member A generates plan pgschema plan \ --host prod.db.com \ --db myapp \ --user postgres \ --file schema.sql \ --output-json plan.json # Time passes... Team member B makes emergency schema change # Team member A tries to apply original plan pgschema apply \ --host prod.db.com \ --db myapp \ --user postgres \ --plan plan.json # ❌ Error: schema fingerprint mismatch detected - the database schema has changed since the plan was generated. # # schema fingerprint mismatch - expected: 965b1131737c955e, actual: abc123456789abcd ``` ### Recover Fingerprint Mismatch When a fingerprint mismatch occurs, the right approach is to reestablish the baseline with the current database state: ```bash theme={null} # 1. Get the latest schema from the current database pgschema dump \ --host prod.db.com \ --db myapp \ --user postgres \ > current-schema.sql # 2. Update your desired schema file with any new changes # (merge your intended changes with the current state) # 3. Plan the changes against the updated baseline pgschema plan \ --host prod.db.com \ --db myapp \ --user postgres \ --file updated-schema.sql \ --output-json new-plan.json # 4. 
Apply with the fresh plan pgschema apply \ --host prod.db.com \ --db myapp \ --user postgres \ --plan new-plan.json ``` This fingerprinting mechanism maintains the integrity of the Plan-Review-Apply workflow, ensuring that concurrent modifications don't lead to unexpected or conflicting database changes. ## Online DDL Operations The tool automatically uses PostgreSQL's non-blocking features to minimize downtime during schema changes, including concurrent index creation, NOT VALID constraint patterns, and safe NOT NULL additions. **Example - Concurrent Index Creation**: Index operations automatically use `CREATE INDEX CONCURRENTLY` to avoid blocking table writes: ```sql theme={null} -- Generated migration for index addition CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email_status ON users (email, status DESC); -- pgschema:wait - Built-in progress monitoring SELECT COALESCE(i.indisvalid, false) as done, CASE WHEN p.blocks_total > 0 THEN p.blocks_done * 100 / p.blocks_total ELSE 0 END as progress FROM pg_class c LEFT JOIN pg_index i ON c.oid = i.indexrelid LEFT JOIN pg_stat_progress_create_index p ON c.oid = p.index_relid WHERE c.relname = 'idx_users_email_status'; ``` The `pgschema:wait` directive blocks migration execution, polls the database to monitor progress, and automatically continues when operations complete. For comprehensive details, see the [Online DDL documentation](/workflow/online-ddl). ## Adaptive Transaction The system automatically determines transaction boundaries based on the SQL operations in your migration plan. Each diff in the plan contains a `can_run_in_transaction` field that indicates whether that particular SQL can run inside a transaction. ```json plan.json snippet theme={null} "diffs": [ ... 
{ "sql": "ALTER TABLE department\nADD CONSTRAINT department_dept_name_key UNIQUE (dept_name);", "type": "table.constraint", "operation": "create", "path": "public.department.department_dept_name_key", "source": { "schema": "public", "table": "department", "name": "department_dept_name_key", "type": "UNIQUE", "columns": [ { "name": "dept_name", "position": 1 } ] }, "can_run_in_transaction": true <--------- transaction indication }, ... ] ``` ### Smart Transaction Wrapping * **If all diffs can run in a transaction**: The entire plan is wrapped in a single transaction for maximum atomicity * **If any diff cannot run in a transaction**: Each diff runs in its own transaction to maintain isolation Among all the DDL operations pgschema supports at the schema level, only `CREATE INDEX CONCURRENTLY` cannot run inside a transaction. ## Modular Schema Organization For large applications and teams, managing your database schema as a single monolithic file becomes unwieldy. The [modular](/workflow/modular-schema-files) approach transforms your schema into an organized structure that enables better collaboration, clearer ownership, and easier maintenance. ### Breaking Down the Monolith Instead of a single large schema file, the tool can organize your database objects into logical, manageable files with the [`--multi-file`](/cli/dump#param-multi-file) option: ```bash theme={null} # Initialize modular structure from existing database mkdir -p schema pgschema dump \ --host localhost \ --db myapp \ --user postgres \ --multi-file \ --file schema/main.sql # Examine the generated structure tree schema/ ``` This creates an organized directory structure. The `main.sql` file serves as the entry point, containing Postgres `\i` (include) directives that reference individual component files.
``` schema/ ├── main.sql # Entry point with include directives ├── types/ │ ├── user_status.sql │ ├── order_status.sql │ └── address.sql ├── domains/ │ ├── email_address.sql │ └── positive_integer.sql ├── sequences/ │ ├── global_id_seq.sql │ └── order_number_seq.sql ├── tables/ │ ├── users.sql │ └── orders.sql ├── functions/ │ ├── update_timestamp.sql │ ├── get_user_count.sql │ └── get_order_count.sql ├── procedures/ │ ├── cleanup_orders.sql │ └── update_status.sql └── views/ ├── user_summary.sql └── order_details.sql ``` ```sql theme={null} -- Include custom types first (dependencies for tables) \i types/user_status.sql \i types/order_status.sql \i types/address.sql -- Include domain types (constrained base types) \i domains/email_address.sql \i domains/positive_integer.sql -- Include sequences (may be used by tables) \i sequences/global_id_seq.sql \i sequences/order_number_seq.sql -- Include core tables (with their constraints, indexes, and policies) \i tables/users.sql \i tables/orders.sql -- Include functions and procedures \i functions/update_timestamp.sql \i functions/get_user_count.sql \i procedures/cleanup_orders.sql -- Include views (depend on tables and functions) \i views/user_summary.sql \i views/order_details.sql ``` ```sql theme={null} -- -- Name: update_timestamp(); Type: FUNCTION; Schema: -; Owner: - -- CREATE OR REPLACE FUNCTION update_timestamp() RETURNS trigger LANGUAGE plpgsql SECURITY INVOKER VOLATILE AS $$ BEGIN NEW.updated_at = CURRENT_TIMESTAMP; RETURN NEW; END; $$; ``` ### Team Collaboration Benefits This modular approach enables powerful team collaboration patterns: **Granular Ownership**: Use GitHub's CODEOWNERS to assign different teams to different parts of your schema: ``` # .github/CODEOWNERS schema/tables/users* @user-team schema/tables/orders* @orders-team schema/tables/products* @inventory-team schema/views/user_* @user-team schema/views/order_* @orders-team schema/functions/*inventory* @inventory-team ``` **Independent 
Development**: Teams can work on separate schema components without conflicts, test their changes in isolation, then combine for coordinated production deployments. **Focused Code Reviews**: Instead of reviewing massive schema changes, reviewers can focus on specific files relevant to their domain expertise. ### Same Workflow, Better Organization The declarative workflow remains identical - you're still defining desired state and letting the tool generate the migration plan: ```bash theme={null} # Teams edit their respective schema files to define desired state # Example: Add new column to users table vim schema/tables/users.sql # Generate migration plan from modular schema pgschema plan \ --host localhost \ --db myapp \ --user postgres \ --file schema/main.sql \ --output-human \ --output-json plan.json \ --output-sql plan.sql # Apply the coordinated changes pgschema apply \ --host localhost \ --db myapp \ --user postgres \ --plan plan.json ``` The multi-file approach scales from small teams working on focused components to large organizations managing complex schemas with hundreds of objects, while maintaining the same simple, declarative workflow. ## Dependency Management As schemas grow in complexity, manually ordering DDL statements becomes increasingly error-prone. A single schema file with hundreds of objects becomes unmanageable—you're constantly scrolling and reordering statements to satisfy dependencies. The modular schema approach introduces even greater challenges. With schema files spread across directories, tracking dependencies between objects becomes nearly impossible. A trigger in `triggers/audit.sql` might depend on a function in `functions/timestamps.sql`, which itself references types defined in `types/custom.sql`. Database objects often depend on each other—triggers need functions, views reference tables, foreign keys require referenced tables to exist first. 
The tool automatically handles these dependencies using topological sorting to ensure operations execute in the correct order, regardless of how your schema files are organized. ### Automatic Dependency Resolution Consider this modular schema example where you're adding a table with a trigger that depends on a function ```sql theme={null} -- tables/users.sql CREATE TABLE public.users ( id serial PRIMARY KEY, name text NOT NULL, email text UNIQUE, created_at timestamp DEFAULT CURRENT_TIMESTAMP, updated_at timestamp DEFAULT CURRENT_TIMESTAMP ); -- triggers/update_users_modified_time.sql CREATE TRIGGER update_users_modified_time BEFORE UPDATE ON public.users FOR EACH ROW EXECUTE FUNCTION public.update_modified_time(); -- functions/update_modified_time.sql CREATE OR REPLACE FUNCTION public.update_modified_time() RETURNS trigger AS $$ BEGIN NEW.updated_at = CURRENT_TIMESTAMP; RETURN NEW; END; $$ LANGUAGE plpgsql; ``` ```sql theme={null} -- pgschema automatically orders operations correctly: CREATE TABLE IF NOT EXISTS users ( id SERIAL PRIMARY KEY, name text NOT NULL, email text UNIQUE, created_at timestamp DEFAULT CURRENT_TIMESTAMP, updated_at timestamp DEFAULT CURRENT_TIMESTAMP ); CREATE OR REPLACE FUNCTION update_modified_time() RETURNS trigger LANGUAGE plpgsql SECURITY INVOKER VOLATILE AS $$ BEGIN NEW.updated_at = CURRENT_TIMESTAMP; RETURN NEW; END; $$; CREATE OR REPLACE TRIGGER update_users_modified_time BEFORE UPDATE ON users FOR EACH ROW EXECUTE FUNCTION update_modified_time(); ``` The topological sort algorithm ensures that: * Tables are created before their indexes, constraints, and triggers * Functions are created before triggers that use them * Referenced tables exist before foreign key constraints * Views are created after all their dependent tables and functions This automatic dependency management eliminates manual ordering errors and ensures migrations always execute successfully. 
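The same ordering principle is implemented by the classic coreutils `tsort` utility, which makes for a quick way to see topological sorting in action. A rough illustration, feeding it "prerequisite dependent" pairs for the objects above — the object names are illustrative, and pgschema performs its topological sort internally, not via `tsort`:

```shell
# Each input line is "prerequisite dependent"; tsort emits a valid creation order
# in which every object appears after everything it depends on.
printf '%s\n' \
  'users update_users_modified_time' \
  'update_modified_time update_users_modified_time' \
  'users users_email_key' | tsort
# The table and function always precede the trigger that depends on them.
```

Running this, `users` and `update_modified_time` are always printed before `update_users_modified_time`, exactly the guarantee the migration plan needs.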
## No Shadow Database Required Unlike many declarative schema migration tools, this approach doesn't require a separate "shadow" or "dev" database to compute schema differences. Other tools typically create a temporary database, apply your desired schema to it, then compare this shadow database with your target database to generate migrations. This shadow database approach has several drawbacks: * **Counterintuitive Configuration**: Most developers stumble on the additional shadow database flags and need to read documentation to understand what they're used for * **Complex Setup**: Requires permissions to create temporary databases and to clean them up afterward * **Additional Infrastructure**: Demands extra database resources, maintenance, and can cause inconsistencies with different Postgres versions or extensions ### IR Normalization This tool eliminates these complexities by working directly with your schema files and target database through an **Intermediate Representation (IR)** system:

```plain theme={null}
  SQL Files          Database System Catalogs
  (Desired)                 (Current)
      |                         |
      | Parse                   | Query
      v                         v
      +----- IR (Normalized) ---+
                   |
                   v
              Diff Engine
                   |
                   v
            Migration Plan
```

**1. SQL Parsing**: Your desired state SQL files are parsed and converted into a normalized Intermediate Representation. **2. Database Introspection**: The target database schema is introspected through Postgres system catalogs (`pg_class`, `pg_attribute`, `pg_constraint`, etc.) and converted into the same normalized IR format. **3. Direct Comparison**: The diff engine compares these two normalized IR structures directly, identifying what changes are needed without requiring any temporary databases. **4. Migration Generation**: The differences are converted into properly ordered DDL statements that transform the current state into the desired state.
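As a toy illustration of why both sides must be normalized before diffing: two textually different but semantically identical definitions should compare equal. The `normalize` helper below is hypothetical and far cruder than pgschema's structured IR — it merely lowercases text and collapses whitespace:

```shell
# Hypothetical, drastically simplified "normalization" -- pgschema's real IR is a
# structured representation built by a SQL parser, not text munging.
normalize() { tr '[:upper:]' '[:lower:]' | tr -s '[:space:]' ' '; }

desired=$(printf 'CREATE TABLE users (\n  id serial PRIMARY KEY\n);')
current='create table users ( id serial primary key );'

if [ "$(printf '%s' "$desired" | normalize)" = "$(printf '%s' "$current" | normalize)" ]; then
  echo "states match - no migration needed"
else
  echo "states differ - generate DDL"
fi
```

Here the multi-line, uppercase file definition and the one-line introspected form normalize to the same string, so no spurious diff is produced.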
The IR normalization ensures that regardless of whether schema information comes from SQL parsing or database introspection, it's represented consistently for accurate comparison - enabling precise migrations without the overhead and complexity of shadow databases. ## Acknowledgements `pgschema` wouldn't exist without these amazing open source projects: * [sqlc](https://github.com/sqlc-dev/sqlc) - For generating code to extract schema from target databases * [pg\_query\_go](https://github.com/pganalyze/pg_query_go) - Go bindings for libpg\_query, enabling reliable Postgres SQL parsing. In fact, sqlc also uses pg\_query\_go under the hood! It's the secret weapon in the Postgres ecosystem that allows tools to build near-perfect Postgres compatibility. * [testcontainers](https://github.com/testcontainers/) - Enabling comprehensive integration testing with real Postgres instances **Design reference** came from: * [Terraform](https://github.com/hashicorp/terraform) - For figuring out the right Plan-Review-Apply workflow that developers love * [pg-schema-diff](https://github.com/stripe/pg-schema-diff) - Reading GitHub issues to understand use cases and studying the codebase. For example, I knew sqlc beforehand, but when I saw pg-schema-diff uses sqlc to fetch schema information, it immediately clicked. And the real inspiration came from hundreds of customer conversations over 4+ years building [Bytebase](https://www.bytebase.com). Developers kept telling us: "Why can't database migrations be as simple as Terraform?" Well, now they can be. Special thanks to Claude Code for sweating millions of tokens to make this project possible. Turns out AI is pretty good at writing database migration tools – who knew? 🤖 *** Ready to give it a try? 
👉👉👉[Installation](/installation) # Supabase Declarative Schema vs pgschema Source: https://www.pgschema.com/blog/supabase-declarative-schema-vs-pgschema Comparing two declarative PostgreSQL schema tools: workflow differences, database object support, online DDL, CI/CD integration, and file organization. Managing PostgreSQL schemas with imperative migrations - writing `ALTER TABLE` statements by hand, tracking which migrations ran where - doesn't scale. Declarative schema management flips this: you define the desired state, and the tool figures out how to get there. It's the Terraform approach for databases. Both Supabase and pgschema offer this declarative workflow for PostgreSQL. Recently, we've seen users [migrating from Supabase's declarative schemas to pgschema](https://github.com/pgplex/pgschema/issues/227). This article compares both tools to help you understand when each makes sense. ## Workflow Comparison ```mermaid theme={null} flowchart TB A[Edit schema files] --> B[supabase db diff] B --> C[Review migration file] C --> D[supabase db push] D --> E[Migration history updated] E --> F[Database updated] ``` ```mermaid theme={null} flowchart TB A[Edit schema files] --> B[pgschema plan] B --> C[Review plan] C --> D[pgschema apply] D --> E[Database updated] ``` ### Supabase Supabase maintains two directories and a database table: * `supabase/schemas/` - Your desired state (what you edit) * `supabase/migrations/` - Generated migration files (timestamped SQL) * `supabase_migrations.schema_migrations` - Table tracking which migrations have been applied When you run `supabase db diff`, it compares your schema files against the cumulative result of existing migrations and generates a new timestamped migration file. 
**Adding a column:** ```sql theme={null} -- supabase/schemas/users.sql CREATE TABLE users ( id SERIAL PRIMARY KEY, name TEXT NOT NULL, phone TEXT -- add this line ); ``` ```bash theme={null} # Generate migration from schema diff supabase db diff -f add_phone_column # Creates: supabase/migrations/20240115120000_add_phone_column.sql # Apply locally supabase migration up # Push to remote supabase db push ``` **Rolling back:** Supabase uses versioned migrations. To rollback, you reset to a previous migration timestamp: ```bash theme={null} supabase db reset --version 20240101000000 ``` This replays all migrations up to that version. The migration history table tracks what's been applied. ### pgschema pgschema has a single source of truth: * Your schema SQL file(s) - that's it * No migration history table * No migrations directory Each `plan` command does a fresh comparison: desired state (your SQL files) vs current state (live database). **Adding a column:** ```sql theme={null} -- schema.sql CREATE TABLE users ( id SERIAL PRIMARY KEY, name TEXT NOT NULL, phone TEXT -- add this line ); ``` ```bash theme={null} # See the migration plan pgschema plan --host localhost --db myapp --user postgres --file schema.sql # Save the plan for review and apply pgschema plan --host localhost --db myapp --user postgres --file schema.sql --output-json plan.json # Apply the reviewed plan pgschema apply --host localhost --db myapp --user postgres --plan plan.json ``` **Rolling back:** There's no migration history. To rollback, edit your schema file to the desired state and apply: ```sql theme={null} -- schema.sql - remove the phone column CREATE TABLE users ( id SERIAL PRIMARY KEY, name TEXT NOT NULL ); ``` ```bash theme={null} pgschema plan --host localhost --db myapp --user postgres --file schema.sql --output-json plan.json # Shows: ALTER TABLE users DROP COLUMN phone; pgschema apply --host localhost --db myapp --user postgres --plan plan.json ``` Rollback is just another state change. 
No timestamps, no migration history to manage.

### Concurrent Change Detection in CI/CD

Both tools decouple migration generation from execution. Both produce migration SQL. But pgschema's JSON plan format encodes more than just the SQL statements - it includes a [fingerprint](/workflow/plan-review-apply#detect-concurrent-changes-with-fingerprinting) of the database schema at plan time:

```json theme={null}
{
  "source_fingerprint": {
    "hash": "965b1131737c955e24c7f827c55bd78e4cb49a75adfd04229e0ba297376f5085"
  },
  "statements": [...]
}
```

This matters in CI/CD pipelines. A typical workflow:

1. PR is opened, CI runs `pgschema plan` and saves `plan.json` as an artifact
2. Team reviews the plan in the PR
3. PR is merged, CD runs `pgschema apply --plan plan.json`

But what if another PR merged and applied schema changes between steps 1 and 3? With Supabase, you might apply a stale migration that conflicts with changes made by another PR. With pgschema, the apply fails:

```bash theme={null}
pgschema apply --host prod.db.com --db myapp --user postgres --plan plan.json
# ❌ Error: schema fingerprint mismatch detected - the database schema has changed since the plan was generated.
```

The CI pipeline catches the conflict. You regenerate the plan, review again, and apply. No silent schema drift.

## Database Object Support

Supabase uses [migra](https://github.com/djrobstep/migra) for schema diffing, which has [documented limitations](https://supabase.com/docs/guides/local-development/declarative-database-schemas#known-caveats). pgschema also has [limitations](/syntax/unsupported), but they're fundamentally different.

**Supabase/migra cannot track these schema-level objects:**

* Row-level security (RLS) policies
* Materialized views
* Comments
* Partitions
* Column privileges
* View ownership and grants
* Domain statements

For these, Supabase recommends falling back to versioned migrations - defeating the purpose of declarative schemas.
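The fingerprint mechanism above generalizes: any pipeline can approximate it by hashing a canonical schema dump at plan time and re-hashing just before apply. A rough shell sketch (file names and the hashing choice are illustrative; pgschema computes its fingerprint internally):

```shell
# Capture a fingerprint of the schema at plan time, then verify it
# hasn't drifted before applying. A sha256 over a canonical dump is a
# crude stand-in for pgschema's internal fingerprint.
printf 'CREATE TABLE users (id int);\n' > schema_at_plan.sql
printf 'CREATE TABLE users (id int);\nCREATE TABLE posts (id int);\n' > schema_at_apply.sql

plan_fp=$(sha256sum schema_at_plan.sql | cut -d' ' -f1)
apply_fp=$(sha256sum schema_at_apply.sql | cut -d' ' -f1)

if [ "$plan_fp" != "$apply_fp" ]; then
  echo "fingerprint mismatch: schema changed since plan time"
fi
```

The same shape works in any CI system: store the hash alongside the plan artifact, and refuse to apply when the re-computed hash differs.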
**pgschema supports all of the above.** Its limitations are at the cluster and database levels - objects typically provisioned once during setup rather than changed frequently:

* Cluster level: `CREATE DATABASE`, `CREATE ROLE`, `CREATE TABLESPACE`
* Database level: `CREATE EXTENSION`, `CREATE SCHEMA`, `CREATE PUBLICATION`
* `RENAME` operations (use DROP + CREATE instead)

These objects are typically managed separately through infrastructure tools, cloud consoles, or initial setup scripts.

| Object Type                  | Supabase/migra | pgschema |
| ---------------------------- | -------------- | -------- |
| Tables, views, functions     | ✅             | ✅       |
| RLS policies                 | ❌             | ✅       |
| Materialized views           | ❌             | ✅       |
| Triggers                     | ❌             | ✅       |
| Procedures                   | ❌             | ✅       |
| Partitions                   | ❌             | ✅       |
| Comments                     | ❌             | ✅       |
| Column privileges            | ❌             | ✅       |
| Domain types                 | ❌             | ✅       |
| CREATE INDEX CONCURRENTLY    | ❌             | ✅       |
| ADD CONSTRAINT ... NOT VALID | ❌             | ✅       |

The last two rows matter for production. pgschema automatically rewrites your declarative schema changes into [online DDL](/workflow/online-ddl) - you write a normal `CREATE INDEX`, pgschema generates `CREATE INDEX CONCURRENTLY`. You add a foreign key constraint, pgschema generates `ADD CONSTRAINT ... NOT VALID` followed by `VALIDATE CONSTRAINT`. No manual intervention needed.

## Organizing Schema Files

Both tools support splitting schemas into multiple files, but with different approaches.

### Multi-file Structure

**Supabase** processes files in `supabase/schemas/` in lexicographic order. For custom ordering, you configure `config.toml`:

```toml theme={null}
[db.migrations]
schema_paths = [
  "./schemas/employees.sql",
  "./schemas/*.sql",
]
```

**pgschema** uses PostgreSQL's native `\i` (include) directive.
You can dump an existing database into a [modular structure](/workflow/modular-schema-files):

```bash theme={null}
pgschema dump --host localhost --db myapp --user postgres \
  --multi-file --file schema/main.sql
```

This generates:

```
schema/
├── main.sql        # Entry point with \i directives
├── tables/
│   ├── users.sql
│   └── orders.sql
├── views/
├── functions/
└── indexes/
```

You can include individual files or entire folders:

```sql theme={null}
\i tables/users.sql
\i views/
\i functions/
```

The key difference: pgschema uses PostgreSQL's native `\i` directive, so your schema files work directly with `psql` - no tool-specific configuration needed.

### Ignoring Objects

**Supabase** has a `--schema` flag to include/exclude entire schemas. By default, the `auth` and `storage` schemas are excluded from diffs since they're managed by Supabase. You can diff specific schemas with:

```bash theme={null}
supabase db diff --schema public,custom_schema
```

However, there's no way to ignore individual tables or objects within a schema.
**pgschema** supports a [`.pgschemaignore`](/cli/ignore) file with pattern matching:

```toml theme={null}
[tables]
patterns = ["temp_*", "test_*", "!test_core_*"]

[views]
patterns = ["debug_*", "analytics_*"]

[functions]
patterns = ["fn_test_*"]
```

This is useful for:

* **Gradual adoption** - onboard tables incrementally without managing everything at once
* **Temporary objects** - exclude debug views, temp tables, development-only objects
* **Legacy objects** - ignore deprecated tables while managing new ones

## Summary

| Aspect                      | Supabase                                        | pgschema                                                  |
| --------------------------- | ----------------------------------------------- | --------------------------------------------------------- |
| **License**                 | Apache 2.0                                      | Apache 2.0                                                |
| **Workflow**                | Schema files → migrations → history table       | Schema files → plan → apply (git history, fingerprinting) |
| **Database object support** | Limited by migra (no RLS, triggers, partitions) | Comprehensive schema-level support                        |
| **Online DDL**              | Manual                                          | Auto-generates CONCURRENTLY, NOT VALID                    |
| **Multi-file schemas**      | config.toml with glob patterns                  | PostgreSQL-native `\i` directive                          |
| **Ignore patterns**         | Schema-level only                               | Granular `.pgschemaignore` with wildcards                 |
| **Database compatibility**  | Supabase-hosted PostgreSQL                      | Any PostgreSQL database                                   |

## Conclusion

Both tools embrace declarative schema management - define your desired state, let the tool generate the migration. This approach reduces manual SQL writing and eliminates migration ordering headaches.

**Choose Supabase** if you're fully invested in the Supabase ecosystem (Auth, Storage, Edge Functions) and your schema uses basic objects. The integrated tooling provides a cohesive experience.

**Choose pgschema** if you need broader PostgreSQL support (RLS, triggers, partitions), online DDL for production safety, or database portability across AWS RDS, Cloud SQL, self-hosted, or Supabase.

Both tools are open source.
Try them on a test database and see which workflow fits your team.

If you're considering migrating from Supabase declarative schemas to pgschema, check out the experiences and gaps reported in the [community discussion](https://github.com/pgplex/pgschema/issues/227).

# Apply

Source: https://www.pgschema.com/cli/apply

The `apply` command applies schema changes to a target database schema. You can either provide a desired state file to generate and apply a plan, or execute a pre-generated plan file directly.

## Overview

The apply command supports two execution modes:

### File Mode (Generate and Apply)

1. Read the desired state from a SQL file (with include directive support)
2. Apply the desired state SQL to a temporary PostgreSQL instance (embedded by default, or external via `--plan-*` flags)
3. Compare it with the current database state of the target schema
4. Generate a migration plan with proper dependency ordering
5. Display the plan for review
6. Apply the changes (with optional confirmation and safety checks)

By default, File Mode uses an embedded PostgreSQL instance to validate the desired state. For schemas using PostgreSQL extensions or cross-schema references, you can use an external database instead via the `--plan-*` flags. See [External Plan Database](/cli/plan-db) for details.

### Plan Mode (Execute Pre-generated Plan)

1. Load a pre-generated plan from a JSON file
2. Validate plan version compatibility and schema fingerprints
3. Display the plan for review
4. Apply the changes (with optional confirmation and safety checks)

## Basic Usage

```bash theme={null}
# File mode: Generate and apply plan from desired state file
pgschema apply --host localhost --db myapp --user postgres --password mypassword --file schema.sql

# Plan mode: Execute pre-generated plan
pgschema apply --host localhost --db myapp --user postgres --password mypassword --plan plan.json

# Auto-approve mode for CI/CD (no confirmation prompt)
pgschema apply --host localhost --db myapp --user postgres --password mypassword --file schema.sql --auto-approve
```

## Connection Options

* `--host` - Database server host (env: PGHOST)
* `--port` - Database server port (env: PGPORT)
* `--db` - Database name (required) (env: PGDATABASE)
* `--user` - Database user name (required) (env: PGUSER)
* `--password` - Database password (optional, can also use PGPASSWORD env var or .pgpass file)

You can provide the password in multiple ways (in order of precedence):

```bash .env File (Recommended) theme={null}
# Create .env file with:
# PGHOST=localhost
# PGPORT=5432
# PGDATABASE=myapp
# PGUSER=postgres
# PGPASSWORD=mypassword
pgschema apply --file schema.sql
```

```bash Environment Variable theme={null}
PGPASSWORD=mypassword pgschema apply \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql
```

```bash Command Line Flag theme={null}
pgschema apply \
  --host localhost \
  --db myapp \
  --user postgres \
  --password mypassword \
  --file schema.sql
```

```bash .pgpass File theme={null}
# Create ~/.pgpass file with:
# hostname:port:database:username:password
# localhost:5432:myapp:postgres:mypassword

# Make sure file has correct permissions
chmod 600 ~/.pgpass

pgschema apply \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql
```

**Password Resolution Order:**

1. Command line `--password` flag (highest priority)
2. `PGPASSWORD` environment variable
3. `.pgpass` file in user's home directory
4. PostgreSQL will prompt for password if none found

See [dotenv (.env)](/cli/dotenv) for detailed configuration options.
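The resolution order above is a simple fallback chain. A shell sketch of the same logic (illustrative only - this is not pgschema's actual code, and the `.pgpass` step is reduced to a file-existence check):

```shell
# Sketch of the password resolution order: flag > PGPASSWORD > .pgpass > prompt.
resolve_password() {
  flag_pw="$1"                   # value of --password, if given
  if [ -n "$flag_pw" ]; then echo "flag:$flag_pw"; return; fi
  if [ -n "$PGPASSWORD" ]; then echo "env:$PGPASSWORD"; return; fi
  if [ -f "$HOME/.pgpass" ]; then echo "pgpass"; return; fi
  echo "prompt"                  # libpq falls back to prompting
}

PGPASSWORD=secret
resolve_password mypw   # -> flag:mypw  (flag wins over the env var)
resolve_password ""     # -> env:secret (no flag, env var wins)
```

The practical takeaway: a `--password` flag always overrides `PGPASSWORD`, which in turn overrides `.pgpass`.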
* SSL mode for database connection (env: PGSSLMODE). Valid values: `disable`, `allow`, `prefer`, `require`, `verify-ca`, `verify-full`. For `verify-ca` and `verify-full` modes, you can configure certificate paths using standard PostgreSQL environment variables (`PGSSLROOTCERT`, `PGSSLCERT`, `PGSSLKEY`).
* `--schema` - Schema name to apply changes to

## Plan Database Options

When using File Mode (`--file`), the apply command generates a plan internally using a temporary PostgreSQL instance. By default, this uses embedded PostgreSQL. For schemas that require PostgreSQL extensions or have cross-schema references, you can provide an external database. See [External Plan Database](/cli/plan-db) for complete documentation.

**Note**: These options only apply when using `--file` mode. When using `--plan` mode, the plan has already been generated.

## Apply Options

* `--file` - Path to desired state SQL schema file (mutually exclusive with `--plan`). Used in File Mode to generate and apply a plan from the desired state.
* `--plan` - Path to pre-generated plan JSON file (mutually exclusive with `--file`). Used in Plan Mode to execute a plan that was previously generated with `pgschema plan --output-json`.
* `--auto-approve` - Apply changes without prompting for approval. Useful for automated deployments and CI/CD pipelines.
* `--no-color` - Disable colored output in the plan display. Useful for scripts, CI/CD environments, or terminals that don't support colors.
* `--lock-timeout` - Maximum time to wait for database locks (e.g., '30s', '5m', '1h'). If not specified, uses PostgreSQL's default behavior (wait indefinitely). See [PostgreSQL lock\_timeout documentation](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LOCK-TIMEOUT).
* `--application-name` - Application name for database connection (visible in pg\_stat\_activity) (env: PGAPPNAME). See [PostgreSQL application\_name documentation](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-APPLICATION-NAME).
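The `--lock-timeout` flag corresponds to PostgreSQL's `lock_timeout` setting linked above. A sketch of the equivalent session-level behavior in plain SQL (illustrative; pgschema manages this for you when the flag is set):

```sql theme={null}
-- What --lock-timeout "30s" amounts to at the session level:
SET lock_timeout = '30s';

-- DDL issued now fails fast instead of queuing indefinitely behind
-- long-held locks:
ALTER TABLE users ADD COLUMN email text;
-- ERROR: canceling statement due to lock timeout
-- (raised if the lock isn't acquired within 30 seconds)
```

Failing fast like this is usually preferable in production: a queued `ALTER TABLE` blocks every later query on that table while it waits.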
## Ignoring Objects

You can exclude specific database objects from schema application using a `.pgschemaignore` file. See [Ignore (.pgschemaignore)](/cli/ignore) for complete documentation.

## Examples

### File Mode (Generate and Apply)

```bash theme={null}
# Interactive mode - shows plan and prompts for confirmation
pgschema apply --host localhost --db myapp --user postgres --file desired_schema.sql
```

This will:

1. Generate a migration plan by comparing the desired state with the current database
2. Display the plan with colored output
3. Prompt: "Do you want to apply these changes? (yes/no):"
4. Wait for confirmation before proceeding
5. Apply changes using transactions where possible

Example output:

```
Plan: 2 to add, 1 to modify.

Summary by type:
  tables: 1 to add, 1 to modify
  indexes: 1 to add

Tables:
  + posts
  ~ users
    + email (column)

Indexes:
  + idx_users_email

Transaction: true

DDL to be executed:
--------------------------------------------------
CREATE TABLE IF NOT EXISTS posts (
    id SERIAL PRIMARY KEY,
    title TEXT NOT NULL,
    user_id INTEGER REFERENCES users(id)
);

ALTER TABLE users ADD COLUMN email VARCHAR(255) UNIQUE;

CREATE INDEX idx_users_email ON users(email);

Do you want to apply these changes? (yes/no):
```

### Plan Mode (Execute Pre-generated Plan)

```bash theme={null}
# First, generate a plan
pgschema plan --host localhost --db myapp --user postgres --file schema.sql --output-json plan.json

# Then apply the pre-generated plan
pgschema apply --host localhost --db myapp --user postgres --plan plan.json
```

Benefits of plan mode:

* **Separation of concerns**: Generate plans in one environment, apply in another
* **Review process**: Plans can be reviewed before deployment
* **Repeatability**: Same plan can be applied to multiple environments
* **Version control**: Plans can be stored and versioned
* **Rollback preparation**: Generate rollback plans before applying changes

### Auto-approve for CI/CD

```bash theme={null}
pgschema apply \
  --host prod-db \
  --db myapp \
  --user deployer \
  --file schema.sql \
  --auto-approve \
  --no-color
```

### With Lock Timeout

```bash theme={null}
# Prevent blocking on long-running locks
pgschema apply \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql \
  --lock-timeout "30s"
```

### Custom Application Name

```bash theme={null}
pgschema apply \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql \
  --application-name "pgschema-deployment-v1.1.0"
```

```sql theme={null}
-- Monitor active pgschema connections:
SELECT application_name, state, query_start, query
FROM pg_stat_activity
WHERE application_name LIKE 'pgschema%';
```

## Safety Features

### Schema Fingerprint Validation

When using plan mode, pgschema validates that the database schema hasn't changed since the plan was generated:

```bash theme={null}
# Generate plan
pgschema plan --host localhost --db myapp --user postgres --file schema.sql --output-json plan.json

# Database schema changes here...

# Apply plan - will detect changes and fail
pgschema apply --host localhost --db myapp --user postgres --plan plan.json
```

If schema changes are detected:

```
Error: schema fingerprint mismatch detected - the database schema has changed since the plan was generated.

Expected fingerprint: abc123...
Current fingerprint:  def456...

Difference: Table 'users' was modified

To resolve this issue:
1. Regenerate the plan with current database state: pgschema plan ...
2. Review the new plan to ensure it's still correct
3. Apply the new plan: pgschema apply ...
```

### Version Compatibility

Plans include version information to ensure compatibility:

* **pgschema version**: Must match the version used to generate the plan
* **Plan format version**: Must be supported by the current pgschema version

### Transaction Handling

pgschema automatically determines whether changes can run in a transaction:

* **Transactional mode** (default): All changes run in a single transaction with automatic rollback on failure
* **Non-transactional mode**: Some operations (like `CREATE INDEX CONCURRENTLY`) run outside transactions

```
Transaction: true   # Changes will run in a transaction
Transaction: false  # Some changes cannot run in a transaction
```

### No-op Detection

If no changes are needed, pgschema skips execution:

```
No changes to apply. Database schema is already up to date.
```

# dotenv (.env)

Source: https://www.pgschema.com/cli/dotenv

`pgschema` supports loading configuration from environment variables and dotenv `.env` files.

## Overview

pgschema automatically loads configuration from:

1. `.env` file in the current directory (if present)
2. System environment variables
3. Command-line flags (highest priority)

The precedence order is: **CLI flags > environment variables > defaults**

## Supported Environment Variables

pgschema supports all standard PostgreSQL environment variables:

* `PGHOST` - Database server host
* `PGPORT` - Database server port
* `PGDATABASE` - Database name (required for all commands)
* `PGUSER` - Database user name (required for all commands)
* `PGPASSWORD` - Database password
* `PGSSLMODE` - SSL mode for database connection. Valid values: `disable`, `allow`, `prefer`, `require`, `verify-ca`, `verify-full`
* `PGAPPNAME` - Application name visible in `pg_stat_activity`

## .env Setup

Create a `.env` file in your project directory:

```bash theme={null}
# Database connection settings
PGHOST=localhost
PGPORT=5432
PGDATABASE=myapp
PGUSER=postgres
PGPASSWORD=secretpassword

# Optional: SSL mode (disable, allow, prefer, require, verify-ca, verify-full)
PGSSLMODE=prefer

# Optional: Custom application name
PGAPPNAME=pgschema
```

**Security Note:** Add `.env` to your `.gitignore` file to prevent committing sensitive credentials:

```gitignore theme={null}
# Environment files
.env
.env.local
.env.*.local
```

# Dump

Source: https://www.pgschema.com/cli/dump

The `dump` command extracts a PostgreSQL database schema for a specific schema and outputs it in a developer-friendly format. The dumped schema serves as a baseline that developers can modify and apply to target databases using the `plan` and `apply` commands.

## Overview

The dump command provides comprehensive schema extraction with:

1. Single schema targeting (defaults to 'public')
2. Dependency-aware object ordering
3. Cross-schema reference handling with smart qualification
4. Developer-friendly SQL output format
5. Single-file and multi-file organization options

## Basic Usage

```bash theme={null}
# Dump default schema (public)
pgschema dump --host localhost --db myapp --user postgres --password mypassword

# Dump specific schema
pgschema dump --host localhost --db myapp --user postgres --password mypassword --schema myschema

# Save to file
pgschema dump --host localhost --db myapp --user postgres --password mypassword > schema.sql

# Multi-file organized output
pgschema dump --host localhost --db myapp --user postgres --password mypassword --multi-file --file schema.sql
```

## Integration with Plan/Apply

```bash theme={null}
# 1. Dump current production schema
pgschema dump --host prod-host --db myapp --user postgres --schema public > current.sql

# 2. Make modifications to current.sql (now desired state)

# 3. Plan changes against staging
pgschema plan --host staging-host --db myapp --user postgres --file current.sql

# 4. Apply changes if plan looks good
pgschema apply --host staging-host --db myapp --user postgres --file current.sql
```

## Connection Options

* `--host` - Database server host (env: PGHOST)
* `--port` - Database server port (env: PGPORT)
* `--db` - Database name (required) (env: PGDATABASE)
* `--user` - Database user name (required) (env: PGUSER)
* `--password` - Database password (optional, can also use PGPASSWORD env var or .pgpass file)

You can provide the password in multiple ways (in order of precedence):

```bash .env File (Recommended) theme={null}
# Create .env file with:
# PGHOST=localhost
# PGPORT=5432
# PGDATABASE=myapp
# PGUSER=postgres
# PGPASSWORD=mypassword
pgschema dump
```

```bash Environment Variable theme={null}
PGPASSWORD=mypassword pgschema dump \
  --host localhost \
  --db myapp \
  --user postgres
```

```bash Command Line Flag theme={null}
pgschema dump \
  --host localhost \
  --db myapp \
  --user postgres \
  --password mypassword
```

```bash .pgpass File theme={null}
# Create ~/.pgpass file with:
# hostname:port:database:username:password
# localhost:5432:myapp:postgres:mypassword

# Make sure file has correct permissions
chmod 600 ~/.pgpass

pgschema dump \
  --host localhost \
  --db myapp \
  --user postgres
```

**Password Resolution Order:**

1. Command line `--password` flag (highest priority)
2. `PGPASSWORD` environment variable
3. `.pgpass` file in user's home directory
4. PostgreSQL will prompt for password if none found

See [dotenv (.env)](/cli/dotenv) for detailed configuration options.

* SSL mode for database connection (env: PGSSLMODE). Valid values: `disable`, `allow`, `prefer`, `require`, `verify-ca`, `verify-full`. For `verify-ca` and `verify-full` modes, you can configure certificate paths using standard PostgreSQL environment variables (`PGSSLROOTCERT`, `PGSSLCERT`, `PGSSLKEY`).
* `--schema` - Schema name to dump

## Output Options

* `--multi-file` - Output schema to multiple files organized by object type. See [Multi-file Schema Management Workflow](/workflow/multi-file-schema). When enabled, creates a structured directory with:
  * Main file with header and include statements
  * Separate directories for different object types (tables, views, functions, etc.)
  * Each database object in its own file

  Requires `--file` to specify the main output file path.
* `--file` - Output file path (required when `--multi-file` is used). For single-file mode, this is optional (defaults to stdout). For multi-file mode, this specifies the main file path.
* Do not output object comment headers (e.g., `-- Name: users; Type: TABLE; Schema: -; Owner: -`). The dump header with pgschema version information is retained. This option is useful when you need pure DDL output without per-object commentary.

## Ignoring Objects

You can exclude specific database objects from dumps using a `.pgschemaignore` file. See [Ignore (.pgschemaignore)](/cli/ignore) for complete documentation.
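A related gotcha for the `.pgpass` setup shown in the connection options above: libpq ignores the file entirely unless its permissions are `0600` or stricter, which is why the examples run `chmod 600`. A quick shell check (the demo filename is illustrative):

```shell
# libpq skips .pgpass files with group/other access; verify 0600.
f=pgpass.demo
printf 'localhost:5432:myapp:postgres:mypassword\n' > "$f"
chmod 600 "$f"
stat -c '%a' "$f"   # -> 600
rm "$f"
```

If the permissions are looser, PostgreSQL clients silently fall back to prompting, which can look like the file is being ignored for no reason.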
## Examples

### Schema Dump

```bash theme={null}
# Dump default schema (public)
pgschema dump --host localhost --db myapp --user postgres

# Dump specific schema
pgschema dump --host localhost --db myapp --user postgres --schema analytics
```

Example output:

```sql theme={null}
--
-- pgschema database dump
--

-- Dumped from database version PostgreSQL 17.5
-- Dumped by pgschema version 1.0.0

--
-- Name: users; Type: TABLE; Schema: -; Owner: -
--

CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

--
-- Name: idx_users_email; Type: INDEX; Schema: -; Owner: -
--

CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
```

### Multi-File Output

```bash theme={null}
pgschema dump \
  --host localhost \
  --db myapp \
  --user postgres \
  --multi-file \
  --file schema.sql
```

This creates a structured directory layout:

```
schema.sql          # Main file with header and includes
├── tables/
│   ├── users.sql
│   ├── orders.sql
│   └── products.sql
├── views/
│   └── user_stats.sql
├── functions/
│   └── update_timestamp.sql
└── procedures/
    └── cleanup_old_data.sql
```

The main `schema.sql` file contains:

```sql theme={null}
--
-- pgschema database dump
--

-- Dumped from database version PostgreSQL 17.5
-- Dumped by pgschema version 1.0.0

\i tables/users.sql
\i tables/orders.sql
\i tables/products.sql
\i views/user_stats.sql
\i functions/update_timestamp.sql
\i procedures/cleanup_old_data.sql
```

Each individual file (e.g., `tables/users.sql`) contains the specific object definition:

```sql theme={null}
--
-- Name: users; Type: TABLE; Schema: -; Owner: -
--

CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

--
-- Name: idx_users_email; Type: INDEX; Schema: -; Owner: -
--

CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);

--
-- Name: users; Type: RLS; Schema: -; Owner: -
--

ALTER TABLE users ENABLE ROW LEVEL SECURITY;
```

## Schema Qualification

`pgschema` uses smart schema qualification to make dumps portable:

* **Objects within the dumped schema**: No schema qualifier added
* **Objects from other schemas**: Fully qualified with schema name

This approach makes the dump suitable as a baseline that can be applied to different schemas, particularly useful for multi-tenant applications.

```bash theme={null}
# Dump the 'public' schema
pgschema dump --host localhost --db myapp --user postgres --schema public
```

Output for objects within the 'public' schema (no qualification):

```sql theme={null}
-- Objects in the dumped schema have no qualifier
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),               -- Same schema, no qualifier
    product_id INTEGER REFERENCES catalog.products(id)  -- Different schema, qualified
);
```

This qualification strategy enables using one schema as a template for multiple tenants:

```bash theme={null}
# 1. Dump the template schema
pgschema dump --host localhost --db myapp --user postgres --schema template > template.sql

# 2. Apply to different tenant schemas
pgschema apply --host localhost --db myapp --user postgres --schema tenant1 --file template.sql
pgschema apply --host localhost --db myapp --user postgres --schema tenant2 --file template.sql
pgschema apply --host localhost --db myapp --user postgres --schema tenant3 --file template.sql
```

Because objects within the schema are not qualified, they will be created in whichever schema you specify during the apply command.

# Ignore (.pgschemaignore)

Source: https://www.pgschema.com/cli/ignore

`pgschema` supports ignoring specific database objects using a `.pgschemaignore` file, enabling gradual onboarding and selective schema management.

## Overview

The `.pgschemaignore` file allows you to exclude database objects from pgschema operations.
This is particularly useful when:

1. **Gradual Migration** - Incrementally adopt pgschema without managing all existing objects
2. **Temporary Objects** - Exclude temp tables, debug views, and development-only objects
3. **Legacy Objects** - Ignore deprecated objects while maintaining new schema management
4. **Environment-Specific Objects** - Skip objects that exist only in certain environments
5. **Role-Specific Privileges** - Ignore grants to roles that don't exist in the plan database

## File Format

The `.pgschemaignore` file is automatically loaded when present in the current directory. Create a `.pgschemaignore` file in your project directory using TOML format:

```toml theme={null}
[tables]
patterns = ["temp_*", "test_*", "!test_core_*"]

[views]
patterns = ["debug_*", "*_view_tmp", "analytics_*"]

[functions]
patterns = ["fn_test_*", "fn_debug_*"]

[procedures]
patterns = ["sp_temp_*", "sp_legacy_*"]

[types]
patterns = ["type_test_*"]

[sequences]
patterns = ["seq_temp_*", "seq_debug_*"]

[privileges]
patterns = ["deploy_bot", "admin_*"]

[default_privileges]
patterns = ["deploy_bot"]
```

## Pattern Syntax

### Wildcard Patterns

Use `*` to match any sequence of characters:

```toml theme={null}
[tables]
patterns = [
  "temp_*",       # Matches: temp_backup, temp_cache, temp_session
  "*_backup",     # Matches: users_backup, orders_backup
  "test_*_data"   # Matches: test_user_data, test_order_data
]
```

### Exact Patterns

Specify exact object names without wildcards:

```toml theme={null}
[tables]
patterns = ["legacy_table", "deprecated_users", "old_audit"]
```

### Negation Patterns

Use the `!` prefix to exclude objects from broader patterns:

```toml theme={null}
[tables]
patterns = [
  "test_*",        # Ignore all test_ tables
  "!test_core_*"   # But keep test_core_ tables
]
```

This will ignore `test_data`, `test_results` but keep `test_core_config`, `test_core_settings`.

## Privileges

The `[privileges]` and `[default_privileges]` sections filter GRANT statements by **grantee role name**.
This is useful when running `pgschema plan` with roles that don't exist in the plan database, or managing migrations across environments with different role configurations.

```toml theme={null}
[privileges]
patterns = [
  "deploy_bot",    # Ignore all grants to deploy_bot
  "admin_*",       # Ignore grants to any admin_* role
  "!admin_super"   # But keep grants to admin_super
]

[default_privileges]
patterns = ["deploy_bot"]   # Ignore ALTER DEFAULT PRIVILEGES for deploy_bot
```

The `[privileges]` section filters explicit grants (`GRANT ... TO role`), including column-level privileges. The `[default_privileges]` section filters `ALTER DEFAULT PRIVILEGES` statements.

## Triggers on Ignored Tables

Triggers can be defined on ignored tables. The table structure is not managed, but the trigger itself is.

```toml theme={null}
# .pgschemaignore
[tables]
patterns = ["external_*"]
```

```sql theme={null}
-- schema.sql
CREATE TRIGGER on_data_change
  AFTER INSERT ON external_users
  FOR EACH ROW
  EXECUTE FUNCTION sync_data();
```

The trigger will be managed while the `external_users` table structure remains unmanaged.

# Plan

Source: https://www.pgschema.com/cli/plan

The `plan` command generates a migration plan to apply a desired schema state to a target database schema. It compares the desired state (from a file) with the current state of a specific schema and shows what changes would be applied.

## Overview

The plan command follows infrastructure-as-code principles similar to Terraform:

1. Read the desired state from a SQL file (with include directive support)
2. Apply the desired state SQL to a temporary PostgreSQL instance (embedded by default, or external via `--plan-*` flags)
3. Connect to the target database and analyze the current state of the specified schema
4. Compare the two states
5. Generate a detailed migration plan with proper dependency ordering
6. Display the plan without making any changes

By default, pgschema uses an embedded PostgreSQL instance to validate your desired state SQL.
For schemas using PostgreSQL extensions or cross-schema references, you can use an external database instead. See [External Plan Database](/cli/plan-db) for details.

## Basic Usage

```bash theme={null}
# Generate plan to apply schema.sql to the target database
pgschema plan --host localhost --db myapp --user postgres --password mypassword --file schema.sql

# Plan with specific schema
pgschema plan --host localhost --db myapp --user postgres --password mypassword --schema myschema --file schema.sql

# Generate JSON output for automation
pgschema plan --host localhost --db myapp --user postgres --password mypassword --file schema.sql --output-json stdout

# Generate SQL migration script
pgschema plan --host localhost --db myapp --user postgres --password mypassword --file schema.sql --output-sql stdout

# Save multiple output formats simultaneously
pgschema plan --host localhost --db myapp --user postgres --password mypassword --file schema.sql \
  --output-human plan.txt --output-json plan.json --output-sql migration.sql

# Disable colored output (useful for scripts or CI/CD)
pgschema plan --host localhost --db myapp --user postgres --password mypassword --file schema.sql --no-color
```

## Connection Options

* `--host`: Database server host (env: PGHOST)
* `--port`: Database server port (env: PGPORT)
* `--db`: Database name (required) (env: PGDATABASE)
* `--user`: Database user name (required) (env: PGUSER)
* `--password`: Database password (optional, can also use PGPASSWORD env var or .pgpass file)

You can provide the password in multiple ways (in order of precedence):

```bash .env File (Recommended) theme={null}
# Create .env file with:
# PGHOST=localhost
# PGPORT=5432
# PGDATABASE=myapp
# PGUSER=postgres
# PGPASSWORD=mypassword

pgschema plan --file schema.sql
```

```bash Environment Variable theme={null}
PGPASSWORD=mypassword pgschema plan \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql
```

```bash Command Line Flag theme={null}
pgschema plan \
  --host localhost \
  --db myapp \
  --user postgres \
  --password mypassword \
  --file schema.sql
```

```bash .pgpass File theme={null}
# Create ~/.pgpass file with:
# hostname:port:database:username:password
# localhost:5432:myapp:postgres:mypassword

# Make sure file has correct permissions
chmod 600 ~/.pgpass

pgschema plan \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql
```

**Password Resolution Order:**

1. Command line `--password` flag (highest priority)
2. `PGPASSWORD` environment variable
3. `.pgpass` file in user's home directory
4. PostgreSQL will prompt for password if none found

See [dotenv (.env)](/cli/dotenv) for detailed configuration options.

* `--sslmode`: SSL mode for database connection (env: PGSSLMODE). Valid values: `disable`, `allow`, `prefer`, `require`, `verify-ca`, `verify-full`. For `verify-ca` and `verify-full` modes, you can configure certificate paths using standard PostgreSQL environment variables (`PGSSLROOTCERT`, `PGSSLCERT`, `PGSSLKEY`).
* `--schema`: Schema name to target for comparison

## Plan Database Options

By default, the plan command uses an embedded PostgreSQL instance to validate your desired state SQL. For schemas that require PostgreSQL extensions or have cross-schema references, you can provide an external database. See [External Plan Database](/cli/plan-db) for complete documentation.

## Plan Options

* `--file`: Path to desired state SQL schema file
* `--output-human`: Output human-readable format to stdout or file path. Examples:
  * `--output-human stdout` - Display to terminal
  * `--output-human plan.txt` - Save to file
* `--output-json`: Output JSON format to stdout or file path. This JSON format is the same format accepted by the [apply command](/cli/apply) for executing migration plans.
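Because the JSON plan embeds a fingerprint of the schema it was planned against (the `source_fingerprint.hash` field, which `apply` verifies before executing), automation can record that hash for audit. A minimal sketch with an illustrative, trimmed `plan.json`; the `sed` extraction is just a stand-in for a proper JSON reader such as `jq`:

```shell
# A trimmed, illustrative sample of the JSON plan format
cat > plan.json <<'EOF'
{
  "version": "1.0.0",
  "source_fingerprint": { "hash": "abc123def456" },
  "diffs": []
}
EOF

# Record the fingerprint the plan was generated against;
# apply fails safely if the live schema no longer matches it.
hash=$(sed -n 's/.*"hash": *"\([^"]*\)".*/\1/p' plan.json)
echo "planned against schema fingerprint: $hash"
```

In a real pipeline, `plan.json` would come from `pgschema plan ... --output-json plan.json` and later be handed to `pgschema apply --plan plan.json`.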
Examples:
  * `--output-json stdout` - Display to terminal
  * `--output-json plan.json` - Save to file

* `--output-sql`: Output SQL format to stdout or file path. Examples:
  * `--output-sql stdout` - Display to terminal
  * `--output-sql migration.sql` - Save to file
* `--no-color`: Disable colored output for human format when writing to stdout. This is useful for:
  * Scripts and automation that need to parse output
  * CI/CD environments that don't support color codes
  * Redirecting output to files where color codes are unwanted

Note: This flag only affects human format output to stdout. File output and JSON/SQL formats are never colored.

## Ignoring Objects

You can exclude specific database objects from migration planning using a `.pgschemaignore` file. See [Ignore (.pgschemaignore)](/cli/ignore) for complete documentation.

## Examples

### Default Human-Readable Output

```bash theme={null}
pgschema plan --host localhost --db myapp --user postgres --file schema.sql
```

```
Plan: 2 to add, 1 to modify, 1 to drop.

Summary by type:
  tables: 2 to add, 1 to modify, 1 to drop
  indexes: 1 to add
  functions: 1 to add

Tables:
  + users
  + posts
  ~ products
    + discount_rate (column)
    - old_price (column)
  - legacy_data

Indexes:
  + idx_users_email

Functions:
  + update_timestamp()

Transaction: true

DDL to be executed:
--------------------------------------------------
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE
);

CREATE TABLE IF NOT EXISTS posts (
    id SERIAL PRIMARY KEY,
    title TEXT NOT NULL,
    user_id INTEGER REFERENCES users(id)
);

CREATE INDEX idx_users_email ON users(email);

CREATE FUNCTION update_timestamp() RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

ALTER TABLE products ADD COLUMN discount_rate numeric(5,2);

ALTER TABLE products DROP COLUMN old_price;

DROP TABLE legacy_data;
```

### JSON Output for Automation

```bash theme={null}
pgschema plan \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql \
--output-json stdout ``` ```json theme={null} { "version": "1.0.0", "pgschema_version": "1.0.0", "created_at": "2025-01-15T10:30:00Z", "source_fingerprint": { "hash": "abc123def456..." }, "diffs": [ { "sql": "CREATE TABLE IF NOT EXISTS users (\n id SERIAL PRIMARY KEY,\n name VARCHAR(255) NOT NULL,\n email VARCHAR(255) UNIQUE\n);", "type": "table", "operation": "create", "path": "public.users", "source": { "schema": "public", "name": "users", "type": "BASE_TABLE", "columns": [ { "name": "id", "position": 1, "data_type": "integer", "is_nullable": false, "default_value": "nextval('users_id_seq'::regclass)" }, { "name": "name", "position": 2, "data_type": "character varying", "is_nullable": false, "character_maximum_length": 255 } ], "constraints": { "users_pkey": { "schema": "public", "table": "users", "name": "users_pkey", "type": "PRIMARY_KEY", "columns": [ { "name": "id", "position": 1 } ] } } }, "can_run_in_transaction": true } ] } ``` ### SQL Migration Script ```bash theme={null} pgschema plan \ --host localhost \ --db myapp \ --user postgres \ --file schema.sql \ --output-sql stdout ``` ```sql theme={null} CREATE TABLE IF NOT EXISTS users ( id SERIAL PRIMARY KEY, name VARCHAR(255) NOT NULL, email VARCHAR(255) UNIQUE ); CREATE TABLE IF NOT EXISTS posts ( id SERIAL PRIMARY KEY, title TEXT NOT NULL, user_id INTEGER REFERENCES users(id) ); CREATE INDEX idx_users_email ON users(email); CREATE FUNCTION update_timestamp() RETURNS TRIGGER AS $$ BEGIN NEW.updated_at = NOW(); RETURN NEW; END; $$ LANGUAGE plpgsql; ALTER TABLE products ADD COLUMN discount_rate numeric(5,2); ALTER TABLE products DROP COLUMN old_price; DROP TABLE legacy_data; ``` ### Plan for Specific Schema ```bash theme={null} pgschema plan \ --host localhost \ --db multi_tenant \ --user postgres \ --schema tenant1 \ --file tenant_schema.sql ``` ## Use Cases ### Pre-deployment Validation ```bash theme={null} # Check what changes will be applied before deployment pgschema plan \ --host prod-db \ --db myapp \ 
--user readonly \ --file new_schema.sql # If plan looks good, proceed with apply pgschema apply \ --host prod-db \ --db myapp \ --user deployer \ --file new_schema.sql ``` ### CI/CD Integration ```yaml theme={null} # GitHub Actions example - name: Validate Schema Changes run: | pgschema plan \ --host ${{ secrets.DB_HOST }} \ --db ${{ secrets.DB_NAME }} \ --user ${{ secrets.DB_USER }} \ --file schema/proposed.sql \ --output-json plan.json \ --no-color # Check if there are destructive changes if jq -e '.summary.to_destroy > 0' plan.json; then echo "Warning: Destructive changes detected!" exit 1 fi ``` ### Change Tracking ```bash theme={null} # Generate plan and save for audit DATE=$(date +%Y%m%d_%H%M%S) pgschema plan \ --host localhost \ --db myapp \ --user postgres \ --file schema.sql \ --output-sql "migrations/plan_${DATE}.sql" ``` ### Multiple Output Formats You can generate multiple output formats simultaneously: ```bash theme={null} # Save all formats for comprehensive documentation pgschema plan \ --host localhost \ --db myapp \ --user postgres \ --file schema.sql \ --output-human plan.txt \ --output-json plan.json \ --output-sql migration.sql # Display human-readable format and save JSON for automation pgschema plan \ --host localhost \ --db myapp \ --user postgres \ --file schema.sql \ --output-human stdout \ --output-json automation/plan.json \ --no-color ``` Note: Only one output format can use `stdout`. If no output flags are specified, the command defaults to human-readable output to stdout with colors enabled. ## Comparison Direction The plan command is **unidirectional**: it always plans changes from the current state (database) to the desired state (file). 
``` Current State (Database) → Desired State (File) ``` This ensures: * Consistent infrastructure-as-code workflow * Clear source of truth (the file) * Predictable change direction ## Include Directive Support The plan command supports include directives in schema files, allowing you to organize your schema across multiple files: ```sql theme={null} -- main.sql \i tables/users.sql \i tables/posts.sql \i functions/triggers.sql ``` The include processor will resolve all includes relative to the directory containing the main schema file. # External Plan Database Source: https://www.pgschema.com/cli/plan-db The `plan` and `apply` commands can use an external PostgreSQL database instead of the default embedded PostgreSQL instance for validating desired state schemas. This is useful in environments where embedded PostgreSQL has limitations. ## Overview By default, the `plan` command (and `apply` command in File Mode) spins up a temporary embedded PostgreSQL instance to apply and validate your desired state SQL. However, you can optionally provide your own PostgreSQL database using the `--plan-*` flags or `PGSCHEMA_PLAN_*` environment variables. **Note**: For the `apply` command, these options only apply when using File Mode (`--file`). When using Plan Mode (`--plan`), the plan has already been generated, so plan database options are not applicable. ### When to Use External Database Use an external database for plan generation when: * Your schema uses **PostgreSQL extensions** (like `hstore`, `postgis`, `uuid-ossp`, etc.) 
- The embedded database doesn't have extensions pre-installed, causing plan generation to fail with "type does not exist" errors ([#121](https://github.com/pgplex/pgschema/issues/121)) * Your schema has **cross-schema foreign key references** - The embedded approach only loads one schema at a time, breaking foreign key constraints that reference tables in other schemas ([#122](https://github.com/pgplex/pgschema/issues/122)) ### How It Works When using an external database: 1. **Temporary Schema Creation**: pgschema creates a temporary schema with a unique timestamp (e.g., `pgschema_tmp_20251030_154501_123456789`) 2. **SQL Application**: Your desired state SQL is applied to the temporary schema 3. **Schema Inspection**: The temporary schema is inspected to extract the desired state 4. **Comparison**: The desired state is compared with your target database's current state 5. **Cleanup**: The temporary schema is dropped (best effort) after plan generation ## Basic Usage ### With Plan Command ```bash theme={null} # Use external database for plan generation pgschema plan \ --file schema.sql \ --host localhost --db myapp --user postgres \ --plan-host localhost --plan-db pgschema_plan --plan-user postgres # With all options specified pgschema plan \ --file schema.sql \ --host localhost --port 5432 --db myapp --user postgres \ --plan-host localhost --plan-port 5432 --plan-db pgschema_plan --plan-user postgres --plan-password secret ``` ### With Apply Command (File Mode) ```bash theme={null} # Use external database when applying from file pgschema apply \ --file schema.sql \ --host localhost --db myapp --user postgres \ --plan-host localhost --plan-db pgschema_plan --plan-user postgres \ --auto-approve # The apply command generates a plan internally using the external database, # then applies the changes to the target database ``` ## Common Use Cases ### Using PostgreSQL Extensions If your schema uses extensions like `hstore`, `postgis`, or `uuid-ossp`, you need to install 
them in the plan database first: ```sql theme={null} -- In your plan database, install required extensions CREATE EXTENSION IF NOT EXISTS hstore; CREATE EXTENSION IF NOT EXISTS postgis; CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; ``` Then run plan or apply with the external database: ```bash theme={null} # Install extensions in plan database psql -h localhost -U postgres -d pgschema_plan -c "CREATE EXTENSION IF NOT EXISTS hstore;" # Now run plan - it will work because extensions are available pgschema plan \ --file schema.sql \ --host localhost --db myapp --user postgres \ --plan-host localhost --plan-db pgschema_plan --plan-user postgres # Or use apply command directly (File Mode) pgschema apply \ --file schema.sql \ --host localhost --db myapp --user postgres \ --plan-host localhost --plan-db pgschema_plan --plan-user postgres \ --auto-approve ``` Your `schema.sql` can now use extension types: ```sql theme={null} CREATE TABLE products ( id SERIAL PRIMARY KEY, attributes HSTORE, -- Works because hstore extension is installed location GEOGRAPHY(POINT, 4326) -- Works because postgis is installed ); ``` ### Handling Cross-Schema Foreign Keys If your schema has foreign keys that reference tables in other schemas, you need to create those schemas in the plan database: ```sql theme={null} -- In your plan database, create referenced schemas and tables CREATE SCHEMA IF NOT EXISTS auth; CREATE TABLE IF NOT EXISTS auth.users ( id SERIAL PRIMARY KEY, email TEXT NOT NULL ); CREATE SCHEMA IF NOT EXISTS billing; CREATE TABLE IF NOT EXISTS billing.customers ( id SERIAL PRIMARY KEY, user_id INTEGER REFERENCES auth.users(id) ); ``` Then run plan or apply: ```bash theme={null} # Set up referenced schemas in plan database psql -h localhost -U postgres -d pgschema_plan << 'EOF' CREATE SCHEMA IF NOT EXISTS auth; CREATE TABLE IF NOT EXISTS auth.users (id SERIAL PRIMARY KEY, email TEXT NOT NULL); EOF # Now run plan for your main schema that references auth.users pgschema plan \ --file 
schema.sql \
  --schema public \
  --host localhost --db myapp --user postgres \
  --plan-host localhost --plan-db pgschema_plan --plan-user postgres

# Or use apply command to plan and apply in one step (File Mode)
pgschema apply \
  --file schema.sql \
  --schema public \
  --host localhost --db myapp --user postgres \
  --plan-host localhost --plan-db pgschema_plan --plan-user postgres \
  --auto-approve
```

Your `schema.sql` can now reference tables in other schemas:

```sql theme={null}
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES auth.users(id),  -- Cross-schema FK works
    total DECIMAL(10,2)
);
```

## Configuration Options

### Using Command-Line Flags

* `--plan-host`: Plan database server host. If provided, uses external database instead of embedded PostgreSQL. Environment variable: `PGSCHEMA_PLAN_HOST`
* `--plan-port`: Plan database server port. Environment variable: `PGSCHEMA_PLAN_PORT`
* `--plan-db`: Plan database name. Required when `--plan-host` is provided. Environment variable: `PGSCHEMA_PLAN_DB`
* `--plan-user`: Plan database user name. Required when `--plan-host` is provided. Environment variable: `PGSCHEMA_PLAN_USER`
* `--plan-password`: Plan database password. Can also be provided via the `PGSCHEMA_PLAN_PASSWORD` environment variable.
* `--plan-sslmode`: Plan database SSL mode.
Valid values: `disable`, `allow`, `prefer`, `require`, `verify-ca`, `verify-full` Environment variable: `PGSCHEMA_PLAN_SSLMODE` ### Using Environment Variables ```bash .env File (Recommended) theme={null} # Target database connection PGHOST=localhost PGPORT=5432 PGDATABASE=myapp PGUSER=postgres PGPASSWORD=mypassword # Plan database connection (optional) PGSCHEMA_PLAN_HOST=localhost PGSCHEMA_PLAN_PORT=5432 PGSCHEMA_PLAN_DB=pgschema_plan PGSCHEMA_PLAN_USER=postgres PGSCHEMA_PLAN_PASSWORD=planpassword PGSCHEMA_PLAN_SSLMODE=prefer # Run plan with external database pgschema plan --file schema.sql # Or apply (File Mode) with external database pgschema apply --file schema.sql --auto-approve ``` ```bash Environment Variables theme={null} # Set environment variables export PGHOST=localhost export PGDATABASE=myapp export PGUSER=postgres export PGPASSWORD=mypassword export PGSCHEMA_PLAN_HOST=localhost export PGSCHEMA_PLAN_DB=pgschema_plan export PGSCHEMA_PLAN_USER=postgres export PGSCHEMA_PLAN_PASSWORD=planpassword export PGSCHEMA_PLAN_SSLMODE=prefer # Run plan pgschema plan --file schema.sql # Or apply (File Mode) pgschema apply --file schema.sql --auto-approve ``` ```bash Command Line Only theme={null} # Plan command - all options as flags (no environment variables) pgschema plan \ --file schema.sql \ --host localhost \ --db myapp \ --user postgres \ --password mypassword \ --plan-host localhost \ --plan-db pgschema_plan \ --plan-user postgres \ --plan-password planpassword # Apply command (File Mode) - all options as flags pgschema apply \ --file schema.sql \ --host localhost \ --db myapp \ --user postgres \ --password mypassword \ --plan-host localhost \ --plan-db pgschema_plan \ --plan-user postgres \ --plan-password planpassword \ --auto-approve ``` ## Database Permissions The plan database user needs the following permissions: ```sql theme={null} -- Minimum required permissions GRANT CREATE ON DATABASE pgschema_plan TO your_plan_user; GRANT USAGE ON SCHEMA public TO 
your_plan_user; ``` The user must be able to: * Create and drop schemas * Create tables, indexes, functions, and other schema objects * Set search\_path ## See Also * [Plan Command](/cli/plan) - Main plan command documentation * [Apply Command](/cli/apply) - Applying migration plans * [Environment Variables](/cli/dotenv) - Managing environment configuration # For Coding Agent Source: https://www.pgschema.com/coding-agent This guide shows how to integrate pgschema with AI coding agents. ## llms.txt pgschema provides official llms.txt files for AI agents that support [llms.txt](https://llmstxt.org/) context: * **Concise version**: [pgschema.com/llms.txt](https://www.pgschema.com/llms.txt) - Essential commands and patterns * **Full version**: [pgschema.com/llms-full.txt](https://www.pgschema.com/llms-full.txt) - Complete API reference and examples AI agents can automatically fetch these context files to understand pgschema's core workflow: dump → edit → plan → apply. ## CLAUDE.md / AGENTS.md For optimal code integration, add this configuration to your [CLAUDE.md](https://claude.md) or [AGENTS.md](https://agents.md/): ````markdown theme={null} ## Project Overview `pgschema` is a CLI tool that brings Terraform-style declarative schema migration workflow to PostgreSQL. It provides a dump/edit/plan/apply workflow for database schema changes: - **Dump**: Extract current schema in a developer-friendly format - **Edit**: Modify schema files to represent desired state - **Plan**: Compare desired state with current database and generate migration plan - **Apply**: Execute the migration with safety features like concurrent change detection The tool supports PostgreSQL versions 14-17 and handles most common schema objects (tables, indexes, functions, views, constraints, etc.). ## Core Workflow ### 1. 
Dump Current Schema Extract the current database schema in a clean, developer-friendly format: ```bash # Basic dump PGPASSWORD=password pgschema dump --host localhost --db myapp --user postgres > schema.sql # Multi-file output for large schemas pgschema dump --host localhost --db myapp --user postgres --multi-file --file schema.sql # Specific schema (not public) pgschema dump --host localhost --db myapp --user postgres --schema tenant1 > schema.sql ``` ### 2. Edit Schema Files Modify the dumped schema files to represent your desired state. pgschema compares the desired state (files) with current state (database) to generate migrations. ### 3. Generate Migration Plan Preview what changes will be applied: ```bash # Generate plan PGPASSWORD=password pgschema plan --host localhost --db myapp --user postgres --file schema.sql # Save plan in multiple formats pgschema plan --host localhost --db myapp --user postgres --file schema.sql \ --output-human plan.txt --output-json plan.json --output-sql migration.sql ``` ### 4. 
Apply Changes Execute the migration: ```bash # Apply from file (generate plan and apply) PGPASSWORD=password pgschema apply --host localhost --db myapp --user postgres --file schema.sql # Apply from pre-generated plan pgschema apply --host localhost --db myapp --user postgres --plan plan.json # Auto-approve for CI/CD pgschema apply --host localhost --db myapp --user postgres --file schema.sql --auto-approve ``` ## Key Features ### Connection Options Use these connection parameters for all commands: - `--host` - Database host (default: localhost) - `--port` - Database port (default: 5432) - `--db` - Database name (required) - `--user` - Database user (required) - `--password` - Database password (or use PGPASSWORD environment variable) - `--sslmode` - SSL mode (default: prefer, or use PGSSLMODE environment variable) - `--schema` - Target schema (default: public) ### Multi-File Schema Management For large schemas, use `--multi-file` to organize by object type: ```bash pgschema dump --host localhost --db myapp --user postgres --multi-file --file schema/main.sql ``` Creates organized directory structure: ``` schema/ ├── main.sql # Contains \i include directives ├── tables/ ├── views/ ├── functions/ └── indexes/ ``` ### Online DDL Support pgschema automatically uses safe patterns for production: - `CREATE INDEX CONCURRENTLY` for new indexes - `NOT VALID` + `VALIDATE CONSTRAINT` for constraints - Progress monitoring for long-running operations ### Safety Features - **Schema fingerprinting**: Detects concurrent schema changes - **Transaction handling**: Automatic rollback on failure - **Dependency ordering**: Objects created/dropped in correct order - **Lock timeout control**: `--lock-timeout` for production safety ## Common Use Cases ### Development Workflow ```bash # 1. Dump production schema pgschema dump --host prod --db myapp --user postgres > baseline.sql # 2. Make changes to baseline.sql (desired state) # 3. 
Test against staging pgschema plan --host staging --db myapp --user postgres --file baseline.sql # 4. Apply to staging pgschema apply --host staging --db myapp --user postgres --file baseline.sql ``` ### Multi-Tenant Schema Management ```bash # Dump template schema pgschema dump --host localhost --db myapp --user postgres --schema template > template.sql # Apply to multiple tenants pgschema apply --host localhost --db myapp --user postgres --schema tenant1 --file template.sql pgschema apply --host localhost --db myapp --user postgres --schema tenant2 --file template.sql ``` ### CI/CD Integration ```bash # Generate and save plan for review pgschema plan --host prod --db myapp --user postgres --file schema.sql \ --output-json plan.json --no-color # Apply in deployment pgschema apply --host prod --db myapp --user postgres --plan plan.json --auto-approve ``` ## Supported Schema Objects pgschema handles most common PostgreSQL schema objects: - **Tables**: Full DDL including columns, constraints, defaults - **Indexes**: Regular and partial indexes with concurrent creation - **Views**: Regular and materialized views - **Functions**: PL/pgSQL and SQL functions with all options - **Procedures**: Stored procedures - **Triggers**: Table triggers with dependency handling - **Types**: Custom types (enum, composite, domain) - **Constraints**: Primary keys, foreign keys, unique, check constraints - **Sequences**: Auto-generated and custom sequences - **Policies**: Row-level security policies - **Comments**: Comments on all supported objects For unsupported objects, see the documentation. 
## Additional Resources - **Concise reference**: [pgschema.com/llms.txt](https://www.pgschema.com/llms.txt) - Essential commands and patterns - **Complete reference**: [pgschema.com/llms-full.txt](https://www.pgschema.com/llms-full.txt) - Full API reference and examples - **Documentation**: [pgschema.com](https://www.pgschema.com) - Complete documentation ```` # FAQ Source: https://www.pgschema.com/faq ## Getting Help * Report bugs or request features? 👉 [GitHub Issues](https://github.com/pgplex/pgschema/issues) * Need commercial support? 👉 [Contact us](mailto:support@bytebase.com) ## Compatibility ### Which PostgreSQL versions are supported? pgschema is tested with PostgreSQL versions 14, 15, 16, 17, and 18. While it may work with older versions, we recommend using one of these tested versions for the best experience. ### What operating systems are supported? * **Linux** (AMD64, ARM64) * **macOS** (Apple Silicon, Intel) Pre-built binaries are available on the [releases page](https://github.com/pgplex/pgschema/releases). ### Can I use pgschema with cloud PostgreSQL services? Yes, pgschema works with any PostgreSQL-compatible database that you can connect to, including: * Amazon RDS for PostgreSQL * Google Cloud SQL for PostgreSQL * Azure Database for PostgreSQL * Supabase * Neon * And other PostgreSQL-compatible services ### How do I handle database-specific settings? pgschema focuses on schema structure (tables, indexes, functions, etc.) and doesn't manage: * Database-level settings * User/role management * Tablespace configuration * Extensions These should be managed separately through your infrastructure tooling. See [unsupported syntax](/syntax/unsupported). ### What happens if a migration fails? 1. For most changes, pgschema executes them in a transaction and will roll back on failure 2. For changes containing non-transactional statements (like `CREATE INDEX CONCURRENTLY`), each statement runs in its own transaction 3. 
Your database will remain in its previous state for transactional changes. When migrations include non-transactional DDL statements, partial application may occur if a later statement fails 4. You'll see a clear error message indicating what went wrong 5. Fix the issue in your schema file and try again ## Troubleshooting ### Why does pgschema show no changes when I expect some? Common causes: * The changes are in objects pgschema doesn't track yet * The database already matches your desired state * You're comparing against the wrong schema (check [`--schema`](/cli/plan#param-schema) flag) Run with `--debug` flag for more detailed output. ### How do I debug connection issues? For connection problems: 1. Verify PostgreSQL is running: `pg_isready -h host -p port` 2. Check your credentials and permissions 3. Ensure the database exists 4. Check network/firewall settings 5. Try connecting with `psql` using the same parameters ### What permissions does pgschema need? pgschema needs: * `CONNECT` privilege on the database * `USAGE` privilege on the target schema * `CREATE` privilege on the schema (for new objects) * Appropriate privileges for all object types you're managing For read-only operations (dump, plan), only `SELECT` privileges are needed. ## Advanced Topics ### How does pgschema handle dependencies? pgschema automatically: * Detects dependencies between database objects * Orders DDL statements to respect dependencies * Creates objects in the correct order * Drops objects in reverse dependency order ### How does pgschema detect concurrent schema changes? 
pgschema uses fingerprinting to detect if the database schema has changed since the migration plan was created:

* A fingerprint (hash) of the current schema is calculated during planning
* Before applying changes, pgschema verifies the fingerprint still matches
* If the schema has changed (concurrent modifications), the apply operation will fail safely
* This prevents conflicts and ensures migrations are applied to the expected schema state

### Can I use pgschema for database comparisons?

Yes! You can compare schemas across databases:

```bash theme={null}
# Dump both schemas
pgschema dump --host db1 --db myapp > schema1.sql
pgschema dump --host db2 --db myapp > schema2.sql

# Compare with diff
diff schema1.sql schema2.sql
```

## Why is pgschema free?

We have no current plans to charge for pgschema. pgschema is sponsored by [Bytebase](https://www.bytebase.com), a commercial database DevSecOps platform. Bytebase covers the needs of teams that require controls beyond schema migration — data access control, data masking, audit logging, and multi-database management across an organization.

## License

Apache License 2.0

# Introduction

Source: https://www.pgschema.com/index

Declarative schema migration for Postgres

## What is pgschema?

`pgschema` is a CLI tool that brings a Terraform-style declarative schema migration workflow to Postgres:

* **Dump** a Postgres schema in a developer-friendly format with support for all common objects
* **Edit** a schema to the desired state
* **Plan** a schema migration by comparing desired state with current database state
* **Apply** a schema migration with concurrent change detection, transaction-adaptive execution, and lock timeout control

Think of it as Terraform for your Postgres schemas - declare your desired state, generate a plan, preview changes, and apply them with confidence. Watch in action:

## Why pgschema?
Traditional database migration tools often require you to write migrations manually or rely on ORMs that may not support all Postgres features. pgschema takes a different approach: * **Declarative**: Define your desired schema state in SQL files * **Schema-based**: Compare at the schema level instead of the database level to reconcile schemas across different tenants * **Comprehensive**: Support most common Postgres objects under a schema * **Transparent**: Show exactly what SQL will be executed before applying changes ## Getting Started Install pgschema using Go or download pre-built binaries Learn the basics with a hands-on tutorial # Installation Source: https://www.pgschema.com/installation **Supported Operating Systems** Windows is not supported. Please use WSL (Windows Subsystem for Linux) or a Linux VM. * Linux (AMD64, ARM64) * macOS (Apple Silicon, Intel) ## macOS Install pgschema using Homebrew: ```bash theme={null} # Install pgschema brew tap pgplex/pgschema brew install pgschema # Verify installation pgschema --help ``` To update to the latest version: ```bash theme={null} brew upgrade pgschema ``` Download and install the pre-built binary for macOS. Replace `v1.0.0` with the [latest release version](https://github.com/pgplex/pgschema/releases). ```bash theme={null} # Download the binary curl -L https://github.com/pgplex/pgschema/releases/download/v1.0.0/pgschema-1.0.0-darwin-arm64 -o pgschema # Make it executable chmod +x pgschema # Move to PATH sudo mv pgschema /usr/local/bin/ # Verify installation pgschema --help ``` ## Debian/Ubuntu (DEB) Install pgschema using the DEB package on Debian, Ubuntu, or other Debian-based distributions. Replace `v1.0.0` and `1.0.0` with the [latest release version](https://github.com/pgplex/pgschema/releases). 
```bash theme={null} # Download the DEB package curl -LO https://github.com/pgplex/pgschema/releases/download/v1.0.0/pgschema_1.0.0_amd64.deb # Install the package sudo dpkg -i pgschema_1.0.0_amd64.deb # Verify installation pgschema --help ``` ```bash theme={null} # Download the DEB package curl -LO https://github.com/pgplex/pgschema/releases/download/v1.0.0/pgschema_1.0.0_arm64.deb # Install the package sudo dpkg -i pgschema_1.0.0_arm64.deb # Verify installation pgschema --help ``` ## RedHat/Fedora (RPM) Install pgschema using the RPM package on RedHat, Fedora, CentOS, or other RPM-based distributions. Replace `v1.0.0` and `1.0.0` with the [latest release version](https://github.com/pgplex/pgschema/releases). ```bash theme={null} # Download the RPM package curl -LO https://github.com/pgplex/pgschema/releases/download/v1.0.0/pgschema-1.0.0-1.x86_64.rpm # Install the package sudo rpm -i pgschema-1.0.0-1.x86_64.rpm # Verify installation pgschema --help ``` ```bash theme={null} # Download the RPM package curl -LO https://github.com/pgplex/pgschema/releases/download/v1.0.0/pgschema-1.0.0-1.aarch64.rpm # Install the package sudo rpm -i pgschema-1.0.0-1.aarch64.rpm # Verify installation pgschema --help ``` ## Pre-built Binaries (Linux) For Linux systems without package manager support, download pre-built binaries from the [GitHub releases page](https://github.com/pgplex/pgschema/releases). Replace `v1.0.0` with the [latest release version](https://github.com/pgplex/pgschema/releases). 
```bash theme={null} # Download the binary curl -L https://github.com/pgplex/pgschema/releases/download/v1.0.0/pgschema-linux-amd64 -o pgschema # Make it executable chmod +x pgschema # Move to PATH sudo mv pgschema /usr/local/bin/ # Verify installation pgschema --help ``` ```bash theme={null} # Download the binary curl -L https://github.com/pgplex/pgschema/releases/download/v1.0.0/pgschema-linux-arm64 -o pgschema # Make it executable chmod +x pgschema # Move to PATH sudo mv pgschema /usr/local/bin/ # Verify installation pgschema --help ``` ## Docker Run pgschema using Docker without installing it locally: ```bash theme={null} # Pull the latest image docker pull pgplex/pgschema:latest # Run pgschema commands docker run --rm pgplex/pgschema:latest --help ``` ```bash theme={null} # Dump a localhost database (requires --network host) docker run --rm --network host \ -e PGPASSWORD=mypassword \ pgplex/pgschema:latest dump \ --host localhost \ --port 5432 \ --db mydb \ --schema public \ --user myuser > schema.sql ``` ```bash theme={null} # Plan migration and save to plan.json on host docker run --rm --network host \ -v "$(pwd):/workspace" \ -e PGPASSWORD=mypassword \ pgplex/pgschema:latest plan \ --host localhost \ --port 5432 \ --db mydb \ --schema public \ --user myuser \ --file /workspace/schema.sql \ --output-json /workspace/plan.json ``` ```bash theme={null} # Apply changes using the saved plan.json docker run --rm --network host \ -v "$(pwd):/workspace" \ -e PGPASSWORD=mypassword \ pgplex/pgschema:latest apply \ --host localhost \ --port 5432 \ --db mydb \ --schema public \ --user myuser \ --plan /workspace/plan.json ``` **Important Docker Usage Notes:** * Use `--network host` when connecting to databases on localhost/127.0.0.1 * Mount volumes with `-v "$(pwd):/workspace"` to access local schema files * Files inside the container should be referenced with `/workspace/` prefix ## Go Install If you have Go 1.24.0 or later installed, you can install pgschema directly: 
```bash theme={null} go install github.com/pgplex/pgschema@latest ``` This will install the latest version of pgschema to your `$GOPATH/bin` directory. After installation, verify that pgschema is working correctly: ```bash theme={null} # View help to verify the installation pgschema --help ``` ## Build from Source Clone the repository and build from source: ```bash theme={null} # Clone the repository git clone https://github.com/pgplex/pgschema.git cd pgschema # Build the binary go build -v -o pgschema . # Optional: Install to system sudo mv pgschema /usr/local/bin/ ``` Building from source requires Go 1.24.0 or later. # Quickstart Source: https://www.pgschema.com/quickstart This guide will walk you through the core pgschema workflow: dumping a schema, making edits, planning changes, and applying migrations. By the end, you'll understand how to manage Postgres schemas declaratively. ## Prerequisites Before starting, ensure you have: * pgschema installed ([see installation guide](/installation)) * Access to a PostgreSQL database (14+) * Database credentials with appropriate permissions ## Step 1: Dump Your Current Schema First, let's capture the current state of your database schema: ```bash theme={null} $ PGPASSWORD=testpwd1 pgschema dump \ --host localhost \ --db testdb \ --user postgres \ --schema public > schema.sql ``` This creates a `schema.sql` file containing your complete schema definition. Let's look at what it contains: ```sql schema.sql theme={null} -- Example output CREATE TABLE users ( id SERIAL PRIMARY KEY, email VARCHAR(255) UNIQUE NOT NULL, created_at TIMESTAMP DEFAULT NOW() ); CREATE INDEX idx_users_email ON users(email); ``` The output is clean and developer-friendly, unlike the verbose output from `pg_dump`. ## Step 2: Make Schema Changes Now, let's modify the `schema.sql` file to add new features.
Edit the file to include: ```sql schema.sql (modified) theme={null} CREATE TABLE users ( id SERIAL PRIMARY KEY, email VARCHAR(255) UNIQUE NOT NULL, name VARCHAR(100), -- New column created_at TIMESTAMP DEFAULT NOW(), updated_at TIMESTAMP DEFAULT NOW() -- New column ); -- New index CREATE INDEX idx_users_name ON users(name); CREATE INDEX idx_users_email ON users(email); -- New table CREATE TABLE posts ( id SERIAL PRIMARY KEY, user_id INTEGER REFERENCES users(id), title VARCHAR(200) NOT NULL, content TEXT, created_at TIMESTAMP DEFAULT NOW() ); ``` ## Step 3: Generate Plan ```bash theme={null} $ PGPASSWORD=testpwd1 pgschema plan \ --host localhost \ --db testdb \ --user postgres \ --schema public \ --file schema.sql \ --output-json plan.json \ --output-human stdout ``` This will save the migration plan to `plan.json` and print the human-readable version to `stdout`: ```sql plan.txt theme={null} Plan: 2 to add, 1 to modify. Summary by type: tables: 1 to add, 1 to modify indexes: 1 to add Tables: + public.posts ~ public.users + column name + column updated_at Indexes: + public.idx_users_name Transaction: true DDL to be executed: -------------------------------------------------- ALTER TABLE users ADD COLUMN name VARCHAR(100); ALTER TABLE users ADD COLUMN updated_at TIMESTAMP DEFAULT NOW(); CREATE INDEX idx_users_name ON users (name); CREATE TABLE posts ( id SERIAL NOT NULL, user_id integer, title VARCHAR(200) NOT NULL, content text, created_at timestamp DEFAULT NOW(), PRIMARY KEY (id), FOREIGN KEY (user_id) REFERENCES users (id) ); ``` ## Step 4: Apply Changes When you're ready to apply the changes: ```bash theme={null} # Apply using the plan file from Step 3 $ PGPASSWORD=testpwd1 pgschema apply \ --host localhost \ --db testdb \ --user postgres \ --schema public \ --plan plan.json ``` ### Alternative: Direct Apply You can also skip the separate planning step and apply changes directly from the schema file: ```bash theme={null} # Apply directly from schema file 
(combines plan + apply) $ PGPASSWORD=testpwd1 pgschema apply \ --host localhost \ --db testdb \ --user postgres \ --schema public \ --file schema.sql ``` This approach automatically generates the plan internally and applies the changes in one command. The two-phase workflow (plan then apply) is recommended for production environments where you want to review and save the plan before execution. Direct apply is convenient for development environments. pgschema will: 1. Show you the migration plan 2. Ask for confirmation (unless using [`--auto-approve`](/cli/apply#param-auto-approve)) 3. Apply the changes 4. Report success or any errors # ALTER DEFAULT PRIVILEGES Source: https://www.pgschema.com/syntax/alter_default_privileges ## Syntax ```sql theme={null} alter_default_privileges ::= ALTER DEFAULT PRIVILEGES [ FOR ROLE role_name ] IN SCHEMA schema_name grant_or_revoke_clause grant_or_revoke_clause ::= GRANT privilege_list ON object_type TO grantee [ WITH GRANT OPTION ] | REVOKE [ GRANT OPTION FOR ] privilege_list ON object_type FROM grantee privilege_list ::= privilege [, ...] 
privilege ::= SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER -- for TABLES | USAGE | SELECT | UPDATE -- for SEQUENCES | EXECUTE -- for FUNCTIONS/ROUTINES | USAGE -- for TYPES object_type ::= TABLES | SEQUENCES | FUNCTIONS | ROUTINES | TYPES grantee ::= role_name | PUBLIC ``` pgschema understands the following `ALTER DEFAULT PRIVILEGES` features: * **Object types**: TABLES, SEQUENCES, FUNCTIONS, ROUTINES, TYPES * **Table privileges**: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER * **Sequence privileges**: USAGE, SELECT, UPDATE * **Function/Routine privileges**: EXECUTE * **Type privileges**: USAGE * **Schema scope**: Privileges apply to objects created in the specified schema * **WITH GRANT OPTION**: Allow grantee to grant the same privileges to others * **PUBLIC**: Special grantee representing all roles ## Examples ### Grant default table privileges ```sql theme={null} -- Grant SELECT on all future tables to PUBLIC ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO PUBLIC; -- Grant multiple privileges to a role ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT INSERT, UPDATE ON TABLES TO app_user; -- Grant with ability to re-grant ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT ON TABLES TO admin_user WITH GRANT OPTION; ``` ### Grant default function privileges ```sql theme={null} -- Grant EXECUTE on all future functions ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT EXECUTE ON FUNCTIONS TO api_user; ``` ### Grant default sequence privileges ```sql theme={null} -- Grant USAGE and SELECT on all future sequences ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE, SELECT ON SEQUENCES TO app_user; ``` ### Grant default type privileges ```sql theme={null} -- Grant USAGE on all future types ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE ON TYPES TO app_user; ``` ### Alter default privileges ```sql theme={null} -- Expand table privileges and remove sequence privileges -- Before: -- ALTER 
DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO app_user; -- ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE ON SEQUENCES TO app_user; -- After: -- ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE ON TABLES TO app_user; -- Migration generates: ALTER DEFAULT PRIVILEGES IN SCHEMA public REVOKE USAGE ON SEQUENCES FROM app_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT INSERT, UPDATE ON TABLES TO app_user; ``` ### Add grant option to existing privileges ```sql theme={null} -- Before: ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO app_user; -- After: ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE ON TABLES TO app_user WITH GRANT OPTION; -- Migration generates: ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT INSERT, UPDATE ON TABLES TO app_user WITH GRANT OPTION; ALTER DEFAULT PRIVILEGES IN SCHEMA public REVOKE SELECT ON TABLES FROM app_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO app_user WITH GRANT OPTION; ``` ### Revoke default privileges ```sql theme={null} -- Remove all default privileges -- Before: -- ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_user; -- ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT INSERT, UPDATE, DELETE ON TABLES TO app_user; -- After: (no default privileges) -- Migration generates: ALTER DEFAULT PRIVILEGES IN SCHEMA public REVOKE DELETE, INSERT, UPDATE ON TABLES FROM app_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public REVOKE SELECT ON TABLES FROM readonly_user; ``` ## Canonical Format When generating migration SQL, pgschema produces default privileges in the following canonical format: ```sql theme={null} -- For granting ALTER DEFAULT PRIVILEGES IN SCHEMA schema_name GRANT privilege_list ON object_type TO grantee[ WITH GRANT OPTION]; -- For revoking ALTER DEFAULT PRIVILEGES IN SCHEMA schema_name REVOKE privilege_list ON object_type FROM grantee; ``` **Key characteristics of the canonical 
format:** * Privileges are sorted alphabetically (e.g., `DELETE, INSERT, SELECT, UPDATE`) * Each grantee is handled separately (no combining multiple roles) * When changing WITH GRANT OPTION status, the privilege is revoked and re-granted * Object types use plural form (TABLES, SEQUENCES, FUNCTIONS, TYPES) # COMMENT ON Source: https://www.pgschema.com/syntax/comment_on ## Syntax ```sql theme={null} comment_on ::= COMMENT ON object_type object_name IS 'comment_text' | COMMENT ON object_type object_name IS NULL object_type ::= COLUMN | FUNCTION | INDEX | MATERIALIZED VIEW | PROCEDURE | TABLE | VIEW object_name ::= [schema.]name | [schema.]table_name.column_name -- for columns | [schema.]function_name(argument_types) -- for functions | [schema.]procedure_name(argument_types) -- for procedures ``` `COMMENT ON` is supported for the following objects: * Column * Function * Index * Materialized View * Procedure * Table * View # CREATE DOMAIN Source: https://www.pgschema.com/syntax/create_domain ## Syntax ```sql theme={null} create_domain ::= CREATE DOMAIN domain_name AS data_type [ DEFAULT expression ] [ NOT NULL ] [ constraint_definition [, ...] 
] domain_name ::= [schema.]name constraint_definition ::= CHECK ( expression ) | CONSTRAINT constraint_name CHECK ( expression ) ``` pgschema understands the following `CREATE DOMAIN` features: * **Schema-qualified names**: Domains can be defined in specific schemas * **Base types**: Any valid PostgreSQL data type as the underlying type * **Default values**: Default expressions for the domain * **NOT NULL constraints**: Enforce non-null values * **CHECK constraints**: * Anonymous CHECK constraints * Named constraints with CONSTRAINT clause * Multiple constraints per domain * Expressions using VALUE keyword to reference the domain value ## Canonical Format When generating migration SQL, pgschema produces domains in the following canonical format: ```sql theme={null} -- Simple domain without constraints CREATE DOMAIN [schema.]domain_name AS data_type; -- Domain with constraints (multi-line format) CREATE DOMAIN [schema.]domain_name AS data_type [ DEFAULT expression ] [ NOT NULL ] [ CONSTRAINT constraint_name CHECK ( expression ) ] [ CHECK ( expression ) ]; ``` **Key characteristics of the canonical format:** * Uses single-line format for simple domains without constraints * Uses multi-line format with indentation when domain has constraints, defaults, or NOT NULL * DEFAULT clause appears before NOT NULL when both are present * Named constraints use `CONSTRAINT constraint_name` prefix * Anonymous constraints are listed without the CONSTRAINT keyword * Each constraint is on its own indented line for readability * For DROP operations: `DROP DOMAIN IF EXISTS domain_name RESTRICT;` **Note**: Domain modifications in pgschema currently require dropping and recreating the domain, as PostgreSQL has limited ALTER DOMAIN support and complex dependencies make in-place modifications challenging. 
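## Examples

The canonical format above can be illustrated with a couple of hypothetical domains (the names `price` and `short_code` and their constraints are illustrative examples, not actual pgschema output):

```sql theme={null}
-- Domain with a default, NOT NULL, and both named and anonymous CHECK constraints;
-- DEFAULT precedes NOT NULL, and each constraint sits on its own indented line
CREATE DOMAIN price AS numeric(10,2)
    DEFAULT 0.00
    NOT NULL
    CONSTRAINT price_non_negative CHECK (VALUE >= 0)
    CHECK (VALUE < 1000000);

-- A simple domain without constraints uses the single-line form
CREATE DOMAIN short_code AS varchar(8);
```

Since domain modifications are handled as drop-and-recreate, changing such a domain would produce `DROP DOMAIN IF EXISTS price RESTRICT;` followed by a fresh `CREATE DOMAIN` statement.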
# CREATE FUNCTION Source: https://www.pgschema.com/syntax/create_function ## Syntax ```sql theme={null} create_function ::= CREATE [ OR REPLACE ] FUNCTION function_name ( [ parameter_list ] ) RETURNS return_type LANGUAGE language_name [ SECURITY { DEFINER | INVOKER } ] [ volatility ] [ STRICT ] AS function_body function_name ::= [schema.]name parameter_list ::= parameter [, ...] parameter ::= [ parameter_mode ] [ parameter_name ] parameter_type [ DEFAULT default_value ] parameter_mode ::= IN | OUT | INOUT argument_list ::= data_type [, ...] return_type ::= data_type | SETOF data_type | TABLE ( column_name data_type [, ...] ) | trigger language_name ::= plpgsql | sql | c | internal volatility ::= IMMUTABLE | STABLE | VOLATILE function_body ::= 'definition' | $$definition$$ | $tag$definition$tag$ ``` pgschema understands the following `CREATE FUNCTION` features: * **Schema-qualified names**: Functions can be defined in specific schemas * **Parameters**: * IN, OUT, INOUT parameter modes * Default parameter values * Named parameters with data types * **Return types**: * Scalar data types * SETOF for set-returning functions * TABLE for table-returning functions * trigger for trigger functions * **Languages**: Any PostgreSQL procedural language (plpgsql, sql, c, internal, etc.) * **Security**: DEFINER or INVOKER * **Volatility**: IMMUTABLE, STABLE, or VOLATILE * **STRICT**: Returns NULL automatically if any argument is NULL * **Function body**: Any valid function definition with proper dollar-quoting ## Canonical Format When generating migration SQL, pgschema produces functions in the following canonical format: ```sql theme={null} CREATE OR REPLACE FUNCTION [schema.]function_name( parameter_name parameter_type[ DEFAULT default_value][, ...] 
) RETURNS return_type LANGUAGE language_name SECURITY { DEFINER | INVOKER } [ IMMUTABLE | STABLE | VOLATILE ] [ STRICT ] AS $tag$function_body$tag$; ``` **Key characteristics of the canonical format:** * Always uses `CREATE OR REPLACE FUNCTION` for modifications * Parameters are formatted with line breaks for readability * Explicitly specifies SECURITY mode (`INVOKER` is default if not specified) * Includes volatility when specified * Includes STRICT when the function should return NULL on NULL input * Uses intelligent dollar-quoting with automatic tag generation to avoid conflicts with function body content * For DROP operations: `DROP FUNCTION IF EXISTS function_name(argument_types);` # CREATE INDEX Source: https://www.pgschema.com/syntax/create_index ## Syntax ```sql theme={null} create_index ::= CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ IF NOT EXISTS ] index_name ON table_name [ USING method ] ( index_element [, ...] ) [ WHERE condition ] index_name ::= name table_name ::= [schema.]name method ::= btree | hash | gist | spgist | gin | brin index_element ::= column_name [ direction ] [ operator_class ] | ( expression ) [ direction ] [ operator_class ] direction ::= ASC | DESC operator_class ::= name ``` pgschema understands the following `CREATE INDEX` features: * **Index types**: * Regular indexes * UNIQUE indexes for enforcing uniqueness * Primary key indexes (handled with constraints) * **Index methods**: btree (default), hash, gist, spgist, gin, brin * **Concurrent creation**: CONCURRENTLY option for creating indexes without blocking writes * **Columns and expressions**: * Simple column indexes * Multi-column indexes * Expression/functional indexes (e.g., LOWER(column), JSON operators) * **Sort direction**: ASC (default) or DESC for each column * **Operator classes**: Custom operator classes for specialized indexing * **Partial indexes**: WHERE clause for indexing subset of rows * **Schema qualification**: Indexes can be created in specific schemas ## 
Canonical Format When generating migration SQL, pgschema produces indexes in the following canonical format: ```sql theme={null} CREATE [UNIQUE] INDEX [CONCURRENTLY] IF NOT EXISTS index_name ON [schema.]table_name [USING method] ( column_or_expression[ direction][, ...] )[ WHERE condition]; ``` **Key characteristics of the canonical format:** * Always includes `IF NOT EXISTS` to prevent errors on re-running * Includes `UNIQUE` when the index enforces uniqueness * Includes `CONCURRENTLY` when specified (cannot run in a transaction) * Only includes `USING method` when not btree (the default) * Only includes direction when not ASC (the default) * JSON expressions are wrapped in double parentheses: `((data->>'key'))` * Partial index WHERE clause is included when present * For DROP operations: `DROP INDEX IF EXISTS [schema.]index_name;` **Note on transactions:** * Regular index creation can run in a transaction * `CREATE INDEX CONCURRENTLY` cannot run in a transaction and will be executed separately # CREATE MATERIALIZED VIEW Source: https://www.pgschema.com/syntax/create_materialized_view ## Syntax ```sql theme={null} create_materialized_view ::= CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] view_name AS select_statement [ WITH [ NO ] DATA ] view_name ::= [schema.]name select_statement ::= SELECT ... 
``` pgschema understands the following `CREATE MATERIALIZED VIEW` features: * **Schema-qualified names**: Materialized views can be defined in specific schemas * **IF NOT EXISTS**: Optional clause to avoid errors if the view already exists * **AS clause**: Any valid SELECT statement that defines the view's contents * **WITH \[NO] DATA**: Whether to populate the view immediately (WITH DATA) or defer population (WITH NO DATA) ## Canonical Format When generating migration SQL, pgschema produces materialized views in the following canonical format: ```sql theme={null} CREATE MATERIALIZED VIEW IF NOT EXISTS [schema.]view_name AS select_statement; ``` **Key characteristics of the canonical format:** * Always uses `CREATE MATERIALIZED VIEW IF NOT EXISTS` for creation * Schema qualification included when necessary * Preserves the original SELECT statement formatting * For modifications: Materialized views cannot use `CREATE OR REPLACE`, so changes require a drop/create cycle * For DROP operations: `DROP MATERIALIZED VIEW IF EXISTS view_name;` **Note on modifications:** Unlike regular views which support `CREATE OR REPLACE`, materialized views must be dropped and recreated when their definition changes. pgschema handles this automatically during migration planning. # CREATE POLICY Source: https://www.pgschema.com/syntax/create_policy ## Syntax ```sql theme={null} create_policy ::= CREATE POLICY policy_name ON table_name [ AS { PERMISSIVE | RESTRICTIVE } ] [ FOR command ] [ TO { role_name | PUBLIC | CURRENT_USER | SESSION_USER } [, ...] 
] [ USING ( using_expression ) ] [ WITH CHECK ( check_expression ) ] alter_rls ::= ALTER TABLE table_name ENABLE ROW LEVEL SECURITY | ALTER TABLE table_name DISABLE ROW LEVEL SECURITY | ALTER TABLE table_name FORCE ROW LEVEL SECURITY | ALTER TABLE table_name NO FORCE ROW LEVEL SECURITY policy_name ::= identifier table_name ::= [schema.]table command ::= ALL | SELECT | INSERT | UPDATE | DELETE using_expression ::= sql_expression check_expression ::= sql_expression ``` pgschema understands the following `CREATE POLICY` features: * **Policy names**: Unique identifiers for Row Level Security policies * **Table references**: Schema-qualified table names where policies apply * **Policy type**: * PERMISSIVE (default) - allows access when policy evaluates to true * RESTRICTIVE - denies access unless policy evaluates to true * **Commands**: * ALL - applies to all commands * SELECT - applies to SELECT queries * INSERT - applies to INSERT commands * UPDATE - applies to UPDATE commands * DELETE - applies to DELETE commands * **Roles**: Specific roles, PUBLIC, CURRENT\_USER, or SESSION\_USER that the policy applies to * **USING clause**: Boolean expression checked for existing rows (SELECT, UPDATE, DELETE) * **WITH CHECK clause**: Boolean expression checked for new rows (INSERT, UPDATE) ## Canonical Format When generating migration SQL, pgschema produces policies in the following canonical format: ```sql theme={null} CREATE POLICY policy_name ON [schema.]table_name [ FOR command ] [ TO role_name[, ...] 
] [ USING expression ] [ WITH CHECK expression ]; ``` **Key characteristics of the canonical format:** * Schema qualification is omitted for tables in the target schema * The AS PERMISSIVE/RESTRICTIVE clause is omitted when PERMISSIVE (the default) * FOR ALL is omitted as it's the default command * Roles are listed in a consistent order * USING and WITH CHECK expressions are preserved exactly as defined * For ALTER operations that only change roles, USING, or WITH CHECK: ```sql theme={null} ALTER POLICY policy_name ON [schema.]table_name [ TO role_name[, ...] ] [ USING expression ] [ WITH CHECK expression ]; ``` * For changes requiring recreation (command or permissive/restrictive changes): ```sql theme={null} DROP POLICY IF EXISTS policy_name ON [schema.]table_name; CREATE POLICY policy_name ON [schema.]table_name ...; ``` ## Row Level Security Policies require Row Level Security to be enabled on the table: ```sql theme={null} -- Enable RLS (policies apply to non-owner users) ALTER TABLE table_name ENABLE ROW LEVEL SECURITY; -- Force RLS (policies apply even to table owners) ALTER TABLE table_name FORCE ROW LEVEL SECURITY; ``` pgschema automatically handles RLS enablement when policies are present and generates the appropriate statements in the migration plan: * `ALTER TABLE ... ENABLE ROW LEVEL SECURITY` - enables RLS for the table * `ALTER TABLE ... DISABLE ROW LEVEL SECURITY` - disables RLS * `ALTER TABLE ... FORCE ROW LEVEL SECURITY` - forces RLS to apply even to table owners * `ALTER TABLE ... NO FORCE ROW LEVEL SECURITY` - removes the force option # CREATE PROCEDURE Source: https://www.pgschema.com/syntax/create_procedure ## Syntax ```sql theme={null} create_procedure ::= CREATE [ OR REPLACE ] PROCEDURE procedure_name ( [ parameter_list ] ) LANGUAGE language_name AS procedure_body procedure_name ::= [schema.]name parameter_list ::= parameter [, ...] 
parameter ::= [ parameter_mode ] [ parameter_name ] parameter_type [ DEFAULT default_value ] parameter_mode ::= IN | OUT | INOUT language_name ::= plpgsql | sql | c | internal procedure_body ::= 'definition' | $$definition$$ | $tag$definition$tag$ ``` pgschema understands the following `CREATE PROCEDURE` features: * **Schema-qualified names**: Procedures can be defined in specific schemas * **Parameters**: * IN, OUT, INOUT parameter modes * Default parameter values * Named parameters with data types * **Languages**: Any PostgreSQL procedural language (plpgsql, sql, c, internal, etc.) * **Procedure body**: Any valid procedure definition with proper dollar-quoting ## Canonical Format When generating migration SQL, pgschema produces procedures in the following canonical format: ```sql theme={null} CREATE OR REPLACE PROCEDURE [schema.]procedure_name( parameter_name parameter_type[ DEFAULT default_value][, ...] ) LANGUAGE language_name AS $tag$procedure_body$tag$; ``` **Key characteristics of the canonical format:** * Always uses `CREATE OR REPLACE PROCEDURE` for modifications * Parameters are formatted with line breaks for readability when multiple parameters exist * Uses intelligent dollar-quoting with automatic tag generation to avoid conflicts with procedure body content * For DROP operations: `DROP PROCEDURE IF EXISTS procedure_name(parameter_types);` * The parameter\_types in DROP statements contain only the data types, not parameter names # CREATE SEQUENCE Source: https://www.pgschema.com/syntax/create_sequence ## Syntax ```sql theme={null} create_sequence ::= CREATE SEQUENCE [ IF NOT EXISTS ] sequence_name [ AS data_type ] [ INCREMENT [ BY ] increment ] [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ] [ START [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ] [ OWNED BY { table_name.column_name | NONE } ] sequence_name ::= [schema.]name data_type ::= smallint | integer | bigint increment ::= integer minvalue ::= integer maxvalue ::= integer 
start ::= integer cache ::= integer ``` pgschema understands the following `CREATE SEQUENCE` features: * **Schema-qualified names**: Sequences can be defined in specific schemas * **Data types**: bigint (default), integer, smallint * **START WITH**: Initial value of the sequence (default: 1) * **INCREMENT BY**: Step value for sequence increments (default: 1) * **MINVALUE/NO MINVALUE**: Minimum value for the sequence (default: 1 for ascending sequences) * **MAXVALUE/NO MAXVALUE**: Maximum value for the sequence (default: 2^63-1 for bigint) * **CYCLE/NO CYCLE**: Whether the sequence should wrap around when reaching min/max values * **OWNED BY**: Associates the sequence with a table column for automatic cleanup ## Canonical Format When generating migration SQL, pgschema produces sequences in the following canonical format: ```sql theme={null} CREATE SEQUENCE IF NOT EXISTS [schema.]sequence_name [ START WITH start_value ] [ INCREMENT BY increment_value ] [ MINVALUE min_value ] [ MAXVALUE max_value ] [ CYCLE ]; ``` **Key characteristics of the canonical format:** * Always uses `IF NOT EXISTS` for CREATE operations * Only includes parameters that differ from PostgreSQL defaults: * START WITH is included if not 1 * INCREMENT BY is included if not 1 * MINVALUE is included if not 1 * MAXVALUE is included if not the data type maximum (9223372036854775807 for bigint) * CYCLE is only included if enabled (default is NO CYCLE) * For ALTER operations: `ALTER SEQUENCE sequence_name parameter_changes;` * For DROP operations: `DROP SEQUENCE IF EXISTS sequence_name CASCADE;` * CACHE specifications are included when they differ from the default (1) * Data type specifications (AS bigint/integer/smallint) are included when explicitly specified * OWNED BY relationships are tracked but managed separately through column dependencies # CREATE TABLE Source: https://www.pgschema.com/syntax/create_table ## Syntax ```sql theme={null} create_table ::= CREATE TABLE [ IF NOT EXISTS ] table_name ( 
[ { column_definition | like_clause } [, ...] ] [ table_constraint [, ...] ] ) [ PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } ) ] table_name ::= [schema.]name column_definition ::= column_name data_type [ NOT NULL | NULL ] [ DEFAULT default_value ] [ GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( identity_option [, ...] ) ] ] [ GENERATED ALWAYS AS ( expression ) STORED ] [ column_constraint [, ...] ] like_clause ::= LIKE source_table [ like_option [...] ] like_option ::= INCLUDING { DEFAULTS | CONSTRAINTS | INDEXES | COMMENTS | ALL } | EXCLUDING { DEFAULTS | CONSTRAINTS | INDEXES | COMMENTS | ALL } column_constraint ::= [ CONSTRAINT constraint_name ] { PRIMARY KEY | UNIQUE | CHECK ( expression ) | REFERENCES referenced_table [ ( referenced_column ) ] [ ON DELETE { CASCADE | RESTRICT | SET NULL | SET DEFAULT } ] [ ON UPDATE { CASCADE | RESTRICT | SET NULL | SET DEFAULT } ] [ DEFERRABLE [ INITIALLY DEFERRED ] ] } table_constraint ::= [ CONSTRAINT constraint_name ] { PRIMARY KEY ( column_name [, ...] ) | UNIQUE ( column_name [, ...] ) | CHECK ( expression ) | FOREIGN KEY ( column_name [, ...] ) REFERENCES referenced_table [ ( referenced_column [, ...] 
) ] [ ON DELETE { CASCADE | RESTRICT | SET NULL | SET DEFAULT } ] [ ON UPDATE { CASCADE | RESTRICT | SET NULL | SET DEFAULT } ] [ DEFERRABLE [ INITIALLY DEFERRED ] ] } data_type ::= { SMALLINT | INTEGER | BIGINT | DECIMAL | NUMERIC | REAL | DOUBLE PRECISION | SMALLSERIAL | SERIAL | BIGSERIAL | VARCHAR [ ( length ) ] | CHAR [ ( length ) ] | CHARACTER [ ( length ) ] | CHARACTER VARYING [ ( length ) ] | TEXT | DATE | TIME | TIMESTAMP | TIMESTAMPTZ | TIMETZ | BOOLEAN | UUID | JSONB | JSON | user_defined_type | ARRAY } identity_option ::= { START WITH start_value | INCREMENT BY increment_value | MAXVALUE max_value | NO MAXVALUE | MINVALUE min_value | NO MINVALUE | CYCLE | NO CYCLE } default_value ::= literal | expression | function_call ``` pgschema understands the following `CREATE TABLE` features: * **Schema-qualified names**: Tables can be created in specific schemas * **Columns**: * All PostgreSQL data types including user-defined types * NULL/NOT NULL constraints * DEFAULT values with expressions and function calls * IDENTITY columns with GENERATED ALWAYS or BY DEFAULT * Generated columns with GENERATED ALWAYS AS (expression) STORED * Serial types (SMALLSERIAL, SERIAL, BIGSERIAL) * **LIKE clause**: * Copy column definitions from another table * INCLUDING DEFAULTS, CONSTRAINTS, INDEXES, COMMENTS, or ALL * EXCLUDING options to omit specific elements * **Constraints**: * PRIMARY KEY (single or composite) * UNIQUE constraints (single or composite) * FOREIGN KEY references with referential actions (CASCADE, RESTRICT, SET NULL, SET DEFAULT) * CHECK constraints with arbitrary expressions * DEFERRABLE constraints with INITIALLY DEFERRED option * **Partitioning**: PARTITION BY RANGE, LIST, or HASH * **Row-level security**: RLS policies (handled separately) * **Indexes**: Created via separate CREATE INDEX statements * **Triggers**: Created via separate CREATE TRIGGER statements ## Canonical Format When dumping or creating tables, pgschema produces tables in the following 
canonical format. **All constraints** (PRIMARY KEY, UNIQUE, FOREIGN KEY, CHECK) are defined as **separate named table-level constraints** for consistency and explicit naming. **Note**: When generating migration SQL for `ALTER TABLE ADD COLUMN` operations, single-column constraints may be inlined to reduce the number of statements (see ALTER TABLE documentation). ### Simple Table ```sql theme={null} CREATE TABLE IF NOT EXISTS users ( id SERIAL, email VARCHAR(255) NOT NULL, age INTEGER, created_at TIMESTAMP DEFAULT now(), CONSTRAINT users_pkey PRIMARY KEY (id), CONSTRAINT users_email_key UNIQUE (email), CONSTRAINT users_age_check CHECK (age >= 0) ); ``` ### Complex Table ```sql theme={null} CREATE TABLE IF NOT EXISTS order_items ( order_id INTEGER NOT NULL, item_id INTEGER NOT NULL, supplier_id INTEGER, region VARCHAR(50), position INTEGER, quantity INTEGER DEFAULT 1, unit_price DECIMAL(10,2) NOT NULL, discount_percent DECIMAL(5,2) DEFAULT 0.00, created_at TIMESTAMP NOT NULL DEFAULT now(), updated_at TIMESTAMP, CONSTRAINT pk_order_item PRIMARY KEY (order_id, item_id), CONSTRAINT uk_order_item_position UNIQUE (order_id, position), CONSTRAINT order_items_order_id_fkey FOREIGN KEY (order_id) REFERENCES orders(id) ON DELETE CASCADE, CONSTRAINT order_items_item_id_fkey FOREIGN KEY (item_id) REFERENCES items(id), CONSTRAINT fk_order_item_supplier FOREIGN KEY (supplier_id, region) REFERENCES suppliers(id, region) ON DELETE SET NULL ON UPDATE CASCADE, CONSTRAINT order_items_quantity_check CHECK (quantity > 0), CONSTRAINT order_items_discount_percent_check CHECK (discount_percent >= 0 AND discount_percent <= 100), CONSTRAINT chk_valid_dates CHECK (created_at <= updated_at), CONSTRAINT chk_pricing CHECK (unit_price >= 0 AND (unit_price * quantity * (1 - discount_percent/100)) >= 0) ) PARTITION BY HASH (order_id); ``` ### Identity and Serial Columns ```sql theme={null} CREATE TABLE IF NOT EXISTS products ( id BIGSERIAL, uuid UUID DEFAULT gen_random_uuid(), version INTEGER 
GENERATED ALWAYS AS IDENTITY, name VARCHAR(255) NOT NULL, sku VARCHAR(50), price DECIMAL(10,2) DEFAULT 0.00, CONSTRAINT products_pkey PRIMARY KEY (id), CONSTRAINT products_sku_key UNIQUE (sku) ); ``` **Note on SERIAL types**: pgschema automatically detects and normalizes SERIAL columns (integer types with `nextval()` defaults) to their canonical SERIAL forms (SMALLSERIAL, SERIAL, BIGSERIAL) in CREATE TABLE statements. ### Constraint Handling Rules **CREATE TABLE Format** - All constraints are defined as separate named table-level constraints: * **PRIMARY KEY**: Both single-column and composite (`CONSTRAINT table_pkey PRIMARY KEY (col1, ...)`) * **UNIQUE**: Both single-column and composite (`CONSTRAINT table_col_key UNIQUE (col1, ...)`) * **FOREIGN KEY**: Both single-column and composite (`CONSTRAINT table_col_fkey FOREIGN KEY (col1, ...) REFERENCES...`) * **CHECK**: All CHECK constraints (`CONSTRAINT table_col_check CHECK (expression)`) **ALTER TABLE ADD COLUMN Format** (migration optimization) - Single-column constraints may be inlined: * **PRIMARY KEY**: `ALTER TABLE t ADD COLUMN id INTEGER CONSTRAINT t_pkey PRIMARY KEY` * **UNIQUE**: `ALTER TABLE t ADD COLUMN email TEXT CONSTRAINT t_email_key UNIQUE` * **FOREIGN KEY**: `ALTER TABLE t ADD COLUMN user_id INTEGER CONSTRAINT t_user_id_fkey REFERENCES users(id)` * **CHECK**: `ALTER TABLE t ADD COLUMN age INTEGER CONSTRAINT t_age_check CHECK (age > 0)` * Composite constraints still require separate ALTER TABLE ADD CONSTRAINT statements **Column Element Order** - When defining columns: 1. Column name and data type 2. NOT NULL (omitted if PRIMARY KEY, IDENTITY, or SERIAL) 3. DEFAULT value (omitted for SERIAL/IDENTITY columns) 4. 
GENERATED...AS IDENTITY **Constraint Naming Conventions** - Auto-generated constraint names follow PostgreSQL conventions: * PRIMARY KEY: `{table}_pkey` * UNIQUE: `{table}_{column}_key` * FOREIGN KEY: `{table}_{column}_fkey` * CHECK: `{table}_{column}_check` **Key characteristics of the canonical format:** * Always uses `CREATE TABLE IF NOT EXISTS` for safety and idempotency * **All constraints are table-level with explicit names** in CREATE TABLE output * Constraints are ordered: PRIMARY KEY, UNIQUE, FOREIGN KEY, CHECK * Column elements appear in order: data\_type, NOT NULL, DEFAULT, GENERATED...AS IDENTITY * SERIAL types are normalized to uppercase canonical forms (SMALLSERIAL, SERIAL, BIGSERIAL) * Data types include appropriate precision/scale modifiers (e.g., VARCHAR(255), NUMERIC(10,2)) * Schema prefixes are stripped from user-defined types when they match the target schema * Foreign key constraints include full referential action specifications when present * IDENTITY columns preserve their generation mode and options * NOT NULL is omitted for PRIMARY KEY, IDENTITY, and SERIAL columns (implicit) * DEFAULT is omitted for SERIAL and IDENTITY columns * CHECK constraints are simplified to developer-friendly format (e.g., `IN('val1', 'val2')` instead of `= ANY (ARRAY[...])`) * Partitioned tables include the PARTITION BY clause * Proper indentation with 4 spaces for readability * All table creation operations can run within transactions * For DROP operations: `DROP TABLE IF EXISTS table_name CASCADE;` # CREATE TRIGGER Source: https://www.pgschema.com/syntax/create_trigger ## Syntax ```sql theme={null} create_trigger ::= CREATE [ OR REPLACE ] [ CONSTRAINT ] TRIGGER trigger_name { BEFORE | AFTER | INSTEAD OF } trigger_event [ OR ... ] ON table_name [ NOT DEFERRABLE | DEFERRABLE [ INITIALLY { IMMEDIATE | DEFERRED } ] ] [ REFERENCING { OLD TABLE [ AS ] old_table_name | NEW TABLE [ AS ] new_table_name } [...] 
] [ FOR [ EACH ] { ROW | STATEMENT } ] [ WHEN ( condition ) ] EXECUTE FUNCTION function_name ( ) trigger_name ::= name trigger_event ::= INSERT | UPDATE [ OF column_name [, ...] ] | DELETE | TRUNCATE table_name ::= [schema.]name condition ::= expression function_name ::= [schema.]name ``` pgschema understands the following `CREATE TRIGGER` features: * **Trigger timing**: * BEFORE - executes before the triggering event * AFTER - executes after the triggering event * INSTEAD OF - replaces the triggering event (views only) * **Trigger events**: * INSERT, UPDATE, DELETE, TRUNCATE * Multiple events can be combined with OR * UPDATE OF column\_name - fires only when specific columns are updated * **Trigger level**: * FOR EACH ROW - fires once for each affected row (default) * FOR EACH STATEMENT - fires once for the entire SQL statement * **WHEN condition**: Optional condition that must be true for trigger to fire * Supports IS DISTINCT FROM / IS NOT DISTINCT FROM comparisons * Can reference OLD and NEW row values * **Constraint triggers**: * CREATE CONSTRAINT TRIGGER for triggers that can be deferred * DEFERRABLE with INITIALLY IMMEDIATE or INITIALLY DEFERRED * **Transition tables**: * REFERENCING OLD TABLE AS name - captures deleted/old rows for statement-level triggers * REFERENCING NEW TABLE AS name - captures inserted/new rows for statement-level triggers * **Schema-qualified names**: Tables and functions can be in specific schemas * **Function execution**: Triggers execute functions that return TRIGGER type ## Canonical Format When generating migration SQL, pgschema produces triggers in the following canonical format: ```sql theme={null} CREATE OR REPLACE TRIGGER trigger_name { BEFORE | AFTER | INSTEAD OF } { INSERT | UPDATE | DELETE | TRUNCATE } [ OR ... 
] ON [schema.]table_name FOR EACH { ROW | STATEMENT } [ WHEN (condition) ] EXECUTE FUNCTION function_name(); ``` **Key characteristics of the canonical format:** * Always uses `CREATE OR REPLACE TRIGGER` for modifications * Events are ordered consistently: INSERT, UPDATE, DELETE, TRUNCATE * Multi-line formatting with proper indentation for readability * Explicit FOR EACH clause (ROW is default if not specified) * WHEN conditions are properly parenthesized * Function names include parentheses even when no arguments * Schema qualifiers only included when table is in different schema than target * For DROP operations: `DROP TRIGGER IF EXISTS trigger_name ON table_name;` # CREATE TYPE Source: https://www.pgschema.com/syntax/create_type ## Syntax ```sql theme={null} create_type ::= CREATE TYPE type_name AS enum_type | CREATE TYPE type_name AS composite_type enum_type ::= ENUM ( [ enum_value [, ...] ] ) composite_type ::= ( attribute_definition [, ...] ) type_name ::= [schema.]name enum_value ::= 'value' attribute_definition ::= attribute_name data_type ``` pgschema understands the following `CREATE TYPE` features: * **Schema-qualified names**: Types can be defined in specific schemas * **ENUM types**: User-defined enumeration types * Empty enums (no values) * Single-quoted string values * Multiple values separated by commas * **Composite types**: Row types with named attributes * Multiple attributes with their data types * Any valid PostgreSQL data type for attributes * Named attributes for structured data ## Canonical Format When generating migration SQL, pgschema produces types in the following canonical format: ```sql theme={null} -- ENUM type with no values CREATE TYPE [schema.]type_name AS ENUM (); -- ENUM type with values (multi-line format) CREATE TYPE [schema.]type_name AS ENUM ( 'value1', 'value2', 'value3' ); -- Composite type (single line) CREATE TYPE [schema.]type_name AS (attribute1 data_type, attribute2 data_type); ``` **Key characteristics of the 
canonical format:** * Uses single-line format for empty enums * Uses multi-line format with indentation for enums with values * Each enum value is on its own indented line for readability * No comma after the last enum value * Composite types use single-line format with attributes separated by commas * For DROP operations: `DROP TYPE IF EXISTS type_name RESTRICT;` * ENUM types can be modified by adding values with `ALTER TYPE type_name ADD VALUE 'new_value' AFTER 'existing_value';` **Note**: Composite type modifications in pgschema currently require dropping and recreating the type, as PostgreSQL has limited ALTER TYPE support for composite types and complex dependencies make in-place modifications challenging. # CREATE VIEW Source: https://www.pgschema.com/syntax/create_view ## Syntax ```sql theme={null} create_view ::= CREATE [ OR REPLACE ] VIEW [ IF NOT EXISTS ] view_name AS select_statement view_name ::= [schema.]name select_statement ::= SELECT ... ``` pgschema understands the following `CREATE VIEW` features: * **Schema-qualified names**: Views can be defined in specific schemas * **OR REPLACE**: Optional clause to replace an existing view definition * **AS clause**: Any valid SELECT statement that defines the view's contents * **View dependencies**: Proper handling of view-to-view dependencies ## Canonical Format When generating migration SQL, pgschema produces views in the following canonical format: ```sql theme={null} CREATE OR REPLACE VIEW [schema.]view_name AS select_statement; ``` **Key characteristics of the canonical format:** * Always uses `CREATE OR REPLACE VIEW` for creation and modifications * Schema qualification included when necessary * Preserves the original SELECT statement formatting * For DROP operations: `DROP VIEW IF EXISTS view_name CASCADE;` **Note on modifications:** Regular views support `CREATE OR REPLACE`, allowing seamless updates to view definitions without dropping dependent objects. 
pgschema leverages this feature for efficient view migrations. # GRANT / REVOKE Source: https://www.pgschema.com/syntax/grant_revoke ## Syntax ```sql theme={null} grant_statement ::= GRANT privilege_list ON object_type object_name TO grantee [ WITH GRANT OPTION ] | GRANT column_privilege_list ON TABLE table_name TO grantee revoke_statement ::= REVOKE [ GRANT OPTION FOR ] privilege_list ON object_type object_name FROM grantee | REVOKE column_privilege_list ON TABLE table_name FROM grantee privilege_list ::= privilege [, ...] privilege ::= SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER -- for tables | USAGE | SELECT | UPDATE -- for sequences | EXECUTE -- for functions/procedures | USAGE -- for types/domains column_privilege_list ::= column_privilege [, ...] column_privilege ::= SELECT ( column_name [, ...] ) | INSERT ( column_name [, ...] ) | UPDATE ( column_name [, ...] ) | REFERENCES ( column_name [, ...] ) object_type ::= TABLE | SEQUENCE | FUNCTION | PROCEDURE | TYPE object_name ::= [schema.]name | [schema.]function_name(argument_types) -- for functions/procedures grantee ::= role_name | PUBLIC ``` pgschema understands the following `GRANT`/`REVOKE` features: * **Table privileges**: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER * **Column-level privileges**: SELECT, INSERT, UPDATE, REFERENCES on specific columns * **Sequence privileges**: USAGE, SELECT, UPDATE * **Function/Procedure privileges**: EXECUTE (with function signature) * **Type/Domain privileges**: USAGE * **WITH GRANT OPTION**: Allow grantee to grant the same privileges to others * **REVOKE GRANT OPTION FOR**: Revoke only the grant option while keeping the privilege * **PUBLIC**: Special grantee representing all roles ## Examples ### Grant table privileges ```sql theme={null} -- Grant SELECT to a role GRANT SELECT ON users TO readonly_role; -- Grant multiple privileges GRANT SELECT, INSERT, UPDATE, DELETE ON orders TO app_role; -- Grant with ability to re-grant 
GRANT SELECT ON products TO admin_role WITH GRANT OPTION; ``` ### Grant column-level privileges ```sql theme={null} -- Grant SELECT on specific columns only GRANT SELECT (id, username, email) ON users TO readonly_role; -- Grant UPDATE on specific columns GRANT UPDATE (status, updated_at) ON orders TO app_role; -- Combine table-level and column-level grants GRANT SELECT ON users TO readonly_role; GRANT UPDATE (email) ON users TO readonly_role; ``` ### Grant function privileges ```sql theme={null} -- Grant EXECUTE on a function (requires full signature) GRANT EXECUTE ON FUNCTION calculate_total(integer, numeric) TO api_role; -- Revoke default PUBLIC access to a function REVOKE EXECUTE ON FUNCTION get_user_data(integer) FROM PUBLIC; ``` ### Grant sequence privileges ```sql theme={null} -- Grant USAGE and SELECT on a sequence GRANT USAGE, SELECT ON SEQUENCE order_id_seq TO app_role; ``` ### Grant type privileges ```sql theme={null} -- Grant USAGE on a domain type GRANT USAGE ON TYPE email_address TO app_role; ``` ### Alter privileges ```sql theme={null} -- Change privileges: revoke INSERT, add UPDATE and DELETE -- Before: GRANT SELECT, INSERT ON inventory TO app_role; -- After: GRANT SELECT, UPDATE, DELETE ON inventory TO app_role; -- Migration generates: REVOKE INSERT ON TABLE inventory FROM app_role; GRANT DELETE, UPDATE ON TABLE inventory TO app_role; ``` ### Revoke grant option ```sql theme={null} -- Revoke only the grant option, keeping the privilege -- Before: GRANT SELECT ON employees TO manager_role WITH GRANT OPTION; -- After: GRANT SELECT ON employees TO manager_role; -- Migration generates: REVOKE GRANT OPTION FOR SELECT ON TABLE employees FROM manager_role; ``` ## Canonical Format When generating migration SQL, pgschema produces privileges in the following canonical format: ```sql theme={null} -- For granting GRANT privilege_list ON object_type object_name TO grantee[ WITH GRANT OPTION]; -- For revoking REVOKE privilege_list ON object_type object_name FROM 
grantee; -- For revoking only grant option REVOKE GRANT OPTION FOR privilege_list ON object_type object_name FROM grantee; ``` **Key characteristics of the canonical format:** * Always includes explicit `TABLE` keyword for table privileges * Privileges are sorted alphabetically (e.g., `DELETE, INSERT, SELECT, UPDATE`) * Function signatures include parameter names when available * Each privilege change is a separate statement (no combining multiple roles) # Unsupported Source: https://www.pgschema.com/syntax/unsupported `pgschema` operates at the schema level. This means it does not support SQL commands that operate on objects above the schema level. ## Cluster Level Commands * `CREATE DATABASE` * `CREATE ROLE` * `CREATE TABLESPACE` * `CREATE USER` ## Database Level Commands * `CREATE CAST` * `CREATE COLLATION` * `CREATE CONVERSION` * `CREATE EVENT TRIGGER` * `CREATE EXTENSION` * `CREATE FOREIGN DATA WRAPPER` * `CREATE LANGUAGE` * `CREATE OPERATOR` * `CREATE PUBLICATION` * `CREATE SCHEMA` * `CREATE SERVER` * `CREATE SUBSCRIPTION` * `CREATE TEXT SEARCH` * `CREATE USER MAPPING` ## Schema Level Commands We plan to support most schema-level commands in the future. Please check other sections for the schema-level database objects that we support right now. ## Rename `RENAME` is not supported. # GitOps Source: https://www.pgschema.com/workflow/gitops pgschema integrates seamlessly with GitOps workflows, providing a declarative approach to database schema management that fits naturally into your existing CI/CD pipelines. The typical workflow follows a review-then-apply pattern that ensures all schema changes are properly reviewed before being applied to production.
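As a concrete illustration of the review gate, the sketch below scans a plan's SQL statements for potentially destructive operations before approval. The `plan` structure and statement list shown here are hypothetical stand-ins for illustration, not pgschema's actual `plan.json` format.

```python
import re

# Statements a reviewer usually wants flagged before approving a migration.
DESTRUCTIVE = re.compile(r"^\s*(DROP|TRUNCATE)\b|\bDROP\s+COLUMN\b", re.IGNORECASE)

def destructive_statements(statements):
    """Return the subset of SQL statements that drop or truncate objects."""
    return [s for s in statements if DESTRUCTIVE.search(s)]

# Hypothetical plan content -- pgschema's real plan.json schema may differ.
plan = {"statements": [
    "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users (email);",
    "ALTER TABLE users DROP COLUMN legacy_flag;",
    "DROP TABLE IF EXISTS temp_import CASCADE;",
]}

flagged = destructive_statements(plan["statements"])
for stmt in flagged:
    print("DESTRUCTIVE:", stmt)
```

A CI job could fail, or require an extra approval label, whenever `flagged` is non-empty, complementing the human review of the posted plan comment.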
* Developers modify schema files in their feature branch * Schema changes are committed alongside application code * The desired state is expressed declaratively in SQL files * When a PR/MR is opened, CI automatically runs [pgschema plan](/cli/plan) * The plan output shows exactly what changes will be applied * The plan is posted as a comment on the PR/MR for review * Reviewers can see: * What tables, indexes, functions will be added/modified/dropped * The exact SQL statements that will be executed * Whether changes can run in a transaction * Any potentially destructive operations * Once reviewers approve the schema changes * The PR/MR is merged to the target branch (e.g., main, production) * After merge, the CD pipeline automatically runs [pgschema apply](/cli/apply) * The apply command uses the reviewed plan to execute migrations * Schema changes are applied to the target database * The deployment includes safety checks for concurrent modifications ## GitHub Actions A complete example repository with workflows and sample PRs is available. **Single-file Schema Workflow**: generates a migration plan on pull requests and applies migrations after merge, with a live example showing plan comments. **Modular Multi-file Schema Workflow**: handles modular schema files and applies modular migrations across multi-file schema changes. **Example GitHub Actions Workflows** ```yaml .github/workflows/pgschema-plan.yml theme={null} name: Schema Migration Plan on: pull_request: paths: - 'schema/schema.sql' jobs: plan: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 with: fetch-depth: 0 - uses: actions/setup-go@v5 with: go-version: '1.24' - name: Install pgschema run: go install github.com/pgplex/pgschema@latest - name: Generate migration plan env: PGPASSWORD: ${{ secrets.DB_PASSWORD }} run: | pgschema plan \ --host ${{ secrets.DB_HOST }} \ --port ${{ secrets.DB_PORT }} \ --db ${{ secrets.DB_NAME }} \ --user ${{ secrets.DB_USER }} \ --file schema/schema.sql \ --output-json plan.json \ --output-human plan.txt - name:
Comment PR with plan uses: actions/github-script@v7 with: script: | const fs = require('fs'); const planContent = fs.readFileSync('plan.txt', 'utf8'); // Find existing comment const { data: comments } = await github.rest.issues.listComments({ owner: context.repo.owner, repo: context.repo.repo, issue_number: context.issue.number, }); const botComment = comments.find(comment => comment.user.type === 'Bot' && comment.body.includes('## Migration Plan') ); const body = `## Migration Plan\n\`\`\`sql\n${planContent}\n\`\`\``; if (botComment) { await github.rest.issues.updateComment({ owner: context.repo.owner, repo: context.repo.repo, comment_id: botComment.id, body }); } else { await github.rest.issues.createComment({ owner: context.repo.owner, repo: context.repo.repo, issue_number: context.issue.number, body }); } - name: Upload plan artifact uses: actions/upload-artifact@v4 with: name: pgschema-plan-${{ github.event.pull_request.number }} path: plan.json retention-days: 30 ``` ```yaml .github/workflows/pgschema-apply.yml theme={null} name: Apply Schema Migration on: pull_request: types: [closed] branches: - main paths: - 'schema/schema.sql' jobs: apply: runs-on: ubuntu-latest if: github.event.pull_request.merged == true environment: production steps: - uses: actions/checkout@v4 - uses: actions/setup-go@v5 with: go-version: '1.24' - name: Install pgschema run: go install github.com/pgplex/pgschema@latest - name: Download plan artifact uses: dawidd6/action-download-artifact@v6 with: workflow: pgschema-plan.yml pr: ${{ github.event.pull_request.number }} name: pgschema-plan-${{ github.event.pull_request.number }} path: . - name: Validate plan exists run: | if [ ! 
-f "plan.json" ]; then echo "Error: plan.json not found" exit 1 fi - name: Apply migration id: apply env: PGPASSWORD: ${{ secrets.DB_PASSWORD }} run: | echo "Applying migration plan from PR #${{ github.event.pull_request.number }}" # Capture apply output APPLY_OUTPUT=$(pgschema apply \ --host ${{ secrets.DB_HOST }} \ --port ${{ secrets.DB_PORT }} \ --db ${{ secrets.DB_NAME }} \ --user ${{ secrets.DB_USER }} \ --plan plan.json \ --auto-approve \ 2>&1) APPLY_EXIT_CODE=$? echo "Apply output:" echo "$APPLY_OUTPUT" # Set output for the next step echo "output<<EOF" >> $GITHUB_OUTPUT echo "$APPLY_OUTPUT" >> $GITHUB_OUTPUT echo "EOF" >> $GITHUB_OUTPUT exit $APPLY_EXIT_CODE - name: Comment on PR uses: actions/github-script@v7 with: script: | await github.rest.issues.createComment({ owner: context.repo.owner, repo: context.repo.repo, issue_number: context.issue.number, body: '✅ Schema migration has been successfully applied to production!' }); ``` ## GitLab CI Demonstrates a complete workflow for reviewing and applying schema changes through merge requests.
```yaml .gitlab-ci.yml theme={null} stages: - plan - apply variables: PGPASSWORD: $DB_PASSWORD # Plan job - runs on merge requests and main branch pgschema-plan: stage: plan image: golang:1.24 script: - go install github.com/pgplex/pgschema@latest - | pgschema plan \ --host $DB_HOST \ --port ${DB_PORT:-5432} \ --db $DB_NAME \ --user $DB_USER \ --file schema/schema.sql \ --output-json plan.json \ --output-human plan.txt - cat plan.txt # Post plan as MR comment (only for merge requests) - | if [ -n "$CI_MERGE_REQUEST_IID" ]; then PLAN_CONTENT=$(cat plan.txt) curl --request POST \ --header "PRIVATE-TOKEN: $CI_JOB_TOKEN" \ --data-urlencode "body=## Migration Plan\n\`\`\`\n$PLAN_CONTENT\n\`\`\`" \ "$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/notes" fi artifacts: paths: - plan.json - plan.txt expire_in: 7 days rules: - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' changes: - schema/** - if: '$CI_COMMIT_BRANCH == "main"' changes: - schema/** # Apply job - manual trigger on main branch pgschema-apply: stage: apply image: golang:1.24 needs: ["pgschema-plan"] dependencies: - pgschema-plan script: - go install github.com/pgplex/pgschema@latest - | if [ ! -f plan.json ]; then echo "Error: plan.json not found" exit 1 fi - | pgschema apply \ --host $DB_HOST \ --port ${DB_PORT:-5432} \ --db $DB_NAME \ --user $DB_USER \ --plan plan.json \ --auto-approve - echo "✅ Schema migration applied successfully" when: manual # Require manual approval environment: name: production rules: - if: '$CI_COMMIT_BRANCH == "main"' changes: - schema/** ``` ## Azure Pipelines Demonstrates a complete workflow for reviewing and applying schema changes through pull requests. 
```yaml azure-pipelines.yml theme={null} trigger: branches: include: - main paths: include: - schema/* pr: branches: include: - main paths: include: - schema/* variables: PGPASSWORD: $(DB_PASSWORD) stages: - stage: Plan displayName: 'Generate Migration Plan' jobs: - job: PlanJob displayName: 'Run pgschema plan' pool: vmImage: 'ubuntu-latest' steps: - task: GoTool@0 inputs: version: '1.24' - script: | go install github.com/pgplex/pgschema@latest displayName: 'Install pgschema' - script: | pgschema plan \ --host $(DB_HOST) \ --port $(DB_PORT) \ --db $(DB_NAME) \ --user $(DB_USER) \ --file schema/schema.sql \ --output-json plan.json \ --output-human plan.txt displayName: 'Generate plan' - script: cat plan.txt displayName: 'Display plan' # Post plan as PR comment (for PRs only) - task: GitHubComment@0 condition: eq(variables['Build.Reason'], 'PullRequest') inputs: gitHubConnection: 'github-connection' repositoryName: '$(Build.Repository.Name)' comment: | ## Migration Plan
$(cat plan.txt)
- publish: plan.json artifact: migration-plan displayName: 'Publish plan artifact' - stage: Apply displayName: 'Apply Migration' dependsOn: Plan condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main')) jobs: - deployment: ApplyJob displayName: 'Apply pgschema migration' pool: vmImage: 'ubuntu-latest' environment: 'production' strategy: runOnce: deploy: steps: - task: GoTool@0 inputs: version: '1.24' - script: | go install github.com/pgplex/pgschema@latest displayName: 'Install pgschema' - download: current artifact: migration-plan displayName: 'Download plan artifact' - script: | if [ ! -f $(Pipeline.Workspace)/migration-plan/plan.json ]; then echo "Error: plan.json not found" exit 1 fi displayName: 'Verify plan exists' - script: | pgschema apply \ --host $(DB_HOST) \ --port $(DB_PORT) \ --db $(DB_NAME) \ --user $(DB_USER) \ --plan $(Pipeline.Workspace)/migration-plan/plan.json \ --auto-approve displayName: 'Apply migration' - script: echo "✅ Schema migration applied successfully" displayName: 'Report success' ``` # Local to Production Source: https://www.pgschema.com/workflow/local-to-production This workflow guides you through promoting schema changes across your environment pipeline. Start by developing and testing migrations locally, then systematically deploy them through dev, UAT, and staging environments before reaching production. Each environment serves as a validation checkpoint, ensuring your schema changes are safe and reliable before they impact your production database. Work on your feature in your development environment, making database changes as needed. ```bash theme={null} # Make database changes, add tables, modify columns, etc. # Test your application with the new schema ``` Once your changes are complete and tested, dump the development database schema. 
```bash theme={null} pgschema dump --host dev.db.com --db myapp --user dev > schema.sql ``` This creates a `schema.sql` file containing your complete schema definition. Add the schema file to your repository and create a pull request. ```bash theme={null} git add schema.sql git commit -m "feat: add user profiles and audit logging" git push origin feature/user-profiles # Create a pull request gh pr create --title "feat: add user profiles and audit logging" --body "Schema changes for user profiles feature" ``` The schema file serves as both documentation and the source of truth for migrations. In your CI pipeline, you would automatically generate a `plan.json` as well as the human-readable plan by running `pgschema plan` against your staging environment to show reviewers what changes will be applied. ```bash theme={null} pgschema plan --host staging.db.com --db myapp --user staging \ --file schema.sql --output-human stdout --output-json plan.json ``` This `plan.json` will later be used to deploy to both staging and production. Review the plan carefully to ensure the changes match your expectations. After code review and approval, merge your changes to the main branch. ```bash theme={null} git checkout main git pull ``` Both `schema.sql` and `plan.json` are merged together and are now ready for deployment. Apply the schema changes to your staging environment using the plan.json. ```bash theme={null} pgschema apply --host staging.db.com --db myapp --user staging --plan plan.json ``` If there have been concurrent schema changes since the plan was created, the apply will fail safely due to fingerprinting. In this case, regenerate the plan and try again. After successful staging validation, apply changes to production using the same plan.json.
```bash theme={null} # Apply with plan (add [`--auto-approve`](/cli/apply#param-auto-approve) for automated deployments) pgschema apply --host prod.db.com --db myapp --user prod --plan plan.json ``` If the production schema has drifted from the staging schema (manual changes, hotfixes, etc.), the fingerprint will fail and the apply will be rejected. This prevents applying changes to an unexpected schema state. # Modular Schema Files Source: https://www.pgschema.com/workflow/modular-schema-files For large schemas and teams, managing your database schema as a single file becomes unwieldy. The [`dump --multi-file`](/cli/dump#param-multi-file) option allows you to split your schema into modular files organized by database objects, enabling better collaboration, clearer ownership, and easier maintenance. This approach is ideal for: * Large schemas with hundreds of tables, views, and functions * Teams where different developers/teams own different parts of the schema * Schemas that need granular code review processes ## Workflow ### Step 1: Initialize Multi-File Structure Set up your schema directory structure and dump your existing schema into multiple files: ```bash theme={null} # Create schema directory mkdir -p schema # Dump existing database schema into multiple files pgschema dump --host localhost --db myapp --user postgres \ --multi-file --file schema/main.sql # Examine the generated structure tree schema/ # schema/ # ├── main.sql # Main entry file with \i directives # ├── tables/ # │ ├── users.sql # │ ├── orders.sql # │ └── products.sql # ├── views/ # │ ├── user_orders.sql # │ └── product_stats.sql # ├── functions/ # │ ├── calculate_discount.sql # │ └── update_inventory.sql # └── indexes/ # ├── idx_users_email.sql # └── idx_orders_date.sql # View the main.sql file cat schema/main.sql # \i tables/users.sql # \i tables/orders.sql # \i tables/products.sql # \i views/user_orders.sql # \i views/product_stats.sql # \i functions/calculate_discount.sql # \i 
functions/update_inventory.sql # \i indexes/idx_users_email.sql # \i indexes/idx_orders_date.sql # Create CODEOWNERS file for GitHub to define team ownership cat > .github/CODEOWNERS << 'EOF' schema/tables/users* @user-team schema/tables/orders* @orders-team schema/tables/products* @inventory-team schema/views/user_* @user-team schema/views/order_* @orders-team schema/functions/*inventory* @inventory-team EOF ``` Each database object is now in its own file, organized by object type for easy navigation and maintenance. ### Step 2: Development Workflow Use the multi-file structure to define the desired state of your schema components: ```bash theme={null} # Edit schema files to define the desired state # Example: Update users table to include new column cat > schema/tables/users.sql << 'EOF' CREATE TABLE users ( id SERIAL PRIMARY KEY, email VARCHAR(255) UNIQUE NOT NULL, name VARCHAR(100) NOT NULL, created_at TIMESTAMP DEFAULT NOW(), last_login TIMESTAMP -- New desired column ); EOF # Generate plan to see what changes are needed pgschema plan --host localhost --db myapp --user postgres \ --file schema/main.sql \ --output-json plan.json \ --output-human plan.txt # Review the plan to see migration steps cat plan.txt # Apply changes after review pgschema apply --host localhost --db myapp --user postgres --plan plan.json ``` Each team defines the desired state of their schema components, and pgschema generates the necessary migration steps. ### Step 3: Team Collaboration Teams work on separate branches and test independently before combining changes for production deployment: ```bash theme={null} # Team A develops and tests their changes in isolation git checkout -b feature/user-profile-enhancements # ... modify schema/tables/users.sql and related files ... 
# Test Team A changes independently pgschema plan --host test-a.db.com --db myapp --user postgres \ --file schema/main.sql --output-json team-a-plan.json pgschema apply --host test-a.db.com --db myapp --user postgres \ --plan team-a-plan.json # Team B develops and tests their changes separately git checkout -b feature/inventory-tracking # ... modify schema/tables/products.sql and related files ... # Test Team B changes independently pgschema plan --host test-b.db.com --db myapp --user postgres \ --file schema/main.sql --output-json team-b-plan.json pgschema apply --host test-b.db.com --db myapp --user postgres \ --plan team-b-plan.json # Combine changes for production deployment git checkout main git merge feature/user-profile-enhancements git merge feature/inventory-tracking # Generate single combined plan for production pgschema plan --host prod.db.com --db myapp --user postgres \ --file schema/main.sql \ --output-json prod_plan.json \ --output-human prod_plan.txt # Review combined plan before production deployment cat prod_plan.txt ``` Each team validates their changes independently in test environments, then combines into a coordinated production deployment. 
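As an aside, the `\i` resolution that drives `main.sql` throughout this workflow (plain file includes, plus the trailing-slash folder includes covered under File Organization Strategies below) can be approximated in a few lines of Python. This is an illustrative sketch of the documented ordering rules, not pgschema's actual implementation:

```python
import tempfile
from pathlib import Path

def resolve_includes(entry: Path) -> list[str]:
    """Approximate \\i resolution: a plain path includes one file; a path
    ending in '/' includes every .sql file under that folder, alphabetically
    and depth-first (non-.sql files are ignored)."""
    order, base = [], entry.parent
    for raw in entry.read_text().splitlines():
        line = raw.strip()
        if not line.startswith("\\i "):
            continue  # only \i directives affect the include order
        target = line[3:].strip()
        if target.endswith("/"):
            for sql in sorted((base / target).rglob("*.sql")):
                order.append(sql.relative_to(base).as_posix())
        else:
            order.append(target)
    return order

# Recreate the folder-include example from this page in a scratch directory.
root = Path(tempfile.mkdtemp())
for rel in ["types/address.sql", "types/order_status.sql", "types/user_status.sql",
            "functions/auth/validate_user.sql", "functions/calculate_total.sql"]:
    path = root / rel
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("-- placeholder\n")
(root / "main.sql").write_text("\\i types/\n\\i functions/\n")

order = resolve_includes(root / "main.sql")
print(order)  # types first (alphabetical), then functions with auth/ recursed
```

Sketching the resolver this way makes it easy to sanity-check a reorganized schema tree: run it over your `main.sql` and confirm the resulting order still creates types before the tables and functions that depend on them.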
### Step 4: Production Deployment Deploy schema changes safely to production using the established multi-file workflow: ```bash theme={null} # Generate production deployment plan pgschema plan --host prod.db.com --db myapp --user postgres \ --file schema/main.sql \ --output-json prod_plan.json \ --output-human prod_plan.txt # Review production plan thoroughly less prod_plan.txt # Apply to production (consider maintenance windows) pgschema apply --host prod.db.com --db myapp --user postgres \ --plan prod_plan.json \ --lock-timeout 30s # Verify deployment success mkdir -p verification/ pgschema dump --host prod.db.com --db myapp --user postgres \ --multi-file --file verification/main.sql # Compare entire schema structure recursively diff -r schema/ verification/ # Clean up verification files rm -rf verification/ ``` ## File Organization Strategies ### By Object Type (Default) pgschema generates files organized by database object type: ``` schema/ ├── main.sql # Generated with \i directives ├── tables/ ├── views/ ├── functions/ ├── procedures/ ├── indexes/ ├── triggers/ └── types/ ``` ### Custom Organization You can reorganize files into any structure you prefer and update the `\i` directives in `main.sql`: #### By Business Domain ``` schema/ ├── main.sql # Updated with custom \i paths ├── user_management/ │ ├── users_table.sql │ ├── user_sessions_table.sql │ └── hash_password_function.sql ├── inventory/ │ ├── products_table.sql │ ├── inventory_table.sql │ └── update_stock_procedure.sql └── orders/ ├── orders_table.sql ├── order_items_table.sql └── calculate_total_function.sql ``` Update `main.sql` to match your structure: ```sql theme={null} -- main.sql with custom organization \i user_management/users_table.sql \i user_management/user_sessions_table.sql \i inventory/products_table.sql \i inventory/inventory_table.sql \i orders/orders_table.sql \i orders/order_items_table.sql \i user_management/hash_password_function.sql \i inventory/update_stock_procedure.sql \i 
orders/calculate_total_function.sql ``` #### Hybrid Approach ``` schema/ ├── main.sql ├── core/ │ ├── tables/ │ └── functions/ ├── user_management/ │ ├── tables/ │ └── views/ └── reporting/ ├── views/ └── materialized_views/ ``` The key is ensuring your `main.sql` file contains the correct `\i` directives that match your chosen file organization. ### Folder Includes The `\i` directive supports including entire folders by adding a trailing slash (`/`). This automatically includes all `.sql` files in the folder in alphabetical order, and recursively processes any subdirectories: ``` schema/ ├── main.sql ├── types/ │ ├── address.sql │ ├── order_status.sql │ └── user_status.sql ├── tables/ │ ├── orders.sql │ └── users.sql └── functions/ ├── auth/ │ └── validate_user.sql ├── calculate_total.sql └── update_timestamp.sql ``` Using folder includes in `main.sql`: ```sql theme={null} -- Include all types (processed alphabetically: address, order_status, user_status) \i types/ -- Include all tables (processed alphabetically: orders, users) \i tables/ -- Include all functions recursively (auth/validate_user, calculate_total, update_timestamp) \i functions/ ``` Key behaviors: * Files are processed in **alphabetical order** by filename * Subdirectories are processed recursively using **depth-first search** * Only `.sql` files are included; other files are ignored * Folder paths must end with `/` to be recognized as folders * Error if folder doesn't exist or if you try to include a file as a folder ### Nested Includes The `\i` directive can be nested, allowing for hierarchical file organization: ``` schema/ ├── main.sql ├── core/ │ ├── init.sql # Includes all core components │ ├── tables/ │ │ ├── users.sql │ │ └── products.sql │ └── functions/ │ └── utilities.sql └── modules/ ├── auth/ │ ├── auth.sql # Includes all auth components │ ├── tables.sql │ └── functions.sql └── reporting/ └── reporting.sql # Includes all reporting components ``` With nested includes: ```sql theme={null} 
-- main.sql \i core/init.sql \i modules/auth/auth.sql \i modules/reporting/reporting.sql -- core/init.sql \i tables/users.sql \i tables/products.sql \i functions/utilities.sql -- modules/auth/auth.sql \i tables.sql \i functions.sql ``` You can also combine folder includes with nested approaches: ```sql theme={null} -- main.sql - mix folder and file includes \i core/ \i modules/auth/auth.sql \i modules/reporting/ -- core/ folder contains individual files that get included alphabetically -- modules/reporting/ folder gets all files included recursively ``` This approach allows for modular organization where each subsystem manages its own includes. # Online DDL Source: https://www.pgschema.com/workflow/online-ddl `pgschema` picks migration strategies that minimize downtime and blocking when making schema changes to production PostgreSQL databases. ## Index Operations ### Adding Indexes When adding new indexes, `pgschema` automatically uses `CREATE INDEX CONCURRENTLY` to avoid blocking table writes: ```sql theme={null} -- Generated migration CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email_status ON users (email, status DESC); -- pgschema:wait SELECT COALESCE(i.indisvalid, false) as done, CASE WHEN p.blocks_total > 0 THEN p.blocks_done * 100 / p.blocks_total ELSE 0 END as progress FROM pg_class c LEFT JOIN pg_index i ON c.oid = i.indexrelid LEFT JOIN pg_stat_progress_create_index p ON c.oid = p.index_relid WHERE c.relname = 'idx_users_email_status'; ``` The `pgschema:wait` directive blocks the schema migration execution, polls the database to monitor progress, displays the progress to the user, and automatically continues when the operation completes. It tracks: * **Completion status**: Whether the index is valid and ready * **Progress percentage**: Based on `pg_stat_progress_create_index` ### Modifying Indexes When changing existing indexes, `pgschema` uses a safe "create new, drop old" pattern: ```sql theme={null} -- 1. 
Create new index with temporary name CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email_pgschema_new ON users (email, status); -- 2. Wait for completion with progress monitoring -- pgschema:wait SELECT COALESCE(i.indisvalid, false) as done, CASE WHEN p.blocks_total > 0 THEN p.blocks_done * 100 / p.blocks_total ELSE 0 END as progress FROM pg_class c LEFT JOIN pg_index i ON c.oid = i.indexrelid LEFT JOIN pg_stat_progress_create_index p ON c.oid = p.index_relid WHERE c.relname = 'idx_users_email_pgschema_new'; -- 3. Drop old index (fast operation) DROP INDEX idx_users_email; -- 4. Rename new index to final name ALTER INDEX idx_users_email_pgschema_new RENAME TO idx_users_email; ``` This approach ensures: * No downtime during index creation * Query performance is maintained throughout the migration * Rollback is possible until the old index is dropped ## Constraint Operations **Supported constraint types:** * **Check constraints**: Data validation rules * **Foreign key constraints**: Referential integrity with full options * **NOT NULL constraints**: Column nullability (via check constraint pattern) ### CHECK Constraints Constraints are added using PostgreSQL's `NOT VALID` pattern: ```sql theme={null} -- 1. Add constraint without validation (fast) ALTER TABLE orders ADD CONSTRAINT check_amount_positive CHECK (amount > 0) NOT VALID; -- 2. Validate existing data (may take time, but non-blocking) ALTER TABLE orders VALIDATE CONSTRAINT check_amount_positive; ``` ### Foreign Key Constraints Foreign keys use the same `NOT VALID` pattern: ```sql theme={null} ALTER TABLE employees ADD CONSTRAINT employees_company_fkey FOREIGN KEY (tenant_id, company_id) REFERENCES companies (tenant_id, company_id) ON UPDATE CASCADE ON DELETE RESTRICT DEFERRABLE NOT VALID; ALTER TABLE employees VALIDATE CONSTRAINT employees_company_fkey; ``` ### NOT NULL Constraints Adding `NOT NULL` constraints uses a three-step process: ```sql theme={null} -- 1. 
Add check constraint (non-blocking) ALTER TABLE users ADD CONSTRAINT email_not_null CHECK (email IS NOT NULL) NOT VALID; -- 2. Validate constraint (ensures data compliance) ALTER TABLE users VALIDATE CONSTRAINT email_not_null; -- 3. Set NOT NULL (fast, since data is already validated) ALTER TABLE users ALTER COLUMN email SET NOT NULL; ``` ## Examples ### Adding a Multi-Column Index **Schema change:** ```sql theme={null} -- old.sql CREATE TABLE users (id int, email text, status text); -- new.sql CREATE TABLE users (id int, email text, status text); CREATE INDEX idx_users_email_status ON users (email, status DESC); ``` **Generated migration:** ```sql theme={null} CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email_status ON users (email, status DESC); -- pgschema:wait SELECT COALESCE(i.indisvalid, false) as done, CASE WHEN p.blocks_total > 0 THEN p.blocks_done * 100 / p.blocks_total ELSE 0 END as progress FROM pg_class c LEFT JOIN pg_index i ON c.oid = i.indexrelid LEFT JOIN pg_stat_progress_create_index p ON c.oid = p.index_relid WHERE c.relname = 'idx_users_email_status'; ``` ### Adding a Foreign Key Constraint **Schema change:** ```sql theme={null} -- old.sql CREATE TABLE orders (id int, customer_id int); -- new.sql CREATE TABLE orders ( id int, customer_id int, CONSTRAINT orders_customer_fkey FOREIGN KEY (customer_id) REFERENCES customers(id) ); ``` **Generated migration:** ```sql theme={null} ALTER TABLE orders ADD CONSTRAINT orders_customer_fkey FOREIGN KEY (customer_id) REFERENCES customers(id) NOT VALID; ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_fkey; ``` For more examples, see the test cases in [testdata/diff/online](https://github.com/pgplex/pgschema/tree/main/testdata/diff/online) which demonstrate online DDL patterns for various PostgreSQL schema operations. # Work with ORM Source: https://www.pgschema.com/workflow/orm pgschema works with a full schema. Either the ORM itself or a custom program is needed to generate that full schema. 
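In either case, the end product is a single SQL file containing the complete desired schema. As a stdlib-only illustration (the export command below is a placeholder for your ORM's own tooling, not a pgschema feature), a thin wrapper can capture the export and write the file that `pgschema plan --file` will read:

```python theme={null}
import subprocess
import sys

def export_full_schema(export_cmd: list[str], out_path: str) -> None:
    """Run an ORM schema-export command and save the DDL it prints
    on stdout to a file suitable for `pgschema plan --file`."""
    result = subprocess.run(export_cmd, capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"schema export failed: {result.stderr.strip()}")
    ddl = result.stdout
    if ddl and not ddl.endswith("\n"):
        ddl += "\n"  # ensure the dump ends with a newline
    with open(out_path, "w") as f:
        f.write(ddl)

# Example (placeholder command; substitute your ORM's export step):
# export_full_schema(["drizzle-kit", "export"], "desired-schema.sql")
```

The wrapper treats the export step as a black box, so the same script works whether the DDL comes from a CLI like `drizzle-kit export` or from a custom program like the TypeORM example below.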
Benefits over ORM's built-in migrations: 1. [**Concurrent change detection**](/workflow/plan-review-apply#detect-concurrent-changes-with-fingerprinting): Prevents conflicts when multiple developers modify the database 2. [**Online DDL**](/workflow/plan-review-apply#online-ddl): Safe migrations with minimal downtime [Drizzle](https://orm.drizzle.team/) supports exporting the full schema with `drizzle-kit export`: ```bash theme={null} # Edit your Drizzle schema file (e.g., schema.ts) # Make desired changes to tables, columns, indexes, etc. # Export Drizzle schema to SQL drizzle-kit export > desired-schema.sql # Plan migration pgschema plan \ --host localhost \ --db mydb \ --user postgres \ --schema public \ --file desired-schema.sql # Apply migration pgschema apply \ --host localhost \ --db mydb \ --user postgres \ --schema public \ --file desired-schema.sql ``` If your ORM doesn't support exporting the full schema, you need to write a small program. Here's an example for [TypeORM](https://github.com/typeorm/typeorm): ```typescript theme={null} // export-full-schema.ts import { DataSource } from "typeorm"; import { entities } from "./entities"; async function exportFullSchema() { const tempDataSource = new DataSource({ type: "postgres", host: "localhost", database: "temp_db", // Use a temporary empty database synchronize: false, logging: false, entities: entities, }); await tempDataSource.initialize(); // This generates the full schema against the empty database const sqlInMemory = await tempDataSource.driver .createSchemaBuilder() .log(); sqlInMemory.upQueries.forEach(query => { console.log(query.query + ";"); }); await tempDataSource.destroy(); } // Run the export exportFullSchema(); ``` ```bash theme={null} # Edit your TypeORM schema file (e.g., src/entities/xxx.ts) # Make desired changes to tables, columns, indexes, etc.
# Run the export script npx ts-node export-full-schema.ts > desired-schema.sql # Plan migration pgschema plan \ --host localhost \ --db mydb \ --user postgres \ --schema public \ --file desired-schema.sql # Apply migration pgschema apply \ --host localhost \ --db mydb \ --user postgres \ --schema public \ --file desired-schema.sql ``` # Plan, Review, Apply Source: https://www.pgschema.com/workflow/plan-review-apply The Plan-Review-Apply workflow separates change planning from execution, creating a safety gate that prevents unintended database modifications: 1. **Plan** - Generate a detailed migration plan showing exactly what will change 2. **Review** - Examine the plan for correctness, safety, and business impact 3. **Apply** - Execute the reviewed and approved changes This separation allows for thorough validation and team collaboration before making any database changes. ## Workflow ### Step 1: Plan Your Changes Generate a migration plan to see what changes pgschema will make: ```bash theme={null} # Print a human-readable plan pgschema plan --host localhost --db myapp --user postgres --file schema.sql # Or save the plan to a file for review pgschema plan --host localhost --db myapp --user postgres --file schema.sql \ --output-human plan.txt \ --output-json plan.json ``` ### Step 2: Review the Plan Examine the generated plan carefully: ```bash theme={null} # View the saved plan cat plan.txt ``` ### Step 3: Apply the Changes Once the plan is approved, apply the changes: ```bash theme={null} # Apply with interactive confirmation # For automated deployments, use --auto-approve pgschema apply --host localhost --db myapp --user postgres --plan plan.json ``` ## Detect Concurrent Changes with Fingerprinting Fingerprinting ensures that the exact changes you reviewed in the plan are the changes that get applied, maintaining the integrity of the Plan-Review-Apply workflow. pgschema uses a **fingerprinting mechanism** to ensure consistency between the plan and apply phases. 
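Conceptually, a fingerprint is just a stable hash of the schema's current state. Here is a minimal sketch of the idea (pgschema's actual normalization and hashing details may differ):

```python theme={null}
import hashlib

def schema_fingerprint(schema_sql: str) -> str:
    """Hash a schema dump so the same logical schema always yields
    the same fingerprint."""
    # Normalize trailing spaces and blank lines so cosmetic differences
    # don't change the hash; any real DDL change does.
    normalized = "\n".join(
        line.rstrip() for line in schema_sql.splitlines() if line.strip()
    )
    return hashlib.sha256(normalized.encode()).hexdigest()

plan_fp = schema_fingerprint("CREATE TABLE users (id int);")
apply_fp = schema_fingerprint("CREATE TABLE users (id int);\n")
assert plan_fp == apply_fp   # cosmetic difference: fingerprints match

drifted = schema_fingerprint("CREATE TABLE users (id int, email text);")
assert drifted != plan_fp    # real change: apply would abort
```

Because the hash is deterministic, comparing the fingerprint stored in the plan against a freshly computed one is enough to detect any drift between plan time and apply time.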
When you run `pgschema plan`, it captures a snapshot (fingerprint) of the current database schema state. Later, when you run `pgschema apply`, it verifies that the database hasn't changed since the plan was generated. This prevents scenarios where: * Someone else modified the database between your plan and apply * The database drifted from the expected state * Multiple developers are making concurrent schema changes ### How Fingerprinting Works 1. **During Plan**: pgschema calculates a fingerprint of the current database schema 2. **Plan Storage**: The fingerprint is embedded in the plan ```json theme={null} { ... "source_fingerprint": { "hash": "965b1131737c955e24c7f827c55bd78e4cb49a75adfd04229e0ba297376f5085" }, ... } ``` 3. **During Apply**: pgschema recalculates the database fingerprint and compares it with the stored one 4. **Safety Check**: If fingerprints don't match, apply is aborted with an error ### Fingerprint Validation Example ```bash theme={null} # Generate plan (captures current state fingerprint) pgschema plan --host prod.db.com --db myapp --user postgres --file schema.sql --output-json plan.json # Time passes... other changes might occur... # Apply checks fingerprint before proceeding pgschema apply --host prod.db.com --db myapp --user postgres --plan plan.json # ✅ Success: Database state matches plan expectations # ❌ Error: schema fingerprint mismatch detected - the database schema has changed since the plan was generated. 
# # schema fingerprint mismatch - expected: 965b1131737c955e, actual: abc123456789abcd ``` When fingerprint mismatches occur, you need to regenerate the plan: ```bash theme={null} # If apply fails due to fingerprint mismatch pgschema plan --host prod.db.com --db myapp --user postgres --file schema.sql --output-json new-plan.json # Review the updated plan for any unexpected changes diff plan.json new-plan.json # Apply with the new plan pgschema apply --host prod.db.com --db myapp --user postgres --plan new-plan.json ``` ## Online DDL `pgschema` automatically generates migration plans that use online DDL strategies to minimize downtime. When possible, operations like index creation, constraint addition, and schema modifications are executed using PostgreSQL's concurrent and non-blocking features. For detailed information about online DDL patterns and strategies, see the [Online DDL documentation](online-ddl.mdx). # Rollback Source: https://www.pgschema.com/workflow/rollback Rolling back schema changes is inherently risky and may result in data loss. Always test rollback procedures in non-production environments and ensure you have complete database backups before proceeding. The pgschema approach to rollback is to revert your schema file to a previous version and apply the changes using the standard plan/apply workflow. This ensures rollbacks follow the same safety mechanisms as forward migrations. Use git to revert your schema file to the previous working version. ```bash theme={null} # Revert to previous commit (most common case) git checkout HEAD~1 -- schema.sql # Or find and revert to a specific commit git log --oneline schema.sql git checkout <commit-hash> -- schema.sql ``` This restores your schema file to the state before the problematic migration was applied. Create and review a migration plan that will roll back the database to match the reverted schema file.
```bash theme={null} # Generate rollback plan pgschema plan --host localhost --db myapp --user postgres --file schema.sql \ --output-json rollback_plan.json \ --output-human rollback_plan.txt # Review the rollback plan carefully cat rollback_plan.txt ``` The plan will show exactly what changes pgschema will make to roll back the database to the previous schema state. Apply the rollback plan with appropriate safety measures for production environments. ```bash theme={null} # Execute the rollback pgschema apply --host localhost --db myapp --user postgres \ --plan rollback_plan.json \ --lock-timeout 30s # Verify rollback success pgschema dump --host localhost --db myapp --user postgres > post_rollback_state.sql diff schema.sql post_rollback_state.sql ``` # Tenant Schema Reconciliation Source: https://www.pgschema.com/workflow/tenant-schema This workflow helps you apply schema changes consistently across multiple tenant databases in a multi-tenant architecture. When you need to update the schema structure for all tenants, this workflow ensures every tenant database reaches the same desired schema state, even though each tenant may require different migration steps depending on their current state. Create a master schema template that defines the desired state for all tenant schemas. ```bash theme={null} # Dump the template schema from your reference tenant or development environment pgschema dump --host localhost --db myapp --user postgres --schema tenant_template > tenant_schema.sql ``` Review the schema file to ensure it contains all necessary changes and is ready for deployment across all tenants. Generate migration plans for each tenant to see what changes will be applied before execution.
```bash theme={null} # Create a list of all tenant schema names (one per line) printf '%s\n' tenant_001 tenant_002 tenant_003 tenant_test > tenant_list.txt # Preview changes for a single tenant mkdir -p plans pgschema plan --host localhost --db myapp --user postgres \ --schema tenant_001 --file tenant_schema.sql \ --output-json "plans/tenant_001_plan.json" \ --output-human "plans/tenant_001_plan.txt" # Generate plans for all tenants (optional but recommended) while read tenant; do echo "=== Plan for $tenant ===" pgschema plan --host localhost --db myapp --user postgres \ --schema "$tenant" --file tenant_schema.sql \ --output-json "plans/${tenant}_plan.json" \ --output-human "plans/${tenant}_plan.txt" done < tenant_list.txt ``` Review the plans to ensure the changes are expected and safe for all tenants. The JSON plans can be used later with the [`--plan`](/cli/apply#param-plan) flag to ensure exactly the same migration steps are executed across all tenants. Validate the schema changes on a single test tenant before rolling out to all tenants. ```bash theme={null} # Apply using the pre-generated plan pgschema apply --host localhost --db myapp --user postgres \ --schema tenant_test --plan plans/tenant_test_plan.json # Verify the test tenant schema was updated correctly pgschema dump --host localhost --db myapp --user postgres \ --schema tenant_test > test_result.sql # Compare with expected template diff tenant_schema.sql test_result.sql ``` Perform application testing on the test tenant to ensure functionality works as expected with the new schema. Note that each tenant may have different migration steps depending on their current schema state, but all will reach the same final desired state. Apply the schema changes to all production tenant schemas.
```bash theme={null} # Apply using pre-generated plans while read tenant; do echo "Updating schema: $tenant" pgschema apply --host localhost --db myapp --user postgres \ --schema "$tenant" --plan "plans/${tenant}_plan.json" \ --auto-approve --lock-timeout 30s # Check for errors and log results if [ $? -eq 0 ]; then echo "✅ Successfully updated $tenant" echo "$(date): $tenant - SUCCESS" >> migration.log else echo "❌ Failed to update $tenant" echo "$(date): $tenant - FAILED" >> migration.log exit 1 fi # Add small delay between updates to reduce database load sleep 2 done < tenant_list.txt ``` For production environments, consider adding delays between tenant updates, monitoring for issues, and using the [`--lock-timeout`](/cli/apply#param-lock-timeout) flag to prevent long-running locks. Confirm all tenant schemas have been updated to match the desired template state. ```bash theme={null} # Create verification directory mkdir -p verification # Dump all tenant schemas while read tenant; do echo "Verifying $tenant..." pgschema dump --host localhost --db myapp --user postgres \ --schema "$tenant" > "verification/${tenant}.sql" done < tenant_list.txt # Compare each tenant schema against the template while read tenant; do diff tenant_schema.sql "verification/${tenant}.sql" || { echo "❌ Schema mismatch detected in $tenant - does not match template" exit 1 } echo "✅ $tenant matches template schema" done < tenant_list.txt echo "✅ All tenant schemas match the desired template state" ``` All tenants now have the same schema structure matching the template and are ready for application deployment.
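The rollout loop above can also be driven from a small script. This sketch (stdlib only; the CLI flags mirror the commands shown in this workflow) stops at the first failure so later tenants are never touched by a bad migration:

```python theme={null}
import subprocess
import sys
import time

BASE_CMD = ["pgschema", "apply", "--host", "localhost", "--db", "myapp",
            "--user", "postgres", "--auto-approve", "--lock-timeout", "30s"]

def apply_all_tenants(tenants, runner=subprocess.run, delay=2):
    """Apply each tenant's pre-generated plan in order.

    `runner` is injectable so the loop can be tested without a database.
    Returns the list of tenants that were successfully updated.
    """
    updated = []
    for tenant in tenants:
        cmd = BASE_CMD + ["--schema", tenant,
                          "--plan", f"plans/{tenant}_plan.json"]
        result = runner(cmd)
        if result.returncode != 0:
            # Stop immediately: remaining tenants keep their current schema.
            print(f"failed on {tenant}; aborting rollout", file=sys.stderr)
            break
        updated.append(tenant)
        time.sleep(delay)  # throttle to reduce load on the shared database
    return updated
```

Driving the loop from one process makes it easy to add logging, retries, or batching later without changing the per-tenant commands.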