The plan command generates a migration plan to apply a desired schema state to a target database schema. It compares the desired state (from a file) with the current state of a specific schema and shows what changes would be applied.

Overview

The plan command follows infrastructure-as-code principles similar to Terraform:
  1. Read the desired state from a SQL file (with include directive support)
  2. Connect to the target database and analyze the current state of the specified schema
  3. Compare the two states
  4. Generate a detailed migration plan with proper dependency ordering
  5. Display the plan without making any changes

Basic Usage

# Generate plan to apply schema.sql to the target database
pgschema plan --host localhost --db myapp --user postgres --password mypassword --file schema.sql

# Plan with specific schema
pgschema plan --host localhost --db myapp --user postgres --password mypassword --schema myschema --file schema.sql

# Generate JSON output for automation
pgschema plan --host localhost --db myapp --user postgres --password mypassword --file schema.sql --output-json stdout

# Generate SQL migration script
pgschema plan --host localhost --db myapp --user postgres --password mypassword --file schema.sql --output-sql stdout

# Save multiple output formats simultaneously
pgschema plan --host localhost --db myapp --user postgres --password mypassword --file schema.sql \
  --output-human plan.txt --output-json plan.json --output-sql migration.sql

# Disable colored output (useful for scripts or CI/CD)
pgschema plan --host localhost --db myapp --user postgres --password mypassword --file schema.sql --no-color

Connection Options

--host
string
default:"localhost"
Database server host (env: PGHOST)
--port
integer
default:"5432"
Database server port (env: PGPORT)
--db
string
required
Database name (required) (env: PGDATABASE)
--user
string
required
Database user name (required) (env: PGUSER)
--password
string
Database password (optional; can also use the PGPASSWORD env var or a .pgpass file)

You can provide the password in several ways. For example, via a .env file:
# Create .env file with:
# PGHOST=localhost
# PGPORT=5432
# PGDATABASE=myapp
# PGUSER=postgres
# PGPASSWORD=mypassword

pgschema plan --file schema.sql
Password Resolution Order:
  1. Command line --password flag (highest priority)
  2. PGPASSWORD environment variable
  3. .pgpass file in user’s home directory
  4. PostgreSQL will prompt for a password if none is found
See dotenv (.env) for detailed configuration options.
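If you use a .pgpass file, it follows the standard libpq format: one entry per line, with * allowed as a wildcard in any of the first four fields. A minimal setup:

```shell
# ~/.pgpass format: hostname:port:database:username:password
echo 'localhost:5432:myapp:postgres:mypassword' >> ~/.pgpass

# libpq ignores the file unless it is readable only by its owner
chmod 600 ~/.pgpass
```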
--schema
string
default:"public"
Schema name to target for comparison

Plan Options

--file
string
required
Path to desired state SQL schema file
--output-human
string
Output human-readable format to stdout or a file path. Examples:
  • --output-human stdout - Display to terminal
  • --output-human plan.txt - Save to file
--output-json
string
Output JSON format to stdout or a file path. This JSON format is the same format accepted by the apply command for executing migration plans. Examples:
  • --output-json stdout - Display to terminal
  • --output-json plan.json - Save to file
--output-sql
string
Output SQL format to stdout or a file path. Examples:
  • --output-sql stdout - Display to terminal
  • --output-sql migration.sql - Save to file
--no-color
boolean
default:"false"
Disable colored output for human format when writing to stdout. This is useful for:
  • Scripts and automation that need to parse output
  • CI/CD environments that don’t support color codes
  • Redirecting output to files where color codes are unwanted
Note: This flag only affects human format output to stdout. File output and JSON/SQL formats are never colored.

Ignoring Objects

You can exclude specific database objects from migration planning using a .pgschemaignore file. See Ignore (.pgschemaignore) for complete documentation.

Examples

Default Human-Readable Output

pgschema plan --host localhost --db myapp --user postgres --file schema.sql
Plan: 4 to add, 1 to modify, 1 to drop.

Summary by type:
  tables: 2 to add, 1 to modify, 1 to drop
  indexes: 1 to add
  functions: 1 to add

Tables:
  + users
  + posts
  ~ products
    + discount_rate (column)
    - old_price (column)
  - legacy_data

Indexes:
  + idx_users_email

Functions:
  + update_timestamp()

Transaction: true

DDL to be executed:
--------------------------------------------------

CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE
);

CREATE TABLE IF NOT EXISTS posts (
    id SERIAL PRIMARY KEY,
    title TEXT NOT NULL,
    user_id INTEGER REFERENCES users(id)
);

CREATE INDEX idx_users_email ON users(email);

CREATE FUNCTION update_timestamp() RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

ALTER TABLE products ADD COLUMN discount_rate numeric(5,2);

ALTER TABLE products DROP COLUMN old_price;

DROP TABLE legacy_data;

JSON Output for Automation

pgschema plan \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql \
  --output-json stdout
{
  "version": "1.0.0",
  "pgschema_version": "1.0.0", 
  "created_at": "2025-01-15T10:30:00Z",
  "source_fingerprint": {
    "hash": "abc123def456..."
  },
  "diffs": [
    {
      "sql": "CREATE TABLE IF NOT EXISTS users (\n    id SERIAL PRIMARY KEY,\n    name VARCHAR(255) NOT NULL,\n    email VARCHAR(255) UNIQUE\n);",
      "type": "table",
      "operation": "create",
      "path": "public.users",
      "source": {
        "schema": "public",
        "name": "users",
        "type": "BASE_TABLE",
        "columns": [
          {
            "name": "id",
            "position": 1,
            "data_type": "integer",
            "is_nullable": false,
            "default_value": "nextval('users_id_seq'::regclass)"
          },
          {
            "name": "name",
            "position": 2,
            "data_type": "character varying",
            "is_nullable": false,
            "character_maximum_length": 255
          }
        ],
        "constraints": {
          "users_pkey": {
            "schema": "public",
            "table": "users",
            "name": "users_pkey",
            "type": "PRIMARY_KEY",
            "columns": [
              {
                "name": "id",
                "position": 1
              }
            ]
          }
        }
      },
      "can_run_in_transaction": true
    }
  ]
}
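Once saved, the JSON plan can be sliced with jq for automation. A minimal sketch against a trimmed, hypothetical sample of the format shown above:

```shell
# Trimmed sample of the plan JSON format (hypothetical content)
cat > plan.json <<'EOF'
{
  "diffs": [
    {"sql": "CREATE TABLE users (id serial);", "type": "table", "operation": "create", "path": "public.users"},
    {"sql": "DROP TABLE legacy_data;", "type": "table", "operation": "drop", "path": "public.legacy_data"}
  ]
}
EOF

# Extract the ordered DDL statements for further processing
jq -r '.diffs[].sql' plan.json

# List only the destructive operations for review
jq -r '.diffs[] | select(.operation == "drop") | .path' plan.json
```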

SQL Migration Script

pgschema plan \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql \
  --output-sql stdout
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE
);

CREATE TABLE IF NOT EXISTS posts (
    id SERIAL PRIMARY KEY,
    title TEXT NOT NULL,
    user_id INTEGER REFERENCES users(id)
);

CREATE INDEX idx_users_email ON users(email);

CREATE FUNCTION update_timestamp() RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

ALTER TABLE products ADD COLUMN discount_rate numeric(5,2);

ALTER TABLE products DROP COLUMN old_price;

DROP TABLE legacy_data;

Plan for Specific Schema

pgschema plan \
  --host localhost \
  --db multi_tenant \
  --user postgres \
  --schema tenant1 \
  --file tenant_schema.sql

Use Cases

Pre-deployment Validation

# Check what changes will be applied before deployment
pgschema plan \
  --host prod-db \
  --db myapp \
  --user readonly \
  --file new_schema.sql

# If plan looks good, proceed with apply
pgschema apply \
  --host prod-db \
  --db myapp \
  --user deployer \
  --file new_schema.sql

CI/CD Integration

# GitHub Actions example
- name: Validate Schema Changes
  run: |
    pgschema plan \
      --host ${{ secrets.DB_HOST }} \
      --db ${{ secrets.DB_NAME }} \
      --user ${{ secrets.DB_USER }} \
      --file schema/proposed.sql \
      --output-json plan.json \
      --no-color
    
    # Fail if the plan contains destructive changes
    # (scans the "diffs" array of the JSON plan format documented above)
    if jq -e '[.diffs[] | select(.operation == "drop")] | length > 0' plan.json > /dev/null; then
      echo "Warning: Destructive changes detected!"
      exit 1
    fi

Change Tracking

# Generate plan and save for audit
DATE=$(date +%Y%m%d_%H%M%S)
pgschema plan \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql \
  --output-sql "migrations/plan_${DATE}.sql"

Multiple Output Formats

You can generate multiple output formats simultaneously:
# Save all formats for comprehensive documentation
pgschema plan \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql \
  --output-human plan.txt \
  --output-json plan.json \
  --output-sql migration.sql

# Display human-readable format and save JSON for automation
pgschema plan \
  --host localhost \
  --db myapp \
  --user postgres \
  --file schema.sql \
  --output-human stdout \
  --output-json automation/plan.json \
  --no-color
Note: Only one output format can use stdout. If no output flags are specified, the command defaults to human-readable output to stdout with colors enabled.

Comparison Direction

The plan command is unidirectional: it always plans changes from the current state (database) to the desired state (file).
Current State (Database) → Desired State (File)
This ensures:
  • Consistent infrastructure-as-code workflow
  • Clear source of truth (the file)
  • Predictable change direction

Include Directive Support

The plan command supports include directives in schema files, allowing you to organize your schema across multiple files:
-- main.sql
\i tables/users.sql
\i tables/posts.sql
\i functions/triggers.sql
The include processor will resolve all includes relative to the directory containing the main schema file.
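A split layout might look like this (hypothetical file names; the pgschema invocation is shown commented out since it needs a live database):

```shell
# Includes resolve relative to schema/, the directory containing main.sql
mkdir -p schema/tables schema/functions
cat > schema/main.sql <<'EOF'
\i tables/users.sql
\i tables/posts.sql
\i functions/triggers.sql
EOF

# Plan against the entry-point file; included files are read automatically:
# pgschema plan --host localhost --db myapp --user postgres --file schema/main.sql
```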