Migration Guide

UberLotto v2 uses Supabase migrations to manage database schema changes. This guide covers the migration directory structure, existing migrations, and how to create new ones.

Migration Directory

uberlotto-v2/
└── supabase/
    └── migrations/
        ├── 20251113002700_create_security_tables.sql
        ├── 20251113002700_create_security_tables_rollback.sql
        └── 20260218000000_payment_logging_system.sql

Migrations follow the Supabase CLI naming convention:

<YYYYMMDDHHMMSS>_<description>.sql

Existing Migrations

20251113002700_create_security_tables.sql

Date: 2025-11-13
Purpose: Plisio Payment Gateway Hardening — replay attack prevention and security logging.

Tables Created

| Table | Primary Key | Description |
| --- | --- | --- |
| webhook_nonces | id BIGSERIAL | Replay attack prevention via SHA-256 nonce hashes |
| security_events | id BIGSERIAL | Security audit log with GDPR-compliant data masking |

WARNING

Both tables use BIGSERIAL primary keys (auto-incrementing bigint), not UUID. This differs from the other tables in the database, which use BIGINT GENERATED BY DEFAULT AS IDENTITY.
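
For orientation, here is a minimal sketch of the webhook_nonces shape implied by the index list below. Only the BIGSERIAL primary key is confirmed above; the remaining column names and types are assumptions, so treat the migration file as authoritative.

sql
-- Sketch only: columns beyond the BIGSERIAL key are inferred from the
-- index names below, not copied from the migration.
CREATE TABLE IF NOT EXISTS webhook_nonces (
  id          BIGSERIAL PRIMARY KEY,
  nonce_hash  TEXT NOT NULL,                      -- SHA-256 hash used for duplicate detection
  txn_id      TEXT,                               -- optional payment transaction reference
  expires_at  TIMESTAMPTZ NOT NULL,               -- rows past this point are cleaned up
  created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
);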

Indexes Created

webhook_nonces:

  • idx_webhook_nonces_nonce_hash — fast duplicate lookup
  • idx_webhook_nonces_expires_at — efficient cleanup
  • idx_webhook_nonces_txn_id — partial index on txn_id
  • idx_webhook_nonces_created_at — descending time order

security_events:

  • idx_security_events_event_type
  • idx_security_events_severity
  • idx_security_events_created_at — descending time order
  • idx_security_events_txn_id — partial index
  • idx_security_events_client_ip — partial index
  • idx_security_events_status
  • idx_security_events_source
  • idx_security_events_type_created — composite index (event_type, created_at DESC)
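
As an illustration of the partial, descending, and composite forms listed above, the statements could look like the following. The index names and columns come from the list; the exact definitions in the migration file may differ.

sql
-- Partial index: only rows that actually have a txn_id are indexed.
CREATE INDEX IF NOT EXISTS idx_webhook_nonces_txn_id
  ON webhook_nonces (txn_id)
  WHERE txn_id IS NOT NULL;

-- Descending index: matches newest-first queries on created_at.
CREATE INDEX IF NOT EXISTS idx_security_events_created_at
  ON security_events (created_at DESC);

-- Composite index: filter by event_type, then order newest first.
CREATE INDEX IF NOT EXISTS idx_security_events_type_created
  ON security_events (event_type, created_at DESC);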

RLS Policies

Both tables enable Row Level Security with:

  • Service role full access — auth.role() = 'service_role'
  • Admin read access (security_events only) — authenticated users with role = 'admin' in JWT

The migration uses conditional policy creation (an existence check against pg_policies, since CREATE POLICY has no IF NOT EXISTS clause) to safely handle re-runs.
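
For reference, the admin read policy on security_events might look like the sketch below. The policy name and the JWT claim path are assumptions; only the "role = 'admin' in JWT" condition is stated above.

sql
-- Sketch: allow authenticated users whose JWT carries role = 'admin'
-- to read security_events. Policy name and claim path are assumptions.
CREATE POLICY "Admin read access" ON security_events
  FOR SELECT
  TO authenticated
  USING ((auth.jwt() ->> 'role') = 'admin');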

Database Functions Created

| Function | Purpose | Schedule |
| --- | --- | --- |
| cleanup_expired_nonces() | Deletes nonces past their expires_at | Every 5 minutes |
| cleanup_old_security_events() | Removes events older than 1 year | Monthly |
| get_recent_security_events() | Queries recent events with filtering | On demand |

All functions are created with SECURITY DEFINER, and EXECUTE on them is granted to the service_role.
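
As a sketch of that SECURITY DEFINER plus GRANT pattern, a minimal cleanup_expired_nonces() could look like this; the actual function body in the migration may differ.

sql
-- Minimal sketch of the cleanup pattern; not the migration's exact body.
CREATE OR REPLACE FUNCTION cleanup_expired_nonces()
RETURNS void
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
BEGIN
  DELETE FROM webhook_nonces WHERE expires_at < NOW();
END;
$$;

GRANT EXECUTE ON FUNCTION cleanup_expired_nonces() TO service_role;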

Rollback

A companion rollback script is provided at:

supabase/migrations/20251113002700_create_security_tables_rollback.sql

It drops all functions and tables created by the migration:

sql
DROP FUNCTION IF EXISTS get_recent_security_events(TEXT, TEXT, INTEGER, INTEGER) CASCADE;
DROP FUNCTION IF EXISTS cleanup_old_security_events() CASCADE;
DROP FUNCTION IF EXISTS cleanup_expired_nonces() CASCADE;
DROP TABLE IF EXISTS security_events CASCADE;
DROP TABLE IF EXISTS webhook_nonces CASCADE;

DANGER

Running the rollback deletes all security event logs and webhook nonces permanently. Back up data before running.


20260218000000_payment_logging_system.sql

Date: 2026-02-18
Purpose: Unified Payment Transaction Logging — replaces wallet_load_credit and pending_transactions with a single payment_transactions table and an immutable payment_transaction_events audit log.

What It Drops

| Object | Type | Reason |
| --- | --- | --- |
| wallet_load_credit | Table | Replaced by payment_transactions |
| pending_transactions | Table | Replaced by payment_transactions |
| pending_transactions_id_seq | Sequence | Orphaned after table drop |

DANGER

The migration drops the legacy tables. Existing data is migrated into payment_transactions before the drop — see Data Migration Logic below.

Tables Created

| Table | Primary Key | Description |
| --- | --- | --- |
| payment_transactions | id UUID (gen_random_uuid) | Unified payment records for MoonPay + Plisio |
| payment_transaction_events | id BIGINT (GENERATED ALWAYS AS IDENTITY) | Immutable status change audit log |
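
For orientation, a rough sketch of payment_transactions follows. The column names are inferred from the indexes, triggers, and data migration mappings described in this guide; the types and constraints are assumptions, so consult the migration file for the real definition.

sql
-- Rough sketch; columns inferred from this guide, types are assumptions.
CREATE TABLE IF NOT EXISTS payment_transactions (
  id                   UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  provider             TEXT NOT NULL,               -- 'moonpay' or 'plisio'
  status               TEXT NOT NULL DEFAULT 'pending',
  order_number         TEXT,
  provider_txn_id      TEXT,
  provider_invoice_url TEXT,
  user_id              UUID,
  user_email           TEXT NOT NULL,
  user_name            TEXT,
  user_lastname        TEXT,
  amount               NUMERIC,
  amount_usd           NUMERIC,
  crypto_currency      TEXT,
  conversion_rate      NUMERIC,
  wallet_address       TEXT,
  tx_hash              TEXT,
  expires_at           TIMESTAMPTZ,
  created_at           TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at           TIMESTAMPTZ NOT NULL DEFAULT NOW()
);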

Indexes Created (10)

payment_transactions:

  • idx_pt_user_email — user email lookups
  • idx_pt_user_id — partial index on user_id (WHERE NOT NULL)
  • idx_pt_active_status — partial index on active statuses (pending/processing/awaiting_confirmation)
  • idx_pt_provider_status — composite (provider, status)
  • idx_pt_created_at — created_at DESC
  • idx_pt_provider_txn_id — partial index (WHERE NOT NULL)
  • idx_pt_order_number — order number lookups
  • idx_pt_expires_pending — partial index on expires_at (WHERE status = 'pending')

payment_transaction_events:

  • idx_pte_transaction_id — event lookups by transaction
  • idx_pte_txn_created — composite (transaction_id, created_at DESC)
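
The active-status partial index is worth calling out: it keeps lookups for in-flight payments fast even as completed rows accumulate. In sketch form (the indexed column list is an assumption; the status predicate mirrors the list above):

sql
-- Sketch of the active-status partial index; indexed columns are an assumption.
CREATE INDEX IF NOT EXISTS idx_pt_active_status
  ON payment_transactions (status)
  WHERE status IN ('pending', 'processing', 'awaiting_confirmation');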

Functions Created

| Function | Type | Description |
| --- | --- | --- |
| set_updated_at() | Trigger | Auto-sets updated_at on row update |
| fn_log_transaction_status_change() | Trigger | Logs status transitions to events table; reads app.changed_by and app.change_reason session vars |
| fn_guard_max_events_per_transaction() | Trigger | Prevents more than 50 events per transaction |
| fn_guard_immutable_completed() | Trigger | Prevents reopening completed/refunded transactions |
| expire_stale_transactions() | Utility | Cron: marks pending transactions past expires_at as expired |
| archive_old_events(interval) | Utility | Cleanup: removes events older than specified interval (default 90 days) |
| get_transaction_summary(email) | Utility | Returns aggregated stats for admin dashboards |
| check_cancellation_rate(email) | Utility | Anti-abuse: returns FALSE if 10+ cancels/hour |
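
Because fn_log_transaction_status_change() reads the app.changed_by and app.change_reason session variables, callers that change a status can set them in the same transaction. A sketch follows; the variable names come from this guide, while the example values and target id are illustrative.

sql
-- Sketch: attribute a status change to an actor and reason so the trigger
-- can record them in payment_transaction_events. Values are illustrative.
BEGIN;
SELECT set_config('app.changed_by', 'admin:ops@example.com', true);   -- transaction-local
SELECT set_config('app.change_reason', 'manual refund', true);

UPDATE payment_transactions
SET status = 'refunded'
WHERE id = '00000000-0000-0000-0000-000000000000';
COMMIT;

The utility functions are invoked directly, e.g. SELECT expire_stale_transactions(); from a scheduled job.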

Triggers Created

| Trigger | Table | Event | Function |
| --- | --- | --- | --- |
| trg_payment_transactions_updated_at | payment_transactions | BEFORE UPDATE | set_updated_at() |
| trg_log_status_change | payment_transactions | AFTER UPDATE (status) | fn_log_transaction_status_change() |
| trg_guard_max_events | payment_transaction_events | BEFORE INSERT | fn_guard_max_events_per_transaction() |
| trg_guard_immutable_completed | payment_transactions | BEFORE UPDATE (status) | fn_guard_immutable_completed() |
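
For reference, the wiring for the status-change logger likely resembles the sketch below. The trigger and function names come from the table above; the WHEN clause is an assumption.

sql
-- Sketch of the AFTER UPDATE (status) wiring; the exact definition may differ.
CREATE TRIGGER trg_log_status_change
  AFTER UPDATE OF status ON payment_transactions
  FOR EACH ROW
  WHEN (OLD.status IS DISTINCT FROM NEW.status)
  EXECUTE FUNCTION fn_log_transaction_status_change();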

RLS Policies

Both tables enable Row Level Security with:

  • Service role full access — service_role bypasses RLS automatically
  • Authenticated user read — users can read their own transactions (matched by user_email = auth.jwt()->>'email')
  • Authenticated user events read — users can read events for their own transactions (via subquery on payment_transactions)
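
A sketch of the two authenticated-user read policies follows. The email match on payment_transactions is stated above; the policy names and the subquery form for events are assumptions.

sql
-- Users can read their own payment rows (email claim match per this guide).
CREATE POLICY "Users read own transactions" ON payment_transactions
  FOR SELECT
  TO authenticated
  USING (user_email = (auth.jwt() ->> 'email'));

-- Users can read events for their own transactions; the subquery form
-- is an assumption about how the policy is written.
CREATE POLICY "Users read own transaction events" ON payment_transaction_events
  FOR SELECT
  TO authenticated
  USING (
    transaction_id IN (
      SELECT id FROM payment_transactions
      WHERE user_email = (auth.jwt() ->> 'email')
    )
  );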

Data Migration Logic

The migration includes data migration from the legacy tables:

  1. From pending_transactions: Maps customer_email → user_email, customer_name → user_name, customer_lastname → user_lastname, customer_id → user_id, plisio_txn_id → provider_txn_id, plisio_invoice_url → provider_invoice_url, amount_usd → amount. Sets provider = 'plisio' for all migrated rows.

  2. From wallet_load_credit: Maps user_email, user_name, user_lastname, user_wallet_address → wallet_address, tx_hash, tx_amount → amount, tx_currency → crypto_currency, tx_amount_usd → amount_usd, tx_conversion_rate → conversion_rate, tx_status → status (mapped: 'confirmed' → 'completed'). Sets provider = 'plisio' and generates order_number as 'LEGACY-' || id for each row.

TIP

Both data migration steps use INSERT ... SELECT for atomicity. If the legacy tables are empty, the migration still succeeds — the INSERTs simply insert zero rows.
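
A condensed sketch of step 1 above, using only the column mappings listed there; the real statement in the migration covers additional columns and status handling.

sql
-- Condensed sketch of the pending_transactions step; the migration's
-- actual INSERT ... SELECT maps more columns than shown here.
INSERT INTO payment_transactions
  (user_email, user_name, user_lastname, user_id,
   provider_txn_id, provider_invoice_url, amount, provider)
SELECT
  customer_email, customer_name, customer_lastname, customer_id,
  plisio_txn_id, plisio_invoice_url, amount_usd, 'plisio'
FROM pending_transactions;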

No Rollback Script

Unlike the security tables migration, this migration does not include a rollback script because the data migration is destructive. If a rollback is needed, restore from a database backup taken before running the migration.

Tables Without Migrations

The following tables were created directly in the Supabase Dashboard (no migration files exist):

| Table | Notes |
| --- | --- |
| jackpots | Created manually; stores current jackpot amounts |
| past_drawings | Created manually; stores historical drawing results |

TIP

If you need to recreate these tables in a new environment, refer to the table definitions in the Supabase Database Guide.

How to Run Migrations

Prerequisites

Install the Supabase CLI:

bash
npm install -g supabase

Link your project:

bash
supabase link --project-ref iwmsnkgimodfucvwecvc

Apply Migrations

Push all pending migrations to the remote database:

bash
supabase db push

To see migration status:

bash
supabase migration list

Apply a Specific Migration Manually

If you need to run a migration manually (e.g., in the Supabase SQL Editor):

  1. Open Supabase Dashboard > SQL Editor
  2. Paste the contents of the migration file
  3. Click Run

Creating New Migrations

Using the Supabase CLI

Generate a new migration file:

bash
supabase migration new <description>

This creates a timestamped file in supabase/migrations/:

supabase/migrations/20260210120000_<description>.sql

Migration Best Practices

  1. Use IF NOT EXISTS / IF EXISTS — Makes migrations idempotent and safe to re-run

  2. Always create a rollback script — Name it <timestamp>_<description>_rollback.sql

  3. Include verification queries — Add commented-out queries at the bottom for testing:

    sql
    -- Verify: SELECT table_name FROM information_schema.tables
    --   WHERE table_schema = 'public' AND table_name = 'your_table';
  4. Enable RLS immediately — Every new table should have RLS enabled with at minimum a service role policy:

    sql
    ALTER TABLE your_table ENABLE ROW LEVEL SECURITY;
    
    CREATE POLICY "Service role full access" ON your_table
      FOR ALL USING (auth.role() = 'service_role');
  5. Add indexes for query patterns — Index columns used in WHERE, ORDER BY, and JOIN clauses

  6. Add table/column comments — Document purpose directly in the schema:

    sql
    COMMENT ON TABLE your_table IS 'Description of what this table stores';
    COMMENT ON COLUMN your_table.column IS 'Description of this column';
  7. Test locally first — Use supabase start to run a local Supabase instance and validate migrations before pushing to production

Migration Template

sql
-- =====================================================================
-- Migration: <Description>
-- Purpose: <What this migration does>
-- Date: <YYYY-MM-DD>
-- =====================================================================

-- Create table
CREATE TABLE IF NOT EXISTS your_table (
  id BIGSERIAL PRIMARY KEY,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
  -- ... columns
);

-- Indexes
CREATE INDEX IF NOT EXISTS idx_your_table_created_at
  ON your_table(created_at DESC);

-- RLS
ALTER TABLE your_table ENABLE ROW LEVEL SECURITY;

DO $$
BEGIN
  IF NOT EXISTS (
    SELECT 1 FROM pg_policies
    WHERE schemaname = 'public'
    AND tablename = 'your_table'
    AND policyname = 'Service role full access'
  ) THEN
    CREATE POLICY "Service role full access" ON your_table
      FOR ALL USING (auth.role() = 'service_role');
  END IF;
END $$;

-- Comments
COMMENT ON TABLE your_table IS 'Description';

-- Verification (run manually)
-- SELECT table_name FROM information_schema.tables
--   WHERE table_schema = 'public' AND table_name = 'your_table';
