Database Setup
PostgreSQL configuration, schema migrations, connection pooling, and backup strategies for HelloJohn.
HelloJohn uses PostgreSQL as its only required data store. Redis is optional but recommended for rate limiting and session caching at scale.
PostgreSQL requirements
- Version: PostgreSQL 14 or later (16 recommended)
- Extensions: None required (no PostGIS, no pgvector)
- Encoding: UTF-8
- Permissions: The database user needs CREATE, SELECT, INSERT, UPDATE, and DELETE privileges
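Once the hellojohn role exists (see "Creating the database" below), these privileges can be granted explicitly. A sketch, assuming HelloJohn's tables live in the default public schema:

```sql
-- Grant the required privileges on existing and future tables
-- (assumes the default "public" schema; run as a superuser)
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO hellojohn;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO hellojohn;
-- CREATE on the schema lets migrations add new tables
GRANT CREATE ON SCHEMA public TO hellojohn;
```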
Creating the database
-- Connect as a superuser (e.g., postgres)
CREATE USER hellojohn WITH PASSWORD 'strongpassword';
CREATE DATABASE hellojohn OWNER hellojohn;
-- Optional: restrict access
REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO hellojohn;

Connection string format
postgres://USER:PASSWORD@HOST:PORT/DATABASE?sslmode=require

| Parameter | Description |
|---|---|
| sslmode=disable | No TLS (local dev only) |
| sslmode=require | TLS without certificate verification |
| sslmode=verify-ca | TLS with CA certificate verification |
| sslmode=verify-full | TLS with CA and hostname verification (recommended for production) |
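A deploy script can refuse insecure sslmode values before they reach production. A minimal sketch; the helper name is illustrative, not part of HelloJohn:

```shell
#!/bin/sh
# Sketch: pull the sslmode parameter out of a DATABASE_URL so a deploy
# script can flag insecure values before starting the app.
sslmode_of() {
  case "$1" in
    *sslmode=disable*)     echo "disable" ;;
    *sslmode=verify-full*) echo "verify-full" ;;
    *sslmode=verify-ca*)   echo "verify-ca" ;;
    *sslmode=require*)     echo "require" ;;
    *)                     echo "default" ;;  # libpq's own default is "prefer"
  esac
}

mode=$(sslmode_of "postgres://hellojohn:secret@db.yourdomain.com:5432/hellojohn?sslmode=verify-full")
echo "sslmode: $mode"
```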
# Local dev
DATABASE_URL=postgres://hellojohn:secret@localhost:5432/hellojohn?sslmode=disable
# Production with TLS
DATABASE_URL=postgres://hellojohn:secret@db.yourdomain.com:5432/hellojohn?sslmode=verify-full
# Supabase
DATABASE_URL=postgres://postgres:[password]@db.[project].supabase.co:5432/postgres?sslmode=require
# Neon
DATABASE_URL=postgres://[user]:[password]@[endpoint].neon.tech/[dbname]?sslmode=require

Migrations
HelloJohn runs migrations automatically on startup. To run migrations manually:
# With the binary
hellojohn migrate
# With Docker
docker compose exec hellojohn hellojohn migrate
# Check migration status
hellojohn migrate status

Migration files are embedded in the binary — there are no external migration files to manage.
Migration safety
HelloJohn migrations are designed to be:
- Non-destructive: Old columns are deprecated before being dropped
- Zero-downtime: Additive changes only in minor versions
- Idempotent: Safe to run multiple times
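As an illustration of this non-destructive, idempotent style (the table and column names here are hypothetical, not HelloJohn's actual schema), a column rename is split across releases:

```sql
-- Minor release (expand): add the new column; IF NOT EXISTS makes this idempotent
ALTER TABLE accounts ADD COLUMN IF NOT EXISTS display_name text;
-- Backfill in the same release; safe to re-run
UPDATE accounts SET display_name = username WHERE display_name IS NULL;

-- Later major release (contract): drop the deprecated column
-- ALTER TABLE accounts DROP COLUMN IF EXISTS username;
```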
Connection pooling
For high-traffic deployments, use PgBouncer in front of PostgreSQL.
PgBouncer configuration
/etc/pgbouncer/pgbouncer.ini:
[databases]
hellojohn = host=127.0.0.1 port=5432 dbname=hellojohn
[pgbouncer]
listen_port = 6432
listen_addr = 127.0.0.1
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 200
default_pool_size = 20

/etc/pgbouncer/userlist.txt:
"hellojohn" "strongpassword"

Then set:
DATABASE_URL=postgres://hellojohn:strongpassword@127.0.0.1:6432/hellojohn?sslmode=disable
DATABASE_MAX_CONNECTIONS=20

Managed poolers
| Service | Pooler |
|---|---|
| Supabase | Built-in PgBouncer (Transaction mode) |
| Neon | Built-in serverless connection pooler |
| RDS | RDS Proxy |
| Railway | Direct connection (no pooler needed at small scale) |
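Whichever pooler you use, the connection budget should nest across tiers: the app's pool fits inside the pooler's pool, which fits inside Postgres's max_connections. A sanity-check sketch using the example values from this guide:

```shell
#!/bin/sh
# Sketch: verify that connection limits nest correctly across tiers.
# Values mirror the example configs in this guide; adjust for your deployment.
PG_MAX_CONNECTIONS=100   # postgresql.conf: max_connections
POOL_SIZE=20             # pgbouncer.ini: default_pool_size
APP_MAX=20               # HelloJohn: DATABASE_MAX_CONNECTIONS

if [ "$APP_MAX" -le "$POOL_SIZE" ] && [ "$POOL_SIZE" -le "$PG_MAX_CONNECTIONS" ]; then
  echo "connection limits nest correctly"
else
  echo "warning: a tier can exhaust the one below it" >&2
  exit 1
fi
```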
Performance tuning
Recommended PostgreSQL settings for a 2–4 vCPU / 4–8 GB RAM server:
# postgresql.conf
max_connections = 100
shared_buffers = 1GB
effective_cache_size = 3GB
maintenance_work_mem = 256MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 4MB
min_wal_size = 1GB
max_wal_size = 4GB

Backups
Continuous archiving (recommended)
Use WAL-G or pgBackRest for continuous WAL archiving to S3/GCS/Azure.
# Daily base backup with WAL-G
WALG_S3_PREFIX=s3://your-bucket/hellojohn-backups \
AWS_ACCESS_KEY_ID=... \
AWS_SECRET_ACCESS_KEY=... \
wal-g backup-push /var/lib/postgresql/data

pg_dump snapshots
For simpler setups, a daily pg_dump to S3 is sufficient for most small deployments:
#!/bin/bash
# /etc/cron.daily/hellojohn-backup
set -e
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="hellojohn_${TIMESTAMP}.dump"
# Assumes credentials in ~/.pgpass (cron has no TTY for a password prompt)
pg_dump -Fc -h localhost -U hellojohn hellojohn > "/tmp/${BACKUP_FILE}"
aws s3 cp "/tmp/${BACKUP_FILE}" "s3://your-bucket/backups/${BACKUP_FILE}"
rm "/tmp/${BACKUP_FILE}"
# Keep last 30 days
aws s3 ls s3://your-bucket/backups/ | \
awk '{print $4}' | \
sort | head -n -30 | \
xargs -I{} aws s3 rm "s3://your-bucket/backups/{}"

Restore from pg_dump
# Drop and recreate the database
dropdb hellojohn
createdb -O hellojohn hellojohn
# Restore
pg_restore -h localhost -U hellojohn -d hellojohn hellojohn_20240101_120000.dump
# Run migrations to apply any pending changes
hellojohn migrate

Monitoring
Key metrics to watch:
| Metric | Warning | Critical |
|---|---|---|
| Active connections | >80% of max_connections | >90% |
| Query time (p99) | >200ms | >1s |
| Dead tuples | >10,000 per table | >100,000 |
| Cache hit rate | <95% | <90% |
| Replication lag | >30s | >5min |
Use pg_stat_statements, Datadog, or Grafana + postgres_exporter for monitoring.
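The cache hit rate and dead-tuple counts in the table above can be read straight from PostgreSQL's statistics views; a sketch of the underlying queries:

```sql
-- Cache hit rate across all databases (warn below 95%, critical below 90%)
SELECT round(100.0 * sum(blks_hit) / nullif(sum(blks_hit) + sum(blks_read), 0), 2)
  AS cache_hit_pct
FROM pg_stat_database;

-- Tables with the most dead tuples (candidates for VACUUM)
SELECT relname, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```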
Managed PostgreSQL recommendations
| Provider | Notes |
|---|---|
| Supabase | Great DX, built-in connection pooler, real-time possible |
| Neon | Serverless autoscaling, branching for staging |
| AWS RDS | Enterprise-grade, Multi-AZ, snapshots built in |
| Railway | Simple, good for small projects |
| Fly.io Postgres | Co-located with app, low latency |
For most teams, Supabase or Neon is the easiest managed option; RDS is preferred in compliance-heavy environments (HIPAA, SOC 2).