# Indexer Deployment Guide

Comprehensive deployment guide for the Seesaw Indexer service.

## Overview
The indexer is a critical infrastructure component that:
- Subscribes to Solana transactions via WebSocket
- Processes and indexes protocol events
- Maintains a PostgreSQL database of market state
- Serves REST APIs for clients
## Deployment Requirements

### Hardware Requirements
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 2 cores | 4 cores |
| Memory | 2 GB | 4 GB |
| Storage | 20 GB | 100 GB |
| Network | 100 Mbps | 1 Gbps |
### Software Requirements
| Dependency | Version | Notes |
|---|---|---|
| Node.js | 20.x | LTS version |
| PostgreSQL | 14+ | `max_connections` must cover the configured pool size |
| Docker | 24+ | Optional, for containerized deployment |
## Single vs Multi-Instance Deployment

### Critical Warning: In-Memory Storage Limitations

The indexer uses in-memory storage for two security-critical components:

1. **Nonce tracking** (`src/middleware/auth.ts`)
   - Prevents replay attacks on authenticated endpoints
   - Stores used nonces with expiration timestamps
2. **Rate limiting** (`src/utils/rateLimiter.ts`)
   - Prevents API abuse and DoS attacks
   - Tracks request counts per client IP
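The in-memory pattern being described can be sketched as follows. This is an illustrative reconstruction, not the actual `src/middleware/auth.ts` code; the class and method names are assumptions:

```typescript
// Illustrative sketch of an in-memory nonce store (hypothetical names;
// not the actual src/middleware/auth.ts implementation).
class InMemoryNonceStore {
  private used = new Map<string, number>(); // nonce -> expiry timestamp (ms)

  // Returns true the first time a nonce is seen, false on any replay.
  checkAndMark(nonce: string, expirationMs: number, now = Date.now()): boolean {
    this.prune(now);
    if (this.used.has(nonce)) return false; // replay detected
    this.used.set(nonce, now + expirationMs);
    return true;
  }

  // Drop expired nonces so the map does not grow without bound.
  private prune(now: number): void {
    for (const [nonce, expiry] of this.used) {
      if (expiry <= now) this.used.delete(nonce);
    }
  }
}
```

Because the `Map` lives in process memory, a second indexer process starts with its own empty store, which is exactly why the multi-instance deployment below is vulnerable.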
### Security Implications by Deployment Model

#### Single-Instance Deployment (Safe)

```
                ┌─────────────────┐
All requests →  │     Indexer     │
                │    (single)     │
                │                 │
                │ ┌─────────────┐ │
                │ │ Nonce Store │ │  ← All nonces in one place
                │ └─────────────┘ │
                │ ┌─────────────┐ │
                │ │ Rate Limiter│ │  ← All counts in one place
                │ └─────────────┘ │
                └─────────────────┘
```
- All requests hit the same nonce store and rate limiter
- Replay attacks are correctly detected and blocked
- Rate limits are accurately enforced
#### Multi-Instance Deployment (Vulnerable)

```
                ┌─────────────────┐
Request A →     │   Indexer #1    │  ← Has nonce X
                │ ┌─────────────┐ │
                │ │ Nonce: {X}  │ │
                │ └─────────────┘ │
                └─────────────────┘
                ┌─────────────────┐
Replay of A →   │   Indexer #2    │  ← Does NOT have nonce X
                │ ┌─────────────┐ │     (replay succeeds!)
                │ │ Nonce: {}   │ │
                │ └─────────────┘ │
                └─────────────────┘
```
Vulnerabilities:

- **Replay attacks**: a request validated on instance #1 can be replayed against instance #2, which has never seen its nonce
- **Rate limit bypass**: the effective limit is the configured limit × the number of instances (e.g. 3 instances with a 100 req/min limit allow up to 300 req/min per client)
### Deployment Recommendations
| Scenario | Recommendation |
|---|---|
| Development | Single instance |
| Staging | Single instance |
| Production (current) | Single instance only |
| Production (scaled) | Single instance OR implement Redis-backed storage |
## Single-Instance Deployment

### Docker Compose

```yaml
version: '3.8'

services:
  indexer:
    image: seesaw/indexer:latest
    deploy:
      replicas: 1 # CRITICAL: Do not increase without Redis
    ports:
      - '3001:3001'
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:pass@postgres:5432/seesaw
      - SEESAW_PROGRAM_ID=SEEsawgSrxRsgtKRbaThZFEKrVqX3Y64hDipTWyi8F8
      - SOLANA_RPC_URL=https://your-rpc-provider.com
      - SOLANA_WS_URL=wss://your-rpc-provider.com
      - TRUST_PROXY=true # If behind load balancer
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ['CMD', 'wget', '--spider', '-q', 'http://localhost:3001/health']
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: seesaw
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U user -d seesaw']
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
```
### Kubernetes (Single Replica)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: seesaw-indexer
spec:
  replicas: 1 # CRITICAL: Do not increase without Redis
  strategy:
    type: Recreate # Ensures only one instance during updates
  selector:
    matchLabels:
      app: seesaw-indexer
  template:
    metadata:
      labels:
        app: seesaw-indexer
    spec:
      containers:
        - name: indexer
          image: seesaw/indexer:latest
          ports:
            - containerPort: 3001
          env:
            - name: NODE_ENV
              value: 'production'
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: indexer-secrets
                  key: database-url
            - name: SEESAW_PROGRAM_ID
              valueFrom:
                configMapKeyRef:
                  name: indexer-config
                  key: program-id
            - name: TRUST_PROXY
              value: 'true'
          livenessProbe:
            httpGet:
              path: /health
              port: 3001
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /ready
              port: 3001
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              memory: '512Mi'
              cpu: '250m'
            limits:
              memory: '2Gi'
              cpu: '1000m'
```
## Environment Variables

### Required

| Variable | Description | Example |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | `postgresql://user:pass@host:5432/db` |
| `SEESAW_PROGRAM_ID` | Solana program address | `SEEsawgSrxRsgtKRbaThZFEKrVqX3Y64hDipTWyi8F8` |
| `SOLANA_RPC_URL` | Solana RPC endpoint | `https://api.mainnet-beta.solana.com` |
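A startup-time guard for these variables might look like the following sketch; `requireEnv` is a hypothetical helper, not part of the indexer's actual configuration code:

```typescript
// Hypothetical startup validation: fail fast if a required variable is unset.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env,
): Record<string, string> {
  const missing = names.filter((n) => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return Object.fromEntries(names.map((n) => [n, env[n] as string]));
}

// const cfg = requireEnv(['DATABASE_URL', 'SEESAW_PROGRAM_ID', 'SOLANA_RPC_URL']);
```

Failing at boot with an explicit list of missing names is much easier to debug than a connection error minutes later.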
### Recommended for Production

| Variable | Default | Description |
|---|---|---|
| `NODE_ENV` | `development` | Set to `production` |
| `SOLANA_WS_URL` | (derived) | WebSocket endpoint |
| `TRUST_PROXY` | `false` | Set `true` if behind a reverse proxy |
| `LOG_LEVEL` | `info` | Log verbosity |
| `SENTRY_DSN` | (empty) | Error tracking |
### Security Configuration

| Variable | Default | Description |
|---|---|---|
| `CORS_ALLOWED_ORIGINS` | (empty) | Comma-separated allowed origins |
| `RATE_LIMIT_MAX_REQUESTS` | `100` | Max requests per window |
| `RATE_LIMIT_WINDOW_MS` | `60000` | Rate limit window (ms) |
### Database Tuning

| Variable | Default | Description |
|---|---|---|
| `DB_POOL_SIZE` | `10` | Connection pool size |
| `DB_POOL_TIMEOUT` | `30` | Pool timeout (seconds) |
| `DB_CONNECT_TIMEOUT` | `10` | Connection timeout (seconds) |
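Assuming the indexer uses node-postgres (an assumption, not confirmed by this guide), these variables map roughly onto a `Pool` configuration like this sketch:

```typescript
// Hypothetical mapping from the tuning variables to node-postgres Pool options.
function poolConfigFromEnv(env: Record<string, string | undefined>) {
  return {
    connectionString: env.DATABASE_URL,
    max: Number(env.DB_POOL_SIZE ?? 10), // connection pool size
    idleTimeoutMillis: Number(env.DB_POOL_TIMEOUT ?? 30) * 1000, // pool timeout
    connectionTimeoutMillis: Number(env.DB_CONNECT_TIMEOUT ?? 10) * 1000,
  };
}

// const pool = new Pool(poolConfigFromEnv(process.env));
```

Note the seconds-to-milliseconds conversion: the env vars are in seconds, while pool options are typically in milliseconds.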
## Health Check Endpoints

### `/health` - Detailed Health Status

Returns comprehensive health information for monitoring dashboards.

```bash
curl http://localhost:3001/health
```

Response:
```json
{
  "status": "healthy",
  "slotLag": 5,
  "lastProcessedSlot": "123456789",
  "lastActivityAt": "2024-01-15T10:30:00Z",
  "dbConnected": true,
  "rpcConnected": true,
  "indexerRunning": true,
  "uptime": 3600,
  "timestamp": "2024-01-15T10:30:05Z",
  "checks": [
    { "name": "database", "status": "pass", "message": "Connected" },
    { "name": "rpc", "status": "pass", "message": "Connected" },
    { "name": "indexer", "status": "pass", "message": "Running" },
    { "name": "slot_lag", "status": "pass", "message": "Lag: 5 slots" }
  ]
}
```
Status values:

- `healthy`: all checks passing
- `degraded`: some non-critical checks failing
- `unhealthy`: critical checks failing (returns HTTP 503)
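The rollup from individual checks to the overall status can be illustrated with a small sketch; treating a `warn` status as a non-critical failure is an assumption based on the three levels above:

```typescript
type CheckStatus = 'pass' | 'warn' | 'fail';
interface Check { name: string; status: CheckStatus; message: string; }

// Illustrative rollup: any failing check makes the service unhealthy,
// any warning (non-critical failure) degrades it, otherwise healthy.
function overallStatus(checks: Check[]): 'healthy' | 'degraded' | 'unhealthy' {
  if (checks.some((c) => c.status === 'fail')) return 'unhealthy';
  if (checks.some((c) => c.status === 'warn')) return 'degraded';
  return 'healthy';
}
```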
### `/ready` - Readiness Probe

Lightweight check for load balancer routing decisions.

```bash
curl http://localhost:3001/ready
```

Response:
```json
{
  "ready": true,
  "dbConnected": true,
  "indexerRunning": true
}
```
Use this endpoint for:
- Kubernetes readiness probes
- Load balancer health checks
- Service mesh routing
### `/metrics` - Prometheus Metrics

Exposes Prometheus-format metrics for monitoring.

```bash
curl http://localhost:3001/metrics
```

Key metrics:

- `seesaw_indexer_transactions_processed_total`
- `seesaw_indexer_slot_lag`
- `seesaw_indexer_processing_duration_seconds`
- `seesaw_api_requests_total`
- `seesaw_api_request_duration_seconds`
- `seesaw_rate_limit_exceeded_total`
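Each of these is served in the standard Prometheus text exposition format. The renderer below is a minimal illustration of that format only; real deployments generate it with a metrics library rather than by hand:

```typescript
// Minimal sketch of the Prometheus text exposition format for one counter.
// Illustrative only; a metrics library normally produces this output.
function renderCounter(name: string, help: string, value: number): string {
  return [
    `# HELP ${name} ${help}`,
    `# TYPE ${name} counter`,
    `${name} ${value}`,
  ].join('\n');
}
```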
## Scaling Guidance

### When Single-Instance Is Sufficient
Single-instance deployment is appropriate when:
- API request volume < 1000 req/sec
- Brief downtime during deployments is acceptable
- Cost optimization is important
### When to Consider Multi-Instance
Consider multi-instance deployment when:
- High availability is critical (99.99% uptime SLA)
- API request volume > 1000 req/sec
- Zero-downtime deployments required
### Multi-Instance Requirements

Before deploying multiple instances, you MUST:

1. **Implement Redis-backed nonce storage**

   ```typescript
   // Replace in src/middleware/auth.ts
   interface NonceStore {
     checkAndMark(nonce: string, expirationMs: number): Promise<boolean>;
   }
   ```

2. **Implement Redis-backed rate limiting**

   ```typescript
   // Replace in src/utils/rateLimiter.ts
   // Use Redis sorted sets for sliding window
   ```

3. **Add Redis health checks**
   - Include Redis connectivity in the `/health` endpoint
   - Add Redis metrics to the `/metrics` endpoint

4. **Handle Redis connection failures**
   - Decide whether to fail open or fail closed
   - Use a circuit breaker pattern for Redis calls
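A Redis-backed `NonceStore` satisfying that interface can be sketched with `SET ... PX ... NX`, which performs the check and the mark in one atomic command. The class name and the minimal `RedisLike` interface are assumptions; any client with an ioredis-style `set` works:

```typescript
// Minimal subset of Redis commands needed (ioredis-style signature assumed).
interface RedisLike {
  set(key: string, value: string, px: 'PX', ttlMs: number, nx: 'NX'): Promise<'OK' | null>;
}

// Hypothetical Redis-backed implementation of the NonceStore interface.
class RedisNonceStore {
  constructor(private redis: RedisLike) {}

  // SET key value PX ttl NX succeeds only if the key is absent, so the
  // check and the mark happen atomically across all indexer instances.
  async checkAndMark(nonce: string, expirationMs: number): Promise<boolean> {
    const result = await this.redis.set(`nonce:${nonce}`, '1', 'PX', expirationMs, 'NX');
    return result === 'OK';
  }
}
```

The per-key TTL doubles as the expiration timestamp the in-memory store tracked, so no separate cleanup pass is needed.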
### Redis Architecture for Multi-Instance
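With a shared Redis, every instance consults the same nonce store and the same rate-limit counters, closing both gaps at once. The sliding-window limiter from requirement 2 might be sketched over sorted sets as below; the class name and the minimal client interface are assumptions:

```typescript
// Minimal subset of Redis sorted-set commands needed (ioredis-style).
interface SortedSetClient {
  zremrangebyscore(key: string, min: number, max: number): Promise<number>;
  zcard(key: string): Promise<number>;
  zadd(key: string, score: number, member: string): Promise<number>;
  pexpire(key: string, ms: number): Promise<number>;
}

// Hypothetical shared sliding-window limiter: one sorted set per client IP,
// request timestamps as scores, consulted by every indexer instance.
class RedisSlidingWindowLimiter {
  constructor(
    private redis: SortedSetClient,
    private maxRequests: number, // e.g. RATE_LIMIT_MAX_REQUESTS
    private windowMs: number, // e.g. RATE_LIMIT_WINDOW_MS
  ) {}

  async allow(clientIp: string, now = Date.now()): Promise<boolean> {
    const key = `ratelimit:${clientIp}`;
    await this.redis.zremrangebyscore(key, 0, now - this.windowMs); // drop old entries
    if ((await this.redis.zcard(key)) >= this.maxRequests) return false;
    await this.redis.zadd(key, now, `${now}:${Math.random()}`); // unique member
    await this.redis.pexpire(key, this.windowMs); // let idle keys expire
    return true;
  }
}
```

A production version would wrap these four commands in a Lua script or `MULTI` block so the count-then-add is atomic under concurrent requests.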
## Graceful Shutdown

The indexer handles shutdown signals properly:

1. SIGTERM/SIGINT received
2. Stops accepting new HTTP connections
3. Waits up to 10 seconds for in-flight requests
4. Stops rate limiter cleanup interval
5. Stops background jobs
6. Stops transaction subscriber
7. Disconnects from database
8. Exits with code 0
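The sequence above can be sketched as follows; `makeShutdown` and `cleanup` are illustrative names rather than the indexer's actual symbols:

```typescript
import http from 'node:http';

// Hypothetical shutdown sequence: drain HTTP, bounded by a grace period,
// then run the remaining teardown steps (jobs, subscriber, database).
function makeShutdown(
  server: http.Server,
  cleanup: () => Promise<void>,
  graceMs = 10_000,
): () => Promise<void> {
  return async () => {
    // Stop accepting new connections and wait for in-flight requests...
    const drained = new Promise<void>((resolve) => server.close(() => resolve()));
    // ...but never longer than the grace period.
    const deadline = new Promise<void>((resolve) => setTimeout(resolve, graceMs).unref());
    await Promise.race([drained, deadline]);
    await cleanup(); // stop rate limiter cleanup, jobs, subscriber, database
  };
}

// Typical wiring, exiting 0 once teardown completes:
// process.once('SIGTERM', () => makeShutdown(server, cleanup)().then(() => process.exit(0)));
```

Keeping the signal handler a thin wrapper around a testable function makes the drain-with-deadline behavior easy to verify without sending real signals.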
### Kubernetes Termination

```yaml
spec:
  terminationGracePeriodSeconds: 45 # Allow time for graceful shutdown
```
### Docker Stop

```bash
# Graceful stop (sends SIGTERM)
docker stop --time 30 indexer

# Force stop (last resort)
docker kill indexer
```
## Troubleshooting

### High Slot Lag

Symptoms: `/health` shows `slotLag` > 100
Causes:
- RPC provider rate limiting
- Network latency
- Database write bottleneck
Solutions:
- Use a dedicated RPC provider
- Increase `DB_POOL_SIZE`
- Check database performance
### Database Connection Errors

Symptoms: `/health` shows `dbConnected: false`
Causes:
- Connection pool exhausted
- Database overloaded
- Network issues
Solutions:
- Increase `DB_POOL_SIZE`
- Check database connection limits
- Verify network connectivity
### Rate Limit Metrics Anomalies

Symptoms: rate limit metrics show unexpected patterns

Causes:
- Multiple instances deployed (limits bypassed)
- `TRUST_PROXY` misconfigured
- Load balancer not forwarding the client IP
Solutions:
- Verify single-instance deployment
- Check the `X-Forwarded-For` header configuration
- Review load balancer settings
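The interaction between `TRUST_PROXY` and `X-Forwarded-For` can be illustrated with a sketch; `clientIp` is a hypothetical helper (frameworks like Express implement this via their `trust proxy` setting):

```typescript
// Illustrative client-IP resolution: only honor X-Forwarded-For when the
// deployment actually sits behind a trusted proxy; otherwise the header
// is attacker-controlled and would let clients evade rate limiting.
function clientIp(
  headers: Record<string, string | undefined>,
  socketIp: string,
  trustProxy: boolean,
): string {
  const xff = headers['x-forwarded-for'];
  if (trustProxy && xff) {
    return xff.split(',')[0].trim(); // left-most entry is the original client
  }
  return socketIp;
}
```

This is why `TRUST_PROXY` must match the topology: enabled without a proxy, clients spoof their IP; disabled behind one, every request appears to come from the load balancer and shares one rate-limit bucket.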
## Security Checklist

Before production deployment:

- Single instance configured (or Redis implemented)
- `NODE_ENV=production` set
- `SOLANA_RPC_URL` is not devnet
- `TRUST_PROXY` matches deployment topology
- `CORS_ALLOWED_ORIGINS` is restrictive
- Database credentials are secrets (not env vars in config)
- Sentry DSN configured for error tracking
- Health checks configured in orchestrator
- Prometheus scraping configured
- Log aggregation configured
## Next Steps
- Set up Monitoring
- Review Cranking for full protocol operation
- Configure Alerting