Connection Pooling
Optimizing PostgreSQL connection pooling for event streaming.
Overview
Connection pooling is critical for PostgreSQL performance in event streaming scenarios. Npgsql (the .NET PostgreSQL driver) provides built-in connection pooling, so each operation reuses an already-open physical connection instead of paying the TCP, TLS, and authentication handshake on every call, which improves throughput and reduces latency.
Default Pooling Behavior
Npgsql enables connection pooling by default:
// Connection pooling enabled automatically
var connectionString = "Host=localhost;Database=eventstore;Username=postgres;Password=postgres";
builder.Services.AddPostgresEventStreaming(connectionString);
// Pool created automatically with defaults:
// - Minimum Pool Size: 1
// - Maximum Pool Size: 100
// - Connection Idle Lifetime: 300 seconds
Pool Configuration
Connection String Parameters
| Parameter | Default | Description |
|---|---|---|
| Minimum Pool Size | 1 | Minimum connections maintained |
| Maximum Pool Size | 100 | Maximum connections allowed |
| Connection Idle Lifetime | 300 | Seconds before an idle connection is closed |
| Connection Pruning Interval | 10 | Seconds between pruning checks |
| Enlist | true | Auto-enlist in TransactionScope |
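As an alternative to hand-building strings, the same parameters can be set programmatically with NpgsqlConnectionStringBuilder. This is a small sketch; the AddPostgresEventStreaming call is the same registration used throughout this page:
using Npgsql;

// Build the connection string programmatically instead of concatenating strings
var csb = new NpgsqlConnectionStringBuilder
{
    Host = "localhost",
    Database = "eventstore",
    Username = "postgres",
    Password = "postgres",
    MinPoolSize = 10,      // Minimum Pool Size
    MaxPoolSize = 100,     // Maximum Pool Size
    CommandTimeout = 30    // Command Timeout (seconds)
};

// Any parameter can also be set by its connection string name via the indexer
csb["Connection Idle Lifetime"] = 300;

builder.Services.AddPostgresEventStreaming(csb.ConnectionString);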
Basic Configuration
var connectionString =
"Host=localhost;" +
"Database=eventstore;" +
"Username=postgres;" +
"Password=postgres;" +
"Minimum Pool Size=10;" + // Keep 10 warm connections
"Maximum Pool Size=100"; // Allow up to 100 connections
builder.Services.AddPostgresEventStreaming(connectionString);
High-Throughput Configuration
For high-traffic scenarios:
var connectionString =
"Host=localhost;Database=eventstore;Username=postgres;Password=postgres;" +
"Minimum Pool Size=20;" + // More warm connections
"Maximum Pool Size=200;" + // Higher ceiling
"Connection Idle Lifetime=300;" + // 5 minutes
"Connection Pruning Interval=10;" + // Check every 10 seconds
"Command Timeout=30;" + // 30-second query timeout
"Timeout=15"; // 15-second connection timeout
Low-Latency Configuration
For latency-sensitive applications:
var connectionString =
"Host=localhost;Database=eventstore;Username=postgres;Password=postgres;" +
"Minimum Pool Size=50;" + // Many warm connections
"Maximum Pool Size=100;" + // Limit overhead
"Connection Idle Lifetime=600;" + // Keep connections longer
"No Reset On Close=true"; // Skip connection reset
Sizing the Pool
Calculate Pool Size
Formula:
Pool Size = (Number of Workers) × (Concurrent Operations per Worker) × 1.2
Example:
Workers: 4
Concurrent operations: 10
Pool size: 4 × 10 × 1.2 = 48
Use: Minimum Pool Size = 50, Maximum Pool Size = 100
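If you want the calculation in code, a minimal sketch follows; the SuggestPoolSize helper and its rounding rules are illustrative choices, not part of Npgsql or the event streaming library:
// Rough pool sizing: workers × concurrent operations per worker × 20% headroom
static (int Min, int Max) SuggestPoolSize(int workers, int concurrentOpsPerWorker)
{
    var estimate = (int)Math.Ceiling(workers * concurrentOpsPerWorker * 1.2);

    var min = (estimate + 9) / 10 * 10;   // round up to the nearest 10, e.g. 48 -> 50
    var max = Math.Max(min * 2, 100);     // leave comfortable headroom above the minimum

    return (min, max);
}

// Example from above: 4 workers × 10 concurrent operations => Min=50, Max=100
var (minPool, maxPool) = SuggestPoolSize(4, 10);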
Per-Service Sizing
Different services may need different pool sizes:
// Write-heavy service (event publishing)
services.AddPostgresEventStreaming(
"Host=localhost;Database=eventstore;..." +
"Minimum Pool Size=30;Maximum Pool Size=100");
// Read-heavy service (projections)
services.AddPostgresEventStreaming(
"Host=localhost;Database=eventstore;..." +
"Minimum Pool Size=50;Maximum Pool Size=200");
Monitoring Connection Pool
Pool Statistics via Event Counters
Npgsql does not expose a pool statistics API on NpgsqlConnection. Recent versions instead publish pool metrics as .NET event counters on the "Npgsql" event source, with counter names such as idle-connections, busy-connections, and connection-pools (check the Npgsql documentation for the exact set in your version). The quickest way to inspect them is dotnet-counters:
# Attach to a running process and stream the Npgsql counters
dotnet counters monitor --process-id <pid> Npgsql
Metrics Integration
To forward the counters into your own logging or metrics pipeline, an in-process EventListener works. The sketch below assumes the same injected metrics sink used elsewhere on this page (typed here as a hypothetical IMetrics):
using System.Collections.Generic;
using System.Diagnostics.Tracing;

public sealed class NpgsqlPoolMetricsListener : EventListener
{
    private readonly IMetrics _metrics;

    public NpgsqlPoolMetricsListener(IMetrics metrics) => _metrics = metrics;

    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "Npgsql")
        {
            // Ask the event source to publish counter values every 10 seconds
            EnableEvents(source, EventLevel.Informational, EventKeywords.All,
                new Dictionary<string, string?> { ["EventCounterIntervalSec"] = "10" });
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        if (eventData.EventName != "EventCounters" || eventData.Payload is null)
            return;

        var payload = (IDictionary<string, object>)eventData.Payload[0]!;
        var name = (string)payload["Name"];                    // e.g. "idle-connections"
        var value = payload.TryGetValue("Mean", out var mean)  // polling counters report "Mean",
            ? Convert.ToDouble(mean)                           // incrementing counters report "Increment"
            : Convert.ToDouble(payload["Increment"]);

        _metrics.RecordGauge($"postgres.pool.{name}", value);
    }
}
Create one instance of the listener at startup (for example as a singleton) and it will receive counter updates for the lifetime of the process.
Best Practices
✅ DO
1. Set appropriate minimum pool size:
// ✅ Good - Warm connections ready
"Minimum Pool Size=20;Maximum Pool Size=100"
2. Use connection pooling:
// ✅ Good - Reuse connections
using var connection = new NpgsqlConnection(connectionString);
await connection.OpenAsync();
// Connection returned to pool on dispose
3. Dispose connections properly:
// ✅ Good - using statement
using (var connection = new NpgsqlConnection(connectionString))
{
    await connection.OpenAsync();
    // Use connection
} // Returned to pool
// Or
await using var connection = new NpgsqlConnection(connectionString);
4. Set timeouts:
// ✅ Good - Prevent hung connections
"Command Timeout=30;Timeout=15"
5. Monitor pool exhaustion:
try
{
    using var connection = new NpgsqlConnection(connectionString);
    await connection.OpenAsync();
}
catch (NpgsqlException ex) when (ex.Message.Contains("pool"))
{
    _logger.LogError("Connection pool exhausted!");
    _metrics.IncrementCounter("postgres.pool.exhausted");
}
❌ DON'T
1. Don't create new connection per operation:
// ❌ Bad - Opens and returns a pooled connection on every iteration
for (int i = 0; i < 1000; i++)
{
    using var connection = new NpgsqlConnection(connectionString);
    await connection.OpenAsync();
    // Process
} // 1,000 open/close round trips for one batch - reuse a single connection instead
2. Don't set pool size too low:
// ❌ Bad - Will bottleneck
"Maximum Pool Size=5" // Too small for high throughput
3. Don't forget to dispose:
// ❌ Bad - Connection leak
var connection = new NpgsqlConnection(connectionString);
await connection.OpenAsync();
// Never disposed - pool exhaustion!
4. Don't disable pooling:
// ❌ Bad - Poor performance
"Pooling=false" // Creates new connection every time
Troubleshooting
Pool Exhaustion
Symptoms:
Npgsql.NpgsqlException: The connection pool has been exhausted
Solutions:
- Increase pool size:
"Maximum Pool Size=200" // Increase limit
- Fix connection leaks:
// Check for connections not being disposed
// Use using statements or await using
- Reduce concurrency:
// Limit parallel operations
var semaphore = new SemaphoreSlim(50); // Max 50 concurrent
await semaphore.WaitAsync();
try
{
    // Database operation
}
finally
{
    semaphore.Release();
}
Connection Timeouts
Symptoms:
Npgsql.NpgsqlException: Timeout during connection attempt
Solutions:
- Increase connection timeout:
"Timeout=30" // 30 seconds to establish connection
- Increase minimum pool size:
"Minimum Pool Size=20" // More warm connections
- Check database server load:
-- Check active connections
SELECT COUNT(*) FROM pg_stat_activity;
-- Check slow queries
SELECT pid, now() - query_start as duration, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY duration DESC;
Slow Queries
Symptoms: Commands take longer than expected.
Solutions:
- Set command timeout:
"Command Timeout=30" // Cancel after 30 seconds
- Optimize queries:
-- Add indexes ("offset" must be quoted: OFFSET is a reserved word in PostgreSQL)
CREATE INDEX idx_events_stream_offset ON events(stream_name, "offset");
-- Analyze query plans
EXPLAIN ANALYZE
SELECT * FROM events WHERE stream_name = 'orders' AND "offset" > 1000;
- Use read replicas:
// Write to primary
var writeConnectionString = "Host=primary.db;...";
// Read from replica
var readConnectionString = "Host=replica.db;...";
Advanced Configuration
Multiple Pools
Use different pools for different purposes:
// Write pool
services.AddSingleton<IWriteEventStore>(sp =>
{
    var connectionString = "Host=primary;...;Maximum Pool Size=100";
    return new PostgresEventStreamStore(connectionString);
});
// Read pool
services.AddSingleton<IReadEventStore>(sp =>
{
    var connectionString = "Host=replica;...;Maximum Pool Size=200";
    return new PostgresEventStreamStore(connectionString);
});
Connection Multiplexing
Npgsql supports connection multiplexing, which sends commands from many logical connections over a small set of shared physical connections:
// Configure multiplexing
var connectionString =
"Host=localhost;Database=eventstore;..." +
"Multiplexing=true;" + // Enable multiplexing
"Maximum Pool Size=10"; // Fewer connections needed
Benefits:
- Fewer physical connections
- Better for serverless/container environments
- Reduced database connection overhead
Limitations:
- Not compatible with all features (transactions, LISTEN/NOTIFY)
- Higher latency for individual commands