PostgreSQL Message Broker

Brighter supports using PostgreSQL as a message broker, enabling pub/sub messaging patterns on your existing PostgreSQL infrastructure.

Overview

The PostgreSQL message broker uses a table-based queue approach where messages are stored in a PostgreSQL table and retrieved by consumers. This provides a lightweight messaging solution that leverages your existing PostgreSQL database without requiring additional message broker infrastructure.

How It Works

  1. Producer: Inserts messages into a queue store table

  2. Consumer: Retrieves messages from the queue store table based on visibility timeout

  3. Acknowledgement: Deletes processed messages from the table

  4. Reject/Requeue: Deletes or updates messages based on processing outcome

The system uses a visibility timeout mechanism (similar to AWS SQS) where messages become invisible to other consumers once retrieved, preventing duplicate processing.


Benefits

Use Existing Infrastructure

  • No additional services: Uses your existing PostgreSQL database

  • Simplified operations: One less service to manage, monitor, and maintain

  • Reduced costs: No separate message broker licensing or infrastructure

Transactional Messaging

  • Atomic operations: Messages and business data in the same database

  • Strong consistency: ACID guarantees for message operations

  • Simplified transactions: No distributed transactions needed

Familiar Tooling

  • Standard SQL: Use familiar PostgreSQL tools for monitoring and debugging

  • Built-in monitoring: Query tables directly to see queue depth and message status

  • Easy troubleshooting: Direct database access for investigating issues


When to Use

Ideal For:

  • Low to moderate message volumes (< 1000 messages/second)

  • Applications already using PostgreSQL for data persistence

  • Transactional messaging scenarios requiring atomicity with database operations

  • Development and testing with simplified infrastructure

  • Microservices where each service has its own PostgreSQL database

Not Suitable For:

  • High-volume scenarios (> 1000 messages/second)

  • Large messages (PostgreSQL has practical limits for row sizes)

  • Complex routing requirements (better served by RabbitMQ or Kafka)

  • Cross-organization messaging (where dedicated broker provides better isolation)


Limitations

Performance Constraints

  • Database overhead: Message operations add load to your database

  • Polling model: Consumers poll the database periodically (not push-based)

  • Scalability limits: Database connection pooling and table locking can become bottlenecks

Message Size

  • Practical limit: ~1MB per message (PostgreSQL row size limits)

  • Recommendation: Use Claim Check pattern for large payloads

No Native Routing

  • Simple pub/sub only: No complex routing like RabbitMQ exchanges

  • Queue-based: Each consumer reads from a specific queue (channel)

  • Manual fanout: Publish to multiple channels for fanout patterns


Configuration

NuGet Package

Install the PostgreSQL messaging gateway package:
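The gateway ships as a NuGet package (package id follows the usual Paramore naming; confirm the exact id on NuGet for your Brighter version):

```shell
dotnet add package Paramore.Brighter.MessagingGateway.Postgres
```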

Database Table

Create the queue store table in your PostgreSQL database:
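A minimal sketch of the queue store table, using the brighter_messages name and the queue/visible_timeout columns referenced elsewhere on this page; the exact schema ships with Brighter, so prefer its DDL scripts:

```sql
-- Sketch only: column set is illustrative; use the DDL that ships with Brighter.
CREATE TABLE IF NOT EXISTS brighter_messages (
    id              BIGSERIAL PRIMARY KEY,
    queue           VARCHAR(255) NOT NULL,   -- channel / routing key
    content         JSONB NOT NULL,          -- JSON when BinaryMessagePayload = false
    visible_timeout TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Critical for retrieval performance (see the index note below)
CREATE INDEX IF NOT EXISTS ix_brighter_messages_queue_visible
    ON brighter_messages (queue, visible_timeout);
```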

Index Requirements: The index on (queue, visible_timeout) is critical for performance.


Producer Configuration

Basic Producer Setup
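A sketch of producer wiring; constructor and factory signatures vary between Brighter versions, so treat the shapes below as illustrative. The property names come from the configuration tables later on this page:

```csharp
// Shared database settings (connection string is a placeholder;
// the queueStoreTable named parameter is an assumption)
var rdbConfig = new RelationalDatabaseConfiguration(
    "Host=localhost;Database=app;Username=app;Password=secret",
    queueStoreTable: "brighter_messages");

// Describe what this producer publishes
var publication = new PostgresPublication
{
    Topic = new RoutingKey("order.created"),
    SchemaName = "public",          // the default
    BinaryMessagePayload = true     // store payloads as JSONB
};
```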

Publishing Messages
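With the producer registered, messages are published through Brighter's command processor. OrderCreated is a hypothetical event type, and the Event base-class constructor shown follows the V9-style Guid signature (it may differ in your version):

```csharp
// Hypothetical event published to the "order.created" queue
public class OrderCreated : Event
{
    public string OrderId { get; }
    public OrderCreated(string orderId) : base(Guid.NewGuid()) => OrderId = orderId;
}

// Post writes the message into the queue store table via the Postgres producer
commandProcessor.Post(new OrderCreated("o-123"));
```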


Consumer Configuration

Basic Consumer Setup
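A sketch of a subscription using the options documented in the tables below; the constructor parameter names and order may differ in your Brighter version:

```csharp
var subscription = new PostgresSubscription<OrderCreated>(
    new SubscriptionName("order-created-consumer"),
    new ChannelName("order.created"),         // consumer channel
    new RoutingKey("order.created"),          // queue to read from
    bufferSize: 1,                            // messages per poll
    noOfPerformers: 2,                        // concurrent consumer threads
    visibleTimeout: TimeSpan.FromSeconds(30)  // 2-3x expected processing time
);
```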

Consuming Messages
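Messages are handled like any other Brighter request: a normal return acknowledges the message (deleting it from the table), while an exception triggers reject/requeue handling. OrderCreated is a hypothetical event type:

```csharp
public class OrderCreatedHandler : RequestHandler<OrderCreated>
{
    public override OrderCreated Handle(OrderCreated @event)
    {
        Console.WriteLine($"Order {@event.OrderId} created");
        return base.Handle(@event);
    }
}
```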


Configuration Options

PostgresPublication

| Property | Type | Description | Default |
| --- | --- | --- | --- |
| Topic | RoutingKey | Message routing key (queue name) | Required |
| SchemaName | string? | Database schema name | "public" |
| QueueStoreTable | string? | Queue store table name | From configuration |
| BinaryMessagePayload | bool? | Use JSONB instead of JSON | From configuration |

PostgresSubscription

| Property | Type | Description | Default |
| --- | --- | --- | --- |
| ChannelName | ChannelName | Consumer channel name | Required |
| RoutingKey | RoutingKey | Message routing key (queue name) | Required |
| BufferSize | int | Messages to retrieve per poll | 1 |
| NoOfPerformers | int | Concurrent consumer threads | 1 |
| VisibleTimeout | TimeSpan | Message visibility timeout | 30 seconds |
| SchemaName | string? | Database schema name | "public" |
| QueueStoreTable | string? | Queue store table name | From configuration |
| BinaryMessagePayload | bool? | Expect JSONB payloads | From configuration |
| TableWithLargeMessage | bool | Support for large messages as streams | false |

RelationalDatabaseConfiguration

| Property | Type | Description |
| --- | --- | --- |
| ConnectionString | string | PostgreSQL connection string |
| QueueStoreTable | string | Default queue store table name |
| SchemaName | string | Default schema name |
| BinaryMessagePayload | bool | Default to JSONB |


Message Visibility

The PostgreSQL message broker uses a visibility timeout mechanism to prevent duplicate processing:

How It Works

  1. Message Published: visible_timeout set to CURRENT_TIMESTAMP

  2. Message Retrieved: Consumer reads messages where visible_timeout <= CURRENT_TIMESTAMP and pushes visible_timeout forward by the subscription's VisibleTimeout

  3. Processing: The message remains invisible to other consumers until that timeout elapses; the timeout is not extended while the handler runs

  4. Acknowledged: Message deleted from table

  5. Timeout Expires: If not acknowledged, message becomes visible again

Visibility Timeout Example
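Conceptually, a poll claims a message and pushes its visibility forward in a single statement. The SQL below is illustrative, not Brighter's exact query; it assumes the brighter_messages table with queue and visible_timeout columns:

```sql
-- Claim one visible message and make it invisible for 30 seconds
UPDATE brighter_messages
SET    visible_timeout = CURRENT_TIMESTAMP + INTERVAL '30 seconds'
WHERE  id = (
    SELECT id
    FROM   brighter_messages
    WHERE  queue = 'order.created'
      AND  visible_timeout <= CURRENT_TIMESTAMP
    ORDER  BY id
    FOR UPDATE SKIP LOCKED   -- lets concurrent consumers claim different rows
    LIMIT  1
)
RETURNING id, content;
```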

Recommendation: Set visibility timeout to 2-3x your expected processing time to account for retries and delays.


JSON vs JSONB

PostgreSQL supports two JSON data types:

| Feature | JSON | JSONB |
| --- | --- | --- |
| Storage | Text-based | Binary |
| Performance | Slower queries | Faster queries |
| Size | Smaller | Larger (pre-parsed) |
| Indexing | Limited | Full indexing support |
| Recommendation | Low volume | Production use |

Configuration
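JSONB is opted into via BinaryMessagePayload, either per publication/subscription or as the RelationalDatabaseConfiguration default (a sketch using the properties documented above):

```csharp
var publication = new PostgresPublication
{
    Topic = new RoutingKey("order.created"),
    BinaryMessagePayload = true   // use JSONB instead of JSON
};
```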


Scheduled Messages

PostgreSQL message broker supports message scheduling using the visibility timeout:

How it works: The visible_timeout is set to CURRENT_TIMESTAMP + delay, making the message invisible until the scheduled time.
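In SQL terms, the effect is equivalent to inserting the row with a future visibility (illustrative, assuming the brighter_messages table):

```sql
-- Message becomes consumable five minutes from now
INSERT INTO brighter_messages (queue, content, visible_timeout)
VALUES ('order.created',
        '{"orderId": "o-123"}',
        CURRENT_TIMESTAMP + INTERVAL '5 minutes');
```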


Transactional Messaging

A key advantage of PostgreSQL as a message broker is transactional messaging with your business data:

Using the Outbox Pattern

See Outbox Pattern and PostgreSQL Outbox for more details.
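A hedged sketch of the outbox flow; DepositPost/ClearOutbox overloads and transaction-provider types vary by Brighter version and database package:

```csharp
// Inside a database transaction that also writes your business data:
// DepositPost stores the message in the outbox within that transaction,
// so the business change and the message commit (or roll back) together.
var messageId = commandProcessor.DepositPost(new OrderCreated("o-123"));

// After commit, dispatch deposited messages to the queue store table
// (an outbox sweeper can do this automatically instead).
commandProcessor.ClearOutbox(messageId);
```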


Monitoring and Observability

Query Queue Depth
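Assuming the brighter_messages table, the visible rows per queue give the backlog:

```sql
SELECT queue, COUNT(*) AS depth
FROM   brighter_messages
WHERE  visible_timeout <= CURRENT_TIMESTAMP
GROUP  BY queue;
```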

Query In-Flight Messages
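In-flight messages are those currently claimed by a consumer, i.e. invisible:

```sql
SELECT queue, COUNT(*) AS in_flight
FROM   brighter_messages
WHERE  visible_timeout > CURRENT_TIMESTAMP
GROUP  BY queue;
```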

Find Stuck Messages
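Rows that became visible long ago but were never consumed usually indicate a stopped or misconfigured consumer:

```sql
SELECT id, queue, visible_timeout
FROM   brighter_messages
WHERE  visible_timeout <= CURRENT_TIMESTAMP - INTERVAL '1 hour'
ORDER  BY visible_timeout;
```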

OpenTelemetry Integration

PostgreSQL message broker operations are automatically traced when OpenTelemetry is configured:
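A sketch of wiring the tracer; the ActivitySource name is an assumption (check Brighter's telemetry docs), and AddNpgsql comes from the Npgsql.OpenTelemetry package:

```csharp
services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddSource("Paramore.Brighter")  // assumed source name
        .AddNpgsql());                   // optionally trace the SQL itself
```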


Best Practices

1. Use JSONB for Production

2. Set Appropriate Visibility Timeout

3. Use Connection Pooling
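Npgsql pools connections by default; tune the pool through connection string keywords:

```csharp
var connectionString =
    "Host=localhost;Database=app;Username=app;Password=secret;" +
    "Maximum Pool Size=50;Minimum Pool Size=5";
```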

4. Monitor Queue Depth

Set up alerts for queue depth:
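For example, flag any queue whose visible backlog crosses a threshold (assuming the brighter_messages table):

```sql
SELECT queue, COUNT(*) AS depth
FROM   brighter_messages
WHERE  visible_timeout <= CURRENT_TIMESTAMP
GROUP  BY queue
HAVING COUNT(*) > 1000;   -- alert threshold, tune to your workload
```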

5. Index Your Queue Table

6. Regular Cleanup

Implement cleanup for old messages (if not using auto-vacuum):
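For example, a scheduled job could delete rows that have sat unconsumed well past any reasonable retry window (illustrative; acknowledged messages are already deleted by Brighter):

```sql
DELETE FROM brighter_messages
WHERE  visible_timeout <= CURRENT_TIMESTAMP - INTERVAL '7 days';
```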

7. Use Claim Check for Large Messages

For messages > 100KB, use the Claim Check pattern:
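A hedged sketch of Brighter's claim-check transform applied to a message mapper; the attribute name and parameters may differ in your version, and OrderReportGenerated is a hypothetical message type:

```csharp
public class OrderReportMapper : IAmAMessageMapper<OrderReportGenerated>
{
    [ClaimCheck(0, thresholdInKb: 100)]  // offload bodies above ~100 KB
    public Message MapToMessage(OrderReportGenerated request)
    {
        // Serialize as usual; the transform swaps oversized bodies for a
        // claim-check reference stored externally.
        throw new NotImplementedException("serialize request here");
    }

    public OrderReportGenerated MapToRequest(Message message)
        => throw new NotImplementedException("deserialize message here");
}
```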

8. Separate Queue Tables for High Volume

For high-volume queues, use dedicated tables:
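For example, copy the shared table's structure into a per-queue table, then point QueueStoreTable on the relevant publication and subscription at it (assumes a brighter_messages table exists):

```sql
CREATE TABLE orders_queue (LIKE brighter_messages INCLUDING ALL);
```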


Comparison with Other Transports

| Feature | PostgreSQL | RabbitMQ | Kafka | AWS SQS |
| --- | --- | --- | --- | --- |
| Setup Complexity | Low | Medium | High | Low |
| Throughput | Low-Medium | High | Very High | Medium |
| Message Size | ~1MB | 128MB | ~1MB | 256KB |
| Persistence | Database | Disk/Memory | Disk | Managed |
| Routing | Simple | Advanced | Topic-based | Simple |
| Transactional | Yes (local) | No | No | No |
| Ordering | Queue-level | Queue-level | Partition-level | FIFO queues |
| Operational Cost | Low (existing DB) | Medium | High | Pay-per-use |
| Best For | Low volume, transactional | General messaging | Event streaming | AWS ecosystem |


Troubleshooting

Messages Not Being Consumed

Problem: Messages remain in the queue but are not processed.

Solutions:

  1. Check visibility timeout hasn't expired:

  2. Verify consumer is running and subscriptions match queue names

  3. Check database connection pooling isn't exhausted

  4. Review logs for consumer exceptions
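For step 1, inspect whether the stuck rows are actually visible (assuming the brighter_messages table):

```sql
SELECT id, queue, visible_timeout,
       visible_timeout <= CURRENT_TIMESTAMP AS is_visible
FROM   brighter_messages
WHERE  queue = 'order.created'
ORDER  BY id
LIMIT  20;
```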

High Database Load

Problem: PostgreSQL CPU/disk usage is high.

Solutions:

  1. Verify index exists on (queue, visible_timeout)

  2. Use JSONB instead of JSON for better performance

  3. Reduce BufferSize if retrieving too many messages at once

  4. Consider partitioning the queue table for high volume

  5. Use connection pooling to reduce connection overhead

Messages Processed Multiple Times

Problem: Same message processed by multiple consumers.

Solutions:

  1. Increase visibleTimeout to allow more processing time

  2. Implement Inbox pattern for idempotency

  3. Check for long-running handlers that exceed visibility timeout

  4. Verify only one consumer process per subscription

Slow Message Retrieval

Problem: Consumer polls are slow.

Solutions:

  1. Add index: CREATE INDEX ON brighter_messages(queue, visible_timeout)

  2. Use JSONB instead of JSON

  3. Increase timeOut to reduce polling frequency

  4. Consider using bufferSize > 1 to retrieve multiple messages per poll


Additional Resources
