
FAQ

This FAQ addresses common questions about using Brighter and Darker, organized by category. For V10-specific changes, see the V10 Migration Guide.

Getting Started

How do I get started with Brighter?

Start with the simplest possible setup and add complexity as needed:

  1. Read Show me the code! for quick examples

  2. Start simple: Use Send() and handlers without external messaging

  3. Add external bus: Use PostAsync() with InMemory Outbox (development only)

  4. Add reliability: Switch to DepositPost + database-backed Outbox (production)

  5. Add deduplication: Add Inbox pattern for consumers

  6. Explore samples: Check out the WebAPI Sample

Philosophy: Don't over-engineer early. Use defaults, avoid premature abstraction, and add features as you need them.

Do I need to write message mappers in V10?

No! In V10, you typically don't need explicit message mappers for JSON serialization.

Brighter V10 provides default message mappers that automatically serialize/deserialize JSON messages:

  • JsonMessageMapper - Binary-mode CloudEvents (default)

  • CloudEventJsonMessageMapper - Structured-mode CloudEvents

When you still need custom mappers:

  • Non-JSON formats (Avro, ProtoBuf)

  • Transform pipelines (ClaimCheck, Compression, Encryption)

  • Custom serialization logic

See: Default Message Mappers

What's the difference between Command, Event, and Query?

  • Command: An instruction to do something (may update state). Has exactly one handler. Example: CreateOrder, UpdateUser

  • Event: A notification that something happened (past tense). Can have multiple handlers. Example: OrderCreated, UserUpdated

  • Query: A request for data (does not update state). Returns a result. Example: GetOrderById, FindUsers

Commands and Events use Brighter (Command Processor). Queries use Darker (Query Processor).

See: Show me the code!

Should I use InMemory options in production?

Generally no - InMemory options (Outbox, Inbox, Scheduler, Transport) are not durable. If your application crashes, you lose data.

InMemory is for:

  • Development and testing (fast, zero dependencies)

  • Demos and experimentation

  • Limited production scenarios where data loss is acceptable

For production, use:

  • Database-backed Outbox/Inbox (SQL Server, PostgreSQL, MySQL, DynamoDB, MongoDB)

  • Production schedulers (Quartz, Hangfire, AWS Scheduler, Azure Scheduler)

  • Real message brokers (RabbitMQ, Kafka, AWS SNS/SQS, Azure Service Bus)

See: InMemory Options

How do I structure my handlers?

Follow these guidelines:

  1. One responsibility per handler - Each handler should do one thing

  2. Use attributes for cross-cutting concerns - Logging, retry, timeouts via attributes

  3. Don't create handler base classes - Use attributes instead of inheritance for common functionality

  4. Keep handlers thin - Delegate to domain services or repositories

  5. Avoid sharing state - Handlers should be stateless (use Request Context for passing data)

Bad (custom base class):
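
A sketch of the anti-pattern (the class and method names are illustrative):

```csharp
// Anti-pattern: cross-cutting concerns baked into a custom base class that every handler inherits
public abstract class LoggingRetryHandler<T> : RequestHandler<T> where T : class, IRequest
{
    public override T Handle(T command)
    {
        // logging, timing, and retry logic hidden here...
        return base.Handle(command);
    }
}

public class PlaceOrderHandler : LoggingRetryHandler<PlaceOrder>
{
    public override PlaceOrder Handle(PlaceOrder command)
    {
        // business logic is now entangled with the base class behaviour
        return base.Handle(command);
    }
}
```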

Good (use attributes):
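
A sketch of the preferred shape; the attribute parameters (step, timing, pipeline name) and the domain service are illustrative:

```csharp
// Preferred: a thin, stateless handler; cross-cutting concerns are declared via attributes
public class PlaceOrderHandler : RequestHandler<PlaceOrder>
{
    private readonly IOrderService _orders;   // hypothetical domain service

    public PlaceOrderHandler(IOrderService orders) => _orders = orders;

    [RequestLogging(step: 1, timing: HandlerTiming.Before)]
    [UseResiliencePipeline("order-pipeline", step: 2)]
    public override PlaceOrder Handle(PlaceOrder command)
    {
        _orders.Place(command.OrderId);       // delegate the real work
        return base.Handle(command);
    }
}
```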


Configuration

What's the difference between AddProducers and AddConsumers?

In V10, configuration was simplified:

  • AddProducers(): Configures message producers (sending messages to external bus). Replaces V9's UseExternalBus()

  • AddConsumers(): Configures message consumers (receiving messages from external bus). Replaces V9's AddServiceActivator()

Example:
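
A rough sketch of the V10 shape; the option properties shown here are assumptions, so check the Basic Configuration page for the exact configuration objects for your transport:

```csharp
// Producers: configure Brighter's command processor with an external bus
services.AddBrighter()
    .AddProducers(configure =>
    {
        configure.ProducerRegistry = producerRegistry;   // your transport's producer registry
        configure.Outbox = outbox;                       // e.g. a database-backed outbox
    })
    .AutoFromAssemblies();

// Consumers: configure the service activator that runs the message pumps
services.AddConsumers(options =>
{
    options.Subscriptions = subscriptions;               // your Subscription definitions
    options.DefaultChannelFactory = channelFactory;      // your transport's channel factory
})
.AutoFromAssemblies();
```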

See: Basic Configuration, V10 Migration Guide

When should I use Reactor vs Proactor?

Reactor (blocking I/O):

  • Faster per-message performance (no context switches)

  • Better for CPU-bound operations

Proactor (non-blocking I/O):

  • Better throughput (yields threads during I/O)

  • Slightly slower per-message (context switch overhead)

Configure with:
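
For example, on a subscription (the RabbitMQ subscription type is used purely as an illustration; other constructor parameters are elided):

```csharp
// Choose the pump model per subscription with messagePumpType
var subscription = new RmqSubscription<OrderPlaced>(
    new SubscriptionName("order-placed"),
    new ChannelName("order.placed.queue"),
    new RoutingKey("order.placed"),
    messagePumpType: MessagePumpType.Proactor);   // or MessagePumpType.Reactor
```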

Recommendation: Use Proactor for most scenarios (better scalability). Use Reactor for CPU-intensive workloads.

Specific transports may behave better with particular message pump models. For example, Kafka works better with the Reactor model, and RabbitMQ V7+ with the Proactor model.

See: Reactor and Proactor

How do I configure CloudEvents?

In V10, CloudEvents support is built-in. Configure in your Publication:
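
A sketch of the idea; the property names mirror the CloudEvents attributes and are assumptions here, so see the CloudEvents Support page for the exact Publication members:

```csharp
var publication = new RmqPublication
{
    Topic = new RoutingKey("order.placed"),
    Source = new Uri("https://example.com/order-service"),                  // CloudEvents 'source'
    Type = "io.example.order-placed",                                       // CloudEvents 'type'
    Subject = "orders",                                                     // CloudEvents 'subject'
    DataSchema = new Uri("https://example.com/schemas/order-placed.json")   // CloudEvents 'dataschema'
};
```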

Binary vs Structured mode:

  • Binary-mode (default): CloudEvents attributes in headers, data in body. Use with RabbitMQ, Kafka, AMQP.

  • Structured-mode: Entire CloudEvents envelope in JSON body. Use with AWS SNS/SQS (limited headers).

See: CloudEvents Support


Messaging

What's the difference between Post and DepositPost?

  • Post(): Writes to the InMemoryOutbox and then publishes via the transport. No database transaction. Simple, but no guarantees. A Sweeper can pick up failed sends; because the outbox is local to the process, you can run the Sweeper in the same process.

  • DepositPost(): Writes to the Outbox. Pass in your database transaction provider so that the message write participates in the same transaction as your entity write; this guarantees that entity writes and message writes succeed or fail together. You may use ClearOutbox to publish immediately, passing in the Ids of the messages to publish, or rely on your Sweeper to process un-dispatched messages. Waiting for the Sweeper increases latency, because you wait for the next polling loop to publish (see the sketch below).
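
A sketch of the DepositPost flow; the transaction-provider type and the exact parameter names depend on your database integration, so treat them as indicative:

```csharp
// Entity write and outbox write share one transaction via the transaction provider
await transactionProvider.GetTransactionAsync(ct);
orders.Add(order);   // hypothetical repository enlisted in the same transaction
var messageId = await commandProcessor.DepositPostAsync(new OrderPlaced(order.Id), transactionProvider);
await transactionProvider.CommitAsync(ct);

// Either publish immediately, or omit this and let the Sweeper pick the message up on its next polling loop
await commandProcessor.ClearOutboxAsync(new[] { messageId });
```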

Use Post when:

  • Getting started (simplest approach)

  • Using InMemory Outbox (development)

  • Message loss is acceptable

Use DepositPost when:

  • Production systems

  • Need transactional guarantees

  • Database-backed Outbox

See: Outbox Support

When should I use SendAsync or PublishAsync vs External Bus?

SendAsync or PublishAsync (in-process):

  • Avoids blocking I/O

  • Increases throughput (thread reuse)

  • Caller waits for result

  • Simple programming model

  • Work lost if process crashes

External Bus (PostAsync / message queue):

  • Hands off work to another process

  • Caller doesn't wait (eventual consistency)

  • Reliable (guaranteed delivery via queue)

  • More complex (async notification of completion)

  • Work survives process crashes

Recommendations:

  • Use async handlers for operations < 200ms

  • Use External Bus for long-running operations (> 200ms)

  • Use External Bus for CPU-bound operations

  • Use External Bus when reliability matters (work survives crashes)

Can I handle multiple message types on one queue/topic?

Yes! Use Dynamic Deserialization with a getRequestType callback:
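
A sketch of the idea; where the callback is supplied and which header carries the type are assumptions here, so check the linked page for the exact subscription parameter:

```csharp
// One queue, many message types: pick the .NET type from the incoming message's type header
var subscription = new RmqSubscription(
    new SubscriptionName("orders"),
    new ChannelName("orders.queue"),
    new RoutingKey("orders"),
    getRequestType: message => message.Header.Type switch
    {
        "io.example.order-placed"    => typeof(OrderPlaced),
        "io.example.order-cancelled" => typeof(OrderCancelled),
        _ => throw new ArgumentException($"Unexpected message type {message.Header.Type}")
    });
```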

However, the DataType Channel pattern (one type per channel) is simpler and recommended for most scenarios.

See: Dynamic Message Deserialization

How do I handle large messages?

Use the Claim Check pattern:

  1. Store large payload externally (S3, blob storage)

  2. Send only a reference (claim check) in the message

  3. Receiver retrieves payload using the claim check

With transforms:
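
A sketch of a mapper decorated with the claim-check transforms; the attribute parameter names and the mapper signature are from memory, so verify them against the Default Message Mappers page:

```csharp
public class OrderReportMapper : IAmAMessageMapper<OrderReport>
{
    [ClaimCheck(step: 0, thresholdInKb: 256)]      // offload bodies over ~256KB to the luggage store
    public Message MapToMessage(OrderReport request, Publication publication)
    {
        // serialize to JSON as you normally would; the transform replaces large bodies with a claim check
        var body = new MessageBody(JsonSerializer.Serialize(request));
        return new Message(new MessageHeader(request.Id, publication.Topic!, MessageType.MT_EVENT), body);
    }

    [RetrieveClaim(step: 0)]                       // download the payload again using the claim-check reference
    public OrderReport MapToRequest(Message message)
        => JsonSerializer.Deserialize<OrderReport>(message.Body.Value)!;
}
```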

See: Default Message Mappers, S3 Luggage Store


Handlers & Pipelines

How do I pass data between handlers in a pipeline?

Use the Request Context:
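
A minimal sketch of stashing data in the Context Bag for a later handler in the same pipeline (the handler and key names are illustrative):

```csharp
public class ValidateOrderHandler : RequestHandlerAsync<PlaceOrder>
{
    public override async Task<PlaceOrder> HandleAsync(PlaceOrder command, CancellationToken cancellationToken = default)
    {
        // Downstream handlers in this pipeline can read the same Context.Bag entry
        Context.Bag["Order.ValidatedAt"] = DateTimeOffset.UtcNow;
        return await base.HandleAsync(command, cancellationToken);
    }
}
```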

Use well-known keys from RequestContextBagNames when available.

See: Using the Context Bag

When should I use Agreement Dispatcher?

Use Agreement Dispatcher when you need dynamic handler selection based on request content or context:

Use cases:

  • Time-based routing (rules change over time)

  • Order journeys (different routes based on order contents)

  • Country-specific business logic

  • Versioning scenarios

  • State-based routing

Example:
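
Illustrative only - the registration call below is a hypothetical shape, not the exact Brighter API; the point is that a routing delegate inspects the request (and context) and returns the handler type to use:

```csharp
// Hypothetical: route PlaceOrder to a handler based on the order's destination country
registry.Register<PlaceOrder>((request, context) =>
{
    var order = (PlaceOrder)request;
    return order.ShipToCountry == "GB"
        ? typeof(DomesticOrderHandler)
        : typeof(InternationalOrderHandler);
});
```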

Note: You cannot use AutoFromAssemblies() with the Agreement Dispatcher - you must register handlers explicitly via the Handlers() method.

See: Agreement Dispatcher

How do I iterate over a list of requests to dispatch them?

All Command and Event messages derive from IRequest, via ICommand and IEvent respectively. So it may seem natural to create a collection of them, for example List<IRequest>, and then process a set of messages by enumerating over it.

When you try this, you will encounter the issue that we dispatch based on the concrete type of the Command or Event - in other words, the type you register via the SubscriberRegistry. Because CommandProcessor.Send() is actually CommandProcessor.Send<T>(), the compiler must be able to infer T as the concrete type from the call.

If you try this:
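
For example (the command types are illustrative):

```csharp
// T is inferred as ICommand, not the concrete type, so no handler is found at run time
var requests = new List<ICommand> { new CreateOrder(), new CancelOrder() };
foreach (var request in requests)
{
    commandProcessor.Send(request);   // effectively Send<ICommand>(request)
}
```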

Then you will get an ArgumentException: "No command handler was found for the typeof command Brighter.commandprocessor.ICommand - a command should have exactly one handler."

You don't see this issue if you pass in a variable whose static type is the concrete type, because the compiler can then infer the correct type parameter.

So what can you do if you must pass the base class to the Command Processor, for example because you are working from a list?

The workaround is to use the dynamic keyword. With dynamic, the call is bound at run time, so dispatch will pick up the concrete type that you need.
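
A minimal sketch of the workaround:

```csharp
// dynamic defers binding to run time, so Send<T>() is resolved against the concrete type of each item
foreach (dynamic request in requests)
{
    commandProcessor.Send(request);
}
```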


Resilience & Policies

How do I add retry logic to my handlers?

In V10, use Resilience Pipelines with Polly v8:

1. Configure the pipeline:
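
A sketch using Polly v8's dependency-injection registration; it assumes Brighter resolves the pipeline from the registry by its name:

```csharp
services.AddResiliencePipeline("order-handler-pipeline", builder =>
{
    builder
        .AddRetry(new RetryStrategyOptions
        {
            MaxRetryAttempts = 3,
            Delay = TimeSpan.FromMilliseconds(200),
            BackoffType = DelayBackoffType.Exponential
        })
        .AddTimeout(TimeSpan.FromSeconds(5));
});
```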

2. Apply to handler:
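
A sketch; the attribute name comes from the V10 docs, while the step parameter is an assumption:

```csharp
[UseResiliencePipeline("order-handler-pipeline", step: 1)]
public override PlaceOrder Handle(PlaceOrder command)
{
    // handler logic...
    return base.Handle(command);
}
```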

Note: [UsePolicy] and [TimeoutPolicy] are deprecated in V10. Migrate to [UseResiliencePipeline].

See: Resilience Pipelines, V10 Migration Guide

What resilience strategies are available?

Polly v8 provides these strategies (all available via Resilience Pipelines):

  • Retry - Automatic retry with configurable delays

  • Circuit Breaker - Prevent cascading failures

  • Timeout - Limit operation duration

  • Rate Limiter - Control request rate

  • Fallback - Alternative behavior on failure

  • Hedging - Send duplicate requests for low latency

See: Resilience Pipelines

What happened to TimeoutPolicy in V10?

[TimeoutPolicy] is deprecated in V10 and will be removed in V11.

Migrate to Resilience Pipeline:

Old (V9):
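
(Sketch; the attribute parameters are indicative.)

```csharp
[TimeoutPolicy(step: 1, milliseconds: 300)]
public override PlaceOrder Handle(PlaceOrder command) => base.Handle(command);
```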

New (V10):
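
(Sketch; register a Polly v8 pipeline containing a timeout strategy, then reference it by name.)

```csharp
// "timeout-300ms" is a named pipeline registered via AddResiliencePipeline elsewhere
[UseResiliencePipeline("timeout-300ms", step: 1)]
public override PlaceOrder Handle(PlaceOrder command) => base.Handle(command);
```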

See: V10 Migration Guide


Scheduling

What scheduler should I use in production?

For production, use:

  • Quartz.NET - Battle-tested, persistent, distributed, clustering support

  • Hangfire - Persistent, web dashboard, easy setup (⚠️ not strong-named)

  • AWS Scheduler - Serverless, cloud-native (AWS only)

  • Azure Scheduler - Managed service, built into Service Bus (Azure only, no reschedule support)

For development/testing:

  • InMemory Scheduler - Simple, fast, but not durable

Comparison:

  • Quartz: production-ready, persistent, clustering support, strong-named

  • Hangfire: production-ready, persistent, web dashboard, not strong-named

  • AWS Scheduler: production-ready, serverless (clustering N/A)

  • Azure Scheduler: production-ready, managed (clustering N/A), no reschedule support

  • InMemory: not persistent; development and testing only

See: Scheduler Support

How do I schedule a message for later?

Use SendAsync() or PostAsync() with a delay:
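
A sketch; the delay parameter name and the returned scheduler id are assumptions, so check the Scheduler Support page for the exact overloads:

```csharp
// Post the message to the external bus in 24 hours' time
var schedulerId = await commandProcessor.PostAsync(
    new SendPaymentReminder(orderId),
    delay: TimeSpan.FromHours(24));
```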

Note: Requires a configured scheduler (Quartz, Hangfire, AWS, Azure, or InMemory).

See: Scheduler Support

Can I cancel or reschedule a scheduled message?

Yes, using the scheduler ID returned when scheduling:

Cancel:
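
A hypothetical sketch (the scheduler method name is an assumption):

```csharp
// Cancel using the id returned when the message was scheduled
await scheduler.CancelAsync(schedulerId);
```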

Reschedule:
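
A hypothetical sketch (method name and parameters are assumptions):

```csharp
// Move the scheduled message to a new time
await scheduler.RescheduleAsync(schedulerId, TimeSpan.FromHours(48));
```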

Note: Azure Service Bus Scheduler does NOT support reschedule - you must cancel and create a new schedule.

See: Scheduler Support


Migration

How do I migrate from V9 to V10?

Follow the step-by-step V10 Migration Guide.

Key breaking changes:

  1. Nullable Reference Types - Enable in project, address compiler warnings

  2. Configuration Methods - UseExternalBus() → AddProducers(), AddServiceActivator() → AddConsumers()

  3. Message Pump - runAsync parameter → messagePumpType: MessagePumpType.Reactor/Proactor

  4. Polly - [TimeoutPolicy] deprecated, use [UseResiliencePipeline]

  5. Request Context - New properties added (PartitionKey, CustomHeaders, etc.)

Typical migration time: 1-4 hours

See: V10 Migration Guide

What changed with OpenTelemetry in V10?

V10 now uses OpenTelemetry Semantic Conventions instead of custom conventions.

Breaking changes:

  • Span names changed to follow OTel conventions

  • Attribute names follow paramore.brighter.* and messaging.* namespaces

  • W3C TraceContext propagation (traceparent/tracestate headers)

Benefits:

  • Better interoperability with other systems

  • Standard observability tooling works out-of-the-box

  • CloudEvents integration for trace propagation

See: Telemetry, V10 Migration Guide

Do I need to update my message mappers for V10?

Maybe not! V10 provides default mappers for JSON serialization.

If you have explicit JSON mappers, you can likely remove them and use the default mappers.

Keep custom mappers if:

  • Using non-JSON formats (Avro, ProtoBuf)

  • Using transform pipelines (ClaimCheck, Compression, Encryption)

  • Have custom serialization logic

See: Default Message Mappers, V10 Migration Guide


Performance & Concurrency

When should I use Reactor vs Proactor?

See Configuration section above.

How many message pumps should I configure per queue?

Start with 1 pump per queue and increase based on monitoring.

Considerations:

  • More pumps = higher throughput, more concurrent processing

  • But also = more database connections, more memory, more competing consumers

  • Depends on: message rate, processing time, available resources

Recommendations:

  • Start with 1 pump per queue

  • Monitor: queue depth, processing latency, CPU/memory usage

  • Scale up if: queues backing up, high latency, low resource utilization

  • Scale down if: low message rate, resource constraints

Configure with:
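
For example (the subscription type and the other parameters are illustrative; noOfPerformers sets the number of message pumps):

```csharp
var subscription = new RmqSubscription<OrderPlaced>(
    new SubscriptionName("order-placed"),
    new ChannelName("order.placed.queue"),
    new RoutingKey("order.placed"),
    noOfPerformers: 1);   // start with one pump; increase after monitoring
```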

Should I use competing consumers or a single consumer?

Competing Consumers (multiple instances):

  • ✅ Higher throughput

  • ✅ Better fault tolerance (one instance fails, others continue)

  • ✅ Easier to scale horizontally

  • ❌ Messages processed out-of-order (unless using partitions)

Single Consumer:

  • ✅ Guaranteed message ordering

  • ✅ Simpler reasoning about state

  • ❌ Lower throughput

  • ❌ Single point of failure

Recommendation: Use competing consumers with partition keys for ordering when needed.

See: Request Context for partition key configuration

