Mock Event Sourcing Payload Generator
The mock event sourcing payload generator creates realistic, fully structured domain event objects for testing event-driven systems built on event sourcing and CQRS patterns. Each generated payload includes an event ID, aggregate ID, version number, ISO timestamp, event type, and metadata fields such as correlation ID and causation ID: everything a real domain event needs to flow through your system without modification. Stop hand-crafting JSON fixtures or writing custom factories just to get a test suite running.

Choose from five domains (ecommerce, user management, payments, inventory, or messaging) and the generator produces contextually accurate event types for that domain. An ecommerce run might yield OrderPlaced, ItemShipped, and CartAbandoned events; a payments run produces PaymentAuthorized, RefundInitiated, and ChargebackRaised. The domain context means your test data reflects the actual event vocabulary your services would encounter in production.

Beyond unit tests, these payloads are immediately useful for seeding Kafka topics, populating an event store such as EventStoreDB or Marten, or demoing a new microservice consumer to stakeholders. Because every field follows standard naming conventions, you can paste the output directly into Postman, a Kafka producer CLI, or a test fixture file without reformatting. The count control lets you generate anywhere from a single event up to a batch, simulating realistic aggregate history sequences or burst load scenarios. Combined with the domain selector, this makes the generator practical for both exploratory testing during development and structured integration testing in CI pipelines.
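A generated payload might look like the following sketch. The field names (`eventId`, `aggregateId`, `metadata.correlationId`, and so on) are illustrative assumptions based on the description above, not the tool's guaranteed output schema:

```python
import json
import uuid
from datetime import datetime, timezone

def make_mock_event(event_type: str, aggregate_id: str, version: int) -> dict:
    """Build one mock domain event with the fields described above.

    Field names here are illustrative assumptions; check the generator's
    actual output for exact casing.
    """
    return {
        "eventId": str(uuid.uuid4()),
        "aggregateId": aggregate_id,
        "version": version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "eventType": event_type,
        "payload": {},  # domain-specific fields go here
        "metadata": {
            "correlationId": str(uuid.uuid4()),
            "causationId": None,  # set when chaining events
        },
    }

event = make_mock_event("OrderPlaced", str(uuid.uuid4()), 1)
print(json.dumps(event, indent=2))
```

Because the structure is plain JSON, the same shape drops into a Kafka message value, an event store append call, or a test fixture file unchanged.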
How to Use
- Select the domain that matches your system — ecommerce, payments, inventory, user management, or messaging.
- Set the event count to the batch size you need, such as 6 for a quick test or 20 for a load scenario.
- Click Generate to produce a list of fully structured domain event JSON payloads.
- Copy individual payloads or the full batch and paste into your test fixture, broker producer, or event store client.
- Adjust the domain or count and regenerate to create varied event sequences for different test scenarios.
Use Cases
- Seeding Kafka or RabbitMQ topics with domain-specific test events
- Populating EventStoreDB or Marten with sample aggregate histories
- Writing integration tests for event consumer microservices
- Demonstrating CQRS read-model projections with realistic input data
- Load-testing an event store by generating bulk payloads quickly
- Teaching event sourcing patterns with concrete, readable JSON examples
- Mocking upstream event streams during local microservice development
- Bootstrapping test fixtures for saga or process manager implementations
Tips
- Generate 10+ events in the payments domain, then group by aggregate ID to simulate realistic multi-step transaction histories.
- After generating, standardize the aggregate ID across 5–6 events to test projection handlers that rebuild state from a single stream.
- Use the causation ID field to manually chain events: set one event's causation ID to the previous event's ID to represent a saga step.
- For Kafka testing, generate events in the ecommerce and inventory domains separately to simulate two upstream topics feeding one consumer.
- Combine outputs from multiple domain selections to test event routers or dispatchers that must handle heterogeneous event types.
- When teaching CQRS, generate 3 user-management events and walk through rebuilding the read model manually; the realistic field names make the lesson concrete.
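The causation-chaining tip above takes only a few lines of Python to script. This sketch assumes each event carries an `eventId` and a `metadata.causationId` field, which are naming assumptions rather than the tool's documented schema:

```python
def chain_causation(events: list[dict]) -> list[dict]:
    """Link each event's causationId to the previous event's eventId,
    turning an independent batch into a saga-style causal chain."""
    for prev, curr in zip(events, events[1:]):
        curr["metadata"]["causationId"] = prev["eventId"]
    return events

batch = [
    {"eventId": f"evt-{i}", "metadata": {"causationId": None}}
    for i in range(3)
]
chain_causation(batch)
# batch[1] is now caused by batch[0], and batch[2] by batch[1]
```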
FAQ
What is event sourcing and why does it need structured payloads?
Event sourcing stores application state as an ordered, immutable log of domain events rather than overwriting database rows. Each event must carry enough context — aggregate ID, version, timestamp, and a typed payload — for any consumer to reconstruct state or trigger downstream reactions. Poorly structured events break consumers, so realistic test data matters from day one.
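State reconstruction is just a left fold over the ordered stream. A minimal sketch, with event shapes invented for illustration:

```python
def rebuild_order(events: list[dict]) -> dict:
    """Fold an ordered event stream into the current aggregate state."""
    state = {"status": "new", "items": []}
    for event in events:
        if event["eventType"] == "OrderPlaced":
            state["status"] = "placed"
            state["items"] = event["payload"]["items"]
        elif event["eventType"] == "ItemShipped":
            state["status"] = "shipped"
    return state

history = [
    {"eventType": "OrderPlaced", "payload": {"items": ["sku-1"]}},
    {"eventType": "ItemShipped", "payload": {}},
]
print(rebuild_order(history))  # {'status': 'shipped', 'items': ['sku-1']}
```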
What is a correlation ID in event metadata?
A correlation ID is a shared identifier attached to every event spawned by the same originating request, even across service boundaries. It enables distributed tracing tools like Jaeger or Zipkin to group all related spans. A causation ID, also included in these payloads, specifically identifies the parent event or command that directly caused this one.
Can I publish these generated payloads directly to Kafka?
Yes. The payloads are standard JSON and require no transformation. Copy the output into a Kafka producer CLI with `kafka-console-producer`, paste into Conduktor or Redpanda Console, or deserialize them in a test using your existing Kafka client. The field names follow common conventions, so most consumer schemas will map without custom configuration.
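`kafka-console-producer` reads one message per line from stdin, so a batch of payloads first needs flattening to newline-delimited JSON. A small stdlib-only sketch:

```python
import json

def to_ndjson(events: list[dict]) -> str:
    """Serialize a batch to newline-delimited JSON, one event per line,
    in the one-message-per-line shape kafka-console-producer expects."""
    return "\n".join(json.dumps(e, separators=(",", ":")) for e in events)

batch = [{"eventType": "OrderPlaced"}, {"eventType": "ItemShipped"}]
ndjson = to_ndjson(batch)
```

Piping the result, e.g. `python flatten.py | kafka-console-producer --topic orders --bootstrap-server localhost:9092` (script name and topic are placeholders), publishes each event as its own message.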
What domains are available and what event types does each produce?
The five domains are ecommerce (OrderPlaced, CartAbandoned, ItemShipped), user management (UserRegistered, PasswordChanged, AccountSuspended), payments (PaymentAuthorized, RefundInitiated, ChargebackRaised), inventory (StockReserved, WarehouseTransferred, ItemRestocked), and messaging (MessageSent, ConversationClosed, NotificationDelivered). Choosing the right domain ensures event types match your service's actual vocabulary.
What does the version number in each event represent?
The version field tracks the optimistic concurrency position of an event within its aggregate stream. Event 1 on an order aggregate is version 1; the tenth event is version 10. Consumers and projections use this to detect gaps, enforce ordering, and prevent concurrency conflicts when writing back to an event store.
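A consumer can use this field to detect gaps or dropped events before applying a stream. A minimal check, assuming versions start at 1 and increment by one (and that the stream is non-empty):

```python
def find_version_gaps(versions: list[int]) -> list[int]:
    """Return the versions missing from a non-empty stream that
    should run 1, 2, 3, ... up to the highest version seen."""
    expected = set(range(1, max(versions) + 1))
    return sorted(expected - set(versions))

print(find_version_gaps([1, 2, 4, 5]))  # [3]
```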
How do I use these payloads to test an event store implementation?
Generate a batch of 10–20 events in the same domain, then write them to your event store using the aggregate ID as the stream key. Verify that reading the stream back returns events in order with matching versions and timestamps. Use mismatched versions deliberately to test optimistic concurrency conflict handling in your append logic.
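The concurrency step above can be exercised against an in-memory stand-in before wiring up a real store. This is a teaching sketch, not a production event store; the expected-version convention (version equals the number of events already in the stream) is one common choice:

```python
class InMemoryEventStore:
    """Minimal append-only store keyed by stream (aggregate) ID,
    with an optimistic-concurrency check on append."""

    def __init__(self) -> None:
        self.streams: dict[str, list[dict]] = {}

    def append(self, stream_id: str, event: dict, expected_version: int) -> None:
        stream = self.streams.setdefault(stream_id, [])
        if len(stream) != expected_version:
            raise RuntimeError(
                f"conflict: stream at version {len(stream)}, "
                f"expected {expected_version}"
            )
        stream.append(event)

store = InMemoryEventStore()
store.append("order-1", {"eventType": "OrderPlaced"}, expected_version=0)
store.append("order-1", {"eventType": "ItemShipped"}, expected_version=1)
# appending again with expected_version=1 now raises a conflict
```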
Are aggregate IDs consistent across multiple events in the same batch?
Each generated event may carry a unique aggregate ID unless you manually standardize them. For testing a single aggregate's history, copy one aggregate ID from the first event and replace it across the batch before ingesting — this simulates a realistic sequence of commands applied to one entity.
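The copy-and-replace step is easy to script. This sketch assumes the fields are named `aggregateId` and `version`, which may differ in the actual output:

```python
def unify_aggregate(events: list[dict]) -> list[dict]:
    """Rewrite every event to share the first event's aggregateId and
    renumber versions 1..n, simulating one aggregate's history."""
    agg_id = events[0]["aggregateId"]
    for i, event in enumerate(events, start=1):
        event["aggregateId"] = agg_id
        event["version"] = i
    return events

batch = [{"aggregateId": f"agg-{i}", "version": 1} for i in range(3)]
unify_aggregate(batch)
# all three events now share aggregateId "agg-0", with versions 1, 2, 3
```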
Can these payloads be used with CQRS read-model projections?
Absolutely. Feed the generated events into your projection handler in sequence and verify that your read model updates correctly for each event type. Because the domain payloads contain realistic field values, you can also test display logic — formatting an order total, rendering a payment status — without building a full write-side pipeline first.
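A read-model projection over these payloads reduces to a dispatch on event type. A sketch for a payments read model, with field names and status values assumed for illustration:

```python
def project_payment(read_model: dict, event: dict) -> dict:
    """Apply one payments-domain event to a per-payment read model,
    returning the updated model (unknown event types are ignored)."""
    handlers = {
        "PaymentAuthorized": lambda m: {**m, "status": "authorized"},
        "RefundInitiated": lambda m: {**m, "status": "refunding"},
        "ChargebackRaised": lambda m: {**m, "status": "disputed"},
    }
    handler = handlers.get(event["eventType"])
    return handler(read_model) if handler else read_model

model = {"status": "pending"}
for event in [{"eventType": "PaymentAuthorized"}, {"eventType": "RefundInitiated"}]:
    model = project_payment(model, event)
print(model)  # {'status': 'refunding'}
```

Running a generated batch through a function like this, event by event, verifies both the dispatch logic and the final read-model state without any write-side infrastructure.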