
Dummy OpenTelemetry Trace Generator

Realistic dummy OpenTelemetry trace data is essential when you need to test observability pipelines, stress-test trace collectors, or demo distributed tracing dashboards without touching production systems. This generator produces mock spans with all the fields that matter: trace IDs, span IDs, parent-child relationships, operation names, timestamps, durations, HTTP status codes, and span attributes covering database, cache, queue, and HTTP operations. You control the service name and span count, so you can simulate anything from a single microservice call to a deep, multi-hop request chain.

Building out a new Jaeger or Zipkin integration? Feed it generated payloads first. You'll surface mapping errors and missing field assumptions long before real traffic hits the collector. The same applies to OpenTelemetry Collector pipelines: processors, exporters, and sampling rules all behave differently under load, and synthetic spans let you reproduce edge cases on demand.

For teams onboarding engineers to distributed tracing concepts, fabricated trace trees are far more instructive than abstract diagrams. A generated checkout-service trace showing a parent HTTP span, a child database query, and a sibling cache lookup makes the parent-child span model tangible in seconds.

The output follows OpenTelemetry semantic conventions for attributes, so it integrates cleanly with OTLP-compatible tooling and SDKs. Whether you're populating a demo environment, writing a trace parser, or validating alert rules against specific latency patterns, this generator removes the dependency on live instrumented services.
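To make the span shape concrete, here is a minimal sketch of how a mock span like the ones described above can be built. The field names follow the OTLP JSON encoding and OpenTelemetry semantic conventions; the helper name `make_span` and the `checkout-service` name are illustrative, not part of the tool.

```python
import secrets
import time

SERVICE_NAME = "checkout-service"  # illustrative service name

def make_span(name, parent_span_id=None, trace_id=None):
    """Build one mock span as a plain dict in OTLP-like JSON shape."""
    now_ns = time.time_ns()
    return {
        "traceId": trace_id or secrets.token_hex(16),  # 16 bytes, hex-encoded
        "spanId": secrets.token_hex(8),                # 8 bytes, hex-encoded
        "parentSpanId": parent_span_id or "",
        "name": name,
        "startTimeUnixNano": str(now_ns),
        "endTimeUnixNano": str(now_ns + 12_000_000),   # 12 ms duration
        "attributes": [
            {"key": "service.name", "value": {"stringValue": SERVICE_NAME}},
            {"key": "http.status_code", "value": {"intValue": "200"}},
        ],
    }

# A parent HTTP span with one child database span in the same trace:
root = make_span("GET /checkout")
child = make_span("SELECT orders", parent_span_id=root["spanId"],
                  trace_id=root["traceId"])
```

Note that the child reuses the parent's trace ID and records the parent's span ID, which is exactly the linkage tracing backends use to assemble a trace tree.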

How to Use

  1. Set the Number of Spans to match the depth of the trace tree you want to simulate (5-15 is realistic for a single request).
  2. Enter your target service name exactly as it appears in your observability platform to ensure dashboard filters align.
  3. Click Generate to produce a set of mock spans with trace IDs, parent-child links, and varied operation types.
  4. Copy the output and paste it into your collector endpoint, test fixture file, or SDK integration under test.
  5. Regenerate as many times as needed — each run produces new unique trace and span IDs for isolated test cases.

Use Cases

  • Stress-testing an OpenTelemetry Collector pipeline before production rollout
  • Populating a Jaeger UI demo with realistic multi-service trace trees
  • Reproducing high-latency spans to test P99 alerting rules in Grafana
  • Writing unit tests for a custom trace exporter or span processor
  • Teaching engineers the parent-child span model with concrete examples
  • Validating Zipkin trace ingestion without deploying instrumented code
  • Generating fixture data for an observability dashboard screenshot or docs
  • Prototyping a trace search UI against predictable, varied span attributes

Tips

  • Generate two batches with different service names, then import both to verify your tracing backend renders a multi-service dependency graph correctly.
  • Keep spanCount below 20 when testing UI rendering; very deep traces (50+ spans) are better suited for storage and ingestion benchmarks.
  • Name your service after a real planned microservice so any dashboard templates you build during testing are production-ready from the start.
  • Look for spans with 4xx and 5xx status codes in the output — use those specifically to validate that your alerting rules fire on error traces.
  • Pair the output with a CLI tool such as otel-cli, or a short curl script, to push spans to a local Collector without writing application code.
  • If you need reproducible IDs for snapshot tests, copy a single generated output and commit it as a fixture rather than regenerating each test run.

FAQ

What is OpenTelemetry and why is it used for tracing?

OpenTelemetry is a CNCF open-source framework that standardizes how applications emit traces, metrics, and logs. For distributed tracing, it defines a common data model and wire format so that any compliant backend — Jaeger, Zipkin, Tempo, Honeycomb — can ingest spans without vendor-specific instrumentation. This makes switching or comparing backends much simpler.

What is a trace span in OpenTelemetry?

A span represents one unit of work: it carries a trace ID (shared by all spans in a request), a unique span ID, an optional parent span ID, a start timestamp, a duration, a status code, and key-value attributes. Multiple related spans form a trace tree that shows how a single request flowed through your services.
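The parent-child linkage described above is what lets a flat list of spans be reassembled into a tree. A hedged sketch, using illustrative span dicts with the fields named in the answer (shared trace context, unique span ID, optional parent span ID):

```python
# Flat spans as a backend would receive them; IDs are illustrative.
spans = [
    {"spanId": "a1", "parentSpanId": None, "name": "GET /checkout"},
    {"spanId": "b2", "parentSpanId": "a1", "name": "SELECT orders"},
    {"spanId": "c3", "parentSpanId": "a1", "name": "GET cache:cart"},
]

def children_of(parent_id):
    return [s for s in spans if s["parentSpanId"] == parent_id]

def render(parent_id=None, depth=0):
    """Walk the parent links and indent children under their parent."""
    lines = []
    for span in children_of(parent_id):
        lines.append("  " * depth + span["name"])
        lines.extend(render(span["spanId"], depth + 1))
    return lines

tree = render()
# tree is ["GET /checkout", "  SELECT orders", "  GET cache:cart"]
```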

Are the generated spans compatible with Jaeger or Zipkin?

The spans follow OpenTelemetry semantic conventions and OTLP-style structure, making them directly useful for testing OTLP-compatible backends. For Zipkin's native JSON format or Jaeger's legacy Thrift format, you may need a lightweight mapping step, or you can run the spans through an OpenTelemetry Collector configured with the corresponding exporter.
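For the lightweight mapping case, here is a minimal, illustrative sketch of converting an OTLP-style span dict to a Zipkin v2 JSON span. It assumes the OTLP JSON field names shown elsewhere on this page and covers only the basic fields; it is not a complete converter.

```python
def to_zipkin(span, service_name):
    """Map an OTLP-style span dict to a Zipkin v2 JSON span (sketch)."""
    start_ns = int(span["startTimeUnixNano"])
    end_ns = int(span["endTimeUnixNano"])
    zipkin = {
        "traceId": span["traceId"],
        "id": span["spanId"],
        "name": span["name"],
        "timestamp": start_ns // 1000,            # Zipkin uses microseconds
        "duration": (end_ns - start_ns) // 1000,
        "localEndpoint": {"serviceName": service_name},
        # Flatten OTLP attribute objects into Zipkin string tags:
        "tags": {a["key"]: str(list(a["value"].values())[0])
                 for a in span.get("attributes", [])},
    }
    if span.get("parentSpanId"):                  # root spans omit parentId
        zipkin["parentId"] = span["parentSpanId"]
    return zipkin

sample = {
    "traceId": "ab" * 16, "spanId": "cd" * 8, "parentSpanId": "",
    "name": "GET /checkout",
    "startTimeUnixNano": "1000000", "endTimeUnixNano": "13000000",
    "attributes": [{"key": "http.status_code", "value": {"intValue": "200"}}],
}
zipkin_span = to_zipkin(sample, "checkout-service")
```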

How many spans should I generate for realistic testing?

A typical microservice request produces 5-15 spans: one root HTTP span, a few child DB or cache calls, and occasional queue or external HTTP spans. Set spanCount between 5 and 15 to mimic a single trace. Go higher — 50 to 100 — to simulate batch imports or to load-test a trace storage backend.

Can I use a custom service name for these traces?

Yes. The Service Name field sets the service.name attribute on every generated span, which is how tracing backends group spans into services on their topology maps. Use the exact name you plan to deploy — such as payment-service or api-gateway — so dashboard filters and alert rules match from day one.

How do I send generated spans to a real OpenTelemetry Collector?

Copy the generated span payload, then POST it to your Collector's OTLP HTTP endpoint (default port 4318) with Content-Type: application/json. Alternatively, pipe the output through otel-cli or write a short script using any OpenTelemetry SDK's SpanExporter to forward the spans directly.
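The POST described above can be done with nothing but the standard library. A sketch, assuming the OTLP/HTTP defaults (port 4318, path /v1/traces); `build_request` and `send` are illustrative helper names, and the empty `resourceSpans` payload is a placeholder for the generator's output.

```python
import json
import urllib.request

OTLP_ENDPOINT = "http://localhost:4318/v1/traces"  # OTLP/HTTP default

def build_request(payload: dict) -> urllib.request.Request:
    """Wrap an OTLP JSON payload in a POST request for the Collector."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        OTLP_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def send(payload: dict) -> int:
    """Requires a running Collector; returns the HTTP status code."""
    with urllib.request.urlopen(build_request(payload)) as resp:
        return resp.status

# Placeholder payload — paste the generator's output here before calling send():
req = build_request({"resourceSpans": []})
```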

Do the spans include attributes for HTTP, database, and cache operations?

Yes. The generator covers the most common span types: HTTP client and server spans with method, URL, and status code; database spans with db.system, db.statement, and db.name; cache spans with net.peer.name; and message queue spans. This variety helps you test attribute-based filtering and sampling rules.
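As a sketch of the attribute-based filtering mentioned above, the snippet below picks out database spans by the presence of the db.system attribute key. The span dicts are illustrative and follow the OTLP JSON attribute encoding.

```python
def attr_keys(span):
    """Collect the attribute keys present on a span."""
    return {a["key"] for a in span.get("attributes", [])}

def db_spans(spans):
    """Keep only spans carrying the db.system attribute."""
    return [s for s in spans if "db.system" in attr_keys(s)]

spans = [
    {"name": "GET /orders",
     "attributes": [{"key": "http.method", "value": {"stringValue": "GET"}}]},
    {"name": "SELECT orders",
     "attributes": [{"key": "db.system",
                     "value": {"stringValue": "postgresql"}}]},
]
matched = [s["name"] for s in db_spans(spans)]  # → ["SELECT orders"]
```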

Can I use these traces to test sampling rules?

Generated spans are well-suited for validating tail-based and probabilistic sampling configurations. Because you control span count and service name, you can create predictable trace volumes and check that your sampler retains or drops spans at the expected rate. Adjust the span count to simulate burst traffic scenarios.
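A minimal sketch of the probabilistic side of this: deriving the sampling decision from the trace ID keeps the decision consistent for every span in the same trace, which is the property you would verify against generated data. The function name and the fixed sampling ratio are illustrative, not a specific sampler's API.

```python
def keep_trace(trace_id_hex: str, ratio: float) -> bool:
    """Decide sampling from the trace ID so all spans of a trace agree."""
    # Map the first 8 hex chars (4 bytes) of the trace ID onto [0, 1].
    bucket = int(trace_id_hex[:8], 16) / 0xFFFFFFFF
    return bucket < ratio

# With ratio=0.5, a trace ID starting with low bytes is kept and one
# starting with high bytes is dropped — and every span sharing that
# trace ID gets the identical decision.
```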