Dummy Prometheus Metrics Generator

Testing Prometheus scrapers, Grafana dashboards, and alerting pipelines requires realistic metrics data before a live service exists. This dummy Prometheus metrics generator produces properly formatted output in the Prometheus exposition format — complete with HELP comments, TYPE declarations, counters, gauges, histograms, and summaries — namespaced under whatever service name you provide. You can paste the output directly into a mock /metrics endpoint and start scraping within seconds.

The generated metrics cover common observability signals: HTTP request rates, latency histograms, error counters, database query durations, and system-level resources like CPU and memory. That breadth matters when you're building Grafana panels or PromQL queries and need data spread across realistic label combinations — status codes, methods, routes — rather than a single flat number.

For teams writing Alertmanager rules, having fake but plausible metric values lets you trigger threshold conditions on demand. Set a service name matching your actual service label, generate the metrics, and wire them into a Prometheus federation endpoint or a file-based scrape config. Your alert rules fire exactly as they would in production, without waiting for a real incident.

Developers onboarding to Prometheus also benefit from inspecting concrete exposition format output. Seeing how a histogram's _bucket, _count, and _sum lines relate to each other — with real label sets attached — is far more instructive than reading documentation alone. Use this generator as a teaching aid or a quick sanity check whenever you're unsure whether your parser handles a specific metric type correctly.
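
As a rough illustration of the kind of block the generator emits (the service name, metric names, and values here are hypothetical, not the tool's exact output):

```
# HELP payment_service_http_requests_total Total HTTP requests processed.
# TYPE payment_service_http_requests_total counter
payment_service_http_requests_total{method="GET",route="/checkout",status="200"} 1027
payment_service_http_requests_total{method="POST",route="/checkout",status="500"} 3
# HELP payment_service_memory_usage_bytes Resident memory in use.
# TYPE payment_service_memory_usage_bytes gauge
payment_service_memory_usage_bytes 52428800
```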

How to Use

  1. Enter your service name in the Service Name field, matching the name used in your Prometheus scrape config or app labels.
  2. Select the metric type category from the Metric Types dropdown — choose HTTP + system for a broad mix or a narrower option if you only need specific signals.
  3. Click the generate button to produce a complete block of Prometheus exposition format output in the results panel.
  4. Copy the output and paste it into a file, a mock server response, or directly into a test fixture used by your integration tests.
  5. Adjust numeric values in the copied text manually to simulate specific conditions like high error rates or latency spikes before scraping (see the sketch after this list).
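
For example, for step 5, a counter edited between two scrapes might look like this (hypothetical metric and values); Prometheus needs to see the value grow across scrapes for rate() to register an increase:

```
# served at the first scrape: a quiet error counter
payment_service_http_requests_total{method="POST",route="/checkout",status="500"} 3
# edited before the next scrape: a jump steep enough to trip a rate()-based alert
payment_service_http_requests_total{method="POST",route="/checkout",status="500"} 4800
```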

Use Cases

  • Seeding a mock /metrics endpoint for integration test pipelines
  • Building Grafana dashboard panels before a real service is deployed
  • Triggering Alertmanager threshold rules during rule development
  • Testing PromQL queries that require histogram bucket data
  • Validating custom Prometheus scrape configs and relabeling rules
  • Demonstrating Prometheus metric types during team onboarding sessions
  • Populating a Prometheus federation endpoint in a staging environment
  • Stress-testing a metrics ingestion pipeline with varied label cardinality

Tips

  • Serve the output with `npx serve` or Python's http.server and point a local Prometheus instance at it to get a real scrape cycle running in under two minutes.
  • When testing histogram-based alerts, remember that buckets are cumulative: manually edit the _bucket values so counts grow (or hold steady) as le increases. Flat counts across every le value mean all observations landed in the lowest bucket, which looks unrealistic once you run histogram_quantile() over rate() of the buckets.
  • Use a service name like `payment_service` rather than a generic word; namespaced metric names like payment_service_http_requests_total make Grafana variable templating work without regex edits.
  • Paste the same output into two scrape targets with different instance labels to simulate a multi-replica service and test Grafana panels that aggregate with sum() by (job) (see the scrape config sketch after these tips).
  • If you're building recording rules, generate metrics with the exact names your rules reference, then verify the rule output using promtool check rules before touching a live cluster.
  • For Alertmanager end-to-end tests, combine this generator with a webhook receiver like alertmanager-webhook-logger to confirm the full pipeline from scrape to notification fires correctly.
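
As a sketch of that two-target setup (job name, ports, and instance values are assumptions, not requirements), the prometheus.yml scrape job might look like:

```yaml
scrape_configs:
  - job_name: payment_service
    # http.server exposes the saved file at this path
    metrics_path: /metrics.txt
    static_configs:
      # Two copies of the same dummy output, distinguished only by their
      # instance labels, simulate a two-replica deployment.
      - targets: ["localhost:9100"]
        labels:
          instance: payment-1
      - targets: ["localhost:9101"]
        labels:
          instance: payment-2
```

From there, `promtool check rules rules.yml` validates any recording or alerting rules you write against these metric names before they reach a live cluster.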

FAQ

What is the Prometheus exposition format?

It's a plain-text format: each metric block begins with a # HELP line describing the metric and a # TYPE line declaring its type (counter, gauge, histogram, summary), followed by one or more sample lines with optional labels in curly braces, a numeric value, and an optional timestamp. Prometheus scrapes this format from a /metrics HTTP endpoint.
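
Concretely, one gauge block with labels and an (optional) millisecond timestamp could look like this; the metric name is made up for illustration:

```
# HELP app_queue_depth Jobs currently waiting in the queue.
# TYPE app_queue_depth gauge
app_queue_depth{queue="emails"} 17 1712345678000
```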

What is the difference between a counter and a gauge in Prometheus?

A counter only ever increases (or resets to zero on restart), making it ideal for request counts or error totals. A gauge can rise and fall freely, so it suits things like active connections, memory usage, or queue depth. When writing PromQL, use rate() on counters and direct comparison on gauges.
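
For instance (metric names are hypothetical), the PromQL treatment of each type typically differs like this:

```
# counter: wrap in rate() to get the per-second increase
rate(payment_service_http_requests_total{status="500"}[5m]) > 0.5

# gauge: compare the current value directly
payment_service_memory_usage_bytes > 1.5e9
```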

How do I use this output as a real /metrics endpoint?

Copy the generated text, save it to a file (e.g., metrics.txt), then serve it with a simple HTTP server, for example `python3 -m http.server 9100` run from the directory containing the file. Configure Prometheus with a scrape job targeting localhost:9100, with metrics_path set to /metrics.txt, and it will ingest the fake data immediately.
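
A minimal scrape job for that setup (the job name is arbitrary) might be:

```yaml
scrape_configs:
  - job_name: dummy_metrics
    # http.server serves the saved file at this path
    metrics_path: /metrics.txt
    static_configs:
      - targets: ["localhost:9100"]
```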

What does a Prometheus histogram actually look like in text format?

A histogram produces multiple lines: one _bucket line per le (less-than-or-equal) boundary label, plus a _count and a _sum line. For example, http_request_duration_seconds_bucket{le="0.1"} 240 means 240 requests completed in under 100 ms. Buckets are cumulative, and the count in the le="+Inf" bucket must equal _count.
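
A full histogram block (values invented for illustration) makes that structure visible: counts never decrease as le grows, and the le="+Inf" bucket matches _count:

```
# HELP http_request_duration_seconds Request latency distribution.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.1"} 240
http_request_duration_seconds_bucket{le="0.5"} 310
http_request_duration_seconds_bucket{le="1"} 330
http_request_duration_seconds_bucket{le="+Inf"} 333
http_request_duration_seconds_count 333
http_request_duration_seconds_sum 48.7
```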

Can I use these metrics to test Alertmanager rules?

Yes. Write your alert rule using the metric names in the generated output, then scrape the mock endpoint. Keep in mind that a counter that never changes between scrapes yields a rate() of zero, so bump the numeric values between scrapes; for instance, keep increasing an error counter until the rate() expression crosses your alert condition, then watch the alert move from pending to firing.
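
A rules-file sketch along those lines (metric name and threshold are placeholders, not values the generator guarantees):

```yaml
groups:
  - name: dummy-alerts
    rules:
      - alert: HighErrorRate
        # Fires once the faked 5xx counter grows fast enough between scrapes;
        # a static value produces rate() == 0 and the alert stays inactive.
        expr: rate(payment_service_http_requests_total{status="500"}[5m]) > 0.5
        for: 2m
        labels:
          severity: page
```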

How do I match the service name to my actual Prometheus job label?

Set the Service Name input to the same value as your scrape config's job_name field, or as the job label you set in a static_configs labels block. Prometheus attaches job_name as the job label automatically, so metric names and label sets in dashboards and alert rules will match your production naming without changes.

What metric types does the HTTP + system option generate?

The HTTP + system option produces HTTP-layer metrics — request counts by method and status code, latency histograms by route — plus system metrics like CPU usage, memory consumption, and open file descriptors. This combination mirrors what a typical instrumented microservice exposes and covers most Grafana dashboard template variables.

Is the generated output valid enough for a Prometheus client library to parse?

The output follows the official Prometheus exposition format specification, so any compliant parser should handle it without errors, including the Python client library's text_string_to_metric_families function and the Go expfmt text parser. If you spot a parse failure, check that your parser handles both histogram and summary types, as those have multi-line structures.
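
A quick sanity check with the Python client (assumes prometheus_client is installed and the generated output was saved as metrics.txt):

```python
from prometheus_client.parser import text_string_to_metric_families

# Load the exposition text saved from the generator.
with open("metrics.txt") as f:
    text = f.read()

# Each family exposes its name, declared type, and parsed samples.
for family in text_string_to_metric_families(text):
    print(family.name, family.type, len(family.samples))
```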