
Mock Feature Flag Config Generator

The mock feature flag config generator produces realistic fake feature toggle configuration objects for testing feature management systems, CI/CD pipelines, and flag evaluation logic without touching production data. Whether you're building a custom toggle framework or integrating a third-party platform, you need varied, believable flag configs to test edge cases reliably. This tool generates flags complete with rollout percentages, per-environment overrides, enabled/disabled states, and creation timestamps across JSON, YAML, and ENV formats.

Feature flag testing is where many teams cut corners, relying on a handful of hand-crafted fixtures that don't cover the full range of real-world states. A flag can be 0% rolled out in staging, 50% in canary, and 100% in production simultaneously. Without realistic test data covering those permutations, your evaluation logic has blind spots. This generator fills those gaps by producing diverse, structurally valid configs at scale.

Engineers integrating tools like LaunchDarkly, Unleash, Flagsmith, or homegrown toggle systems often need seed data for both unit tests and UI development. Copying production flag configs raises privacy and compliance concerns. Generated mock data sidesteps that entirely: use it freely in CI pipelines, Storybook stories, or database seed scripts.

Set the flag count to match your test suite's needs and choose the output format that matches your config loading strategy. Increase the count for stress-testing bulk flag evaluation, or keep it small for targeted unit test fixtures. The output is copy-paste ready.

How to Use

  1. Set the Number of Flags input to match how many flag objects your test scenario or UI requires.
  2. Select your Output Format — JSON for API mocking, YAML for Kubernetes configs, or ENV for serverless deployments.
  3. Click Generate to produce a complete, randomized feature flag configuration object.
  4. Copy the output and paste it into your fixture file, seed script, or test setup block.
  5. Regenerate as many times as needed to get different rollout percentages and flag states for broader test coverage.
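The steps above produce objects along these lines. Here is a minimal Python sketch of how such a mock flag could be built; the field names (`key`, `enabled`, `rollout`, `environments`, `created_at`) are illustrative assumptions, not the tool's exact schema.

```python
import json
import random
import string
from datetime import datetime, timezone

def mock_flag():
    """Build one illustrative mock feature flag object."""
    key = "flag-" + "".join(random.choices(string.ascii_lowercase, k=6))
    return {
        "key": key,
        "name": key.replace("-", " ").title(),
        "enabled": random.choice([True, False]),
        "rollout": random.choice([0, 25, 50, 75, 100]),  # mix of off, partial, full
        "environments": {
            env: {"enabled": random.choice([True, False])}
            for env in ("staging", "canary", "production")
        },
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

config = {"flags": [mock_flag() for _ in range(5)]}
print(json.dumps(config, indent=2))
```

Regenerating simply means rebuilding the list, which is why each run yields different rollout values and states.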

Use Cases

  • Seeding a feature flag admin dashboard UI with realistic sample data
  • Generating fixture files for unit testing flag evaluation logic
  • Mocking LaunchDarkly or Unleash configs in integration test suites
  • Stress-testing bulk flag resolution with 50+ randomly generated flags
  • Populating a local Flagsmith or Flipt instance during development setup
  • Creating YAML fixtures for Kubernetes feature flag ConfigMap testing
  • Building Storybook stories that depend on feature toggle state variations
  • Testing ENV-based feature flag parsing in serverless function deployments

Tips

  • Generate the same flag count twice and diff the outputs to build test cases that verify your system handles config changes correctly.
  • For ENV format output, pipe it directly into a `.env.test` file; many test runners pick it up through a dotenv integration with little extra setup.
  • When testing partial rollout logic, look for flags with 1-99% rollout values in the output; these are the edge cases most evaluation bugs hide in.
  • Combine JSON output with a schema validator like Ajv or Zod in your test suite to confirm your flag parser handles every generated shape without errors.
  • Generate 20+ flags when building a flag management UI — you need enough data to trigger pagination, overflow states, and search filtering realistically.
  • For LaunchDarkly SDK mock testing, use the JSON output as the response body in an MSW or nock interceptor to simulate the flag evaluation API.
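Following the `.env.test` tip above, a small Python sketch shows how ENV-format flags can be parsed back into booleans. The `FEATURE_*=true` line convention is an assumption for illustration; match it to whatever the generator actually emits.

```python
def parse_env_flags(text):
    """Parse ENV-format flag lines into a dict of booleans.

    Assumes lines shaped like FEATURE_NEW_CHECKOUT=true (an
    illustrative convention). Blank lines and comments are skipped.
    """
    flags = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        name, _, value = line.partition("=")
        flags[name.strip()] = value.strip().lower() in ("1", "true", "on", "yes")
    return flags

sample = """
# generated mock flags
FEATURE_NEW_CHECKOUT=true
FEATURE_DARK_MODE=false
"""
print(parse_env_flags(sample))
# → {'FEATURE_NEW_CHECKOUT': True, 'FEATURE_DARK_MODE': False}
```

A forgiving truthy-value check like this keeps tests from breaking when the generator emits `1` or `on` instead of `true`.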

FAQ

How do I generate mock feature flag data for unit tests?

Set the flag count to match the number of flags your test scenario requires, select JSON or YAML, and click Generate. Copy the output directly into a fixture file or paste it into your test setup block. For table-driven tests, generate a larger batch and slice out the specific flag states you need to cover enabled, disabled, and partial rollout cases.
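Slicing a generated batch into the three states a table-driven test needs can be sketched like this; the `enabled`/`rollout` field names are assumptions based on the fields the generator describes.

```python
def bucket_by_state(flags):
    """Group flag dicts into the states a table-driven test should cover."""
    buckets = {"enabled": [], "disabled": [], "partial": []}
    for flag in flags:
        if 0 < flag["rollout"] < 100:
            buckets["partial"].append(flag)  # the edge cases worth extra coverage
        elif flag["enabled"]:
            buckets["enabled"].append(flag)
        else:
            buckets["disabled"].append(flag)
    return buckets

flags = [
    {"key": "a", "enabled": True, "rollout": 100},
    {"key": "b", "enabled": False, "rollout": 0},
    {"key": "c", "enabled": True, "rollout": 50},
]
cases = bucket_by_state(flags)
```

Each bucket then feeds one row group of the table-driven test, so every state is exercised at least once.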

What formats does the feature flag config generator support?

The generator outputs JSON, YAML, and ENV formats. JSON suits JavaScript and Python applications loading configs from files or APIs. YAML is ideal for Kubernetes ConfigMaps and Helm chart values. ENV format matches serverless and twelve-factor app patterns where feature flags are read from process environment variables.
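As a sketch of how the same flag maps onto the ENV format, here is one way to flatten a flag object into environment-variable lines. The `FEATURE_` prefix and the `_ENABLED`/`_ROLLOUT` suffixes are illustrative assumptions, not the tool's guaranteed naming scheme.

```python
def flag_to_env_lines(flag):
    """Flatten one flag dict into ENV-style KEY=value lines."""
    # Derive a shell-safe variable name from the flag key.
    prefix = "FEATURE_" + flag["key"].upper().replace("-", "_")
    return [
        f"{prefix}_ENABLED={'true' if flag['enabled'] else 'false'}",
        f"{prefix}_ROLLOUT={flag['rollout']}",
    ]

lines = flag_to_env_lines({"key": "new-checkout", "enabled": True, "rollout": 50})
print("\n".join(lines))
# FEATURE_NEW_CHECKOUT_ENABLED=true
# FEATURE_NEW_CHECKOUT_ROLLOUT=50
```

This is the twelve-factor pattern the ENV format targets: one flat variable per property, readable from the process environment.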

Can I use this to mock LaunchDarkly or Unleash flag configs?

Yes. The JSON structure mirrors the flag config objects returned by popular feature management platforms. It includes fields like flag key, enabled state, rollout percentage, and environment-specific overrides — the core properties LaunchDarkly and Unleash evaluate. You may need to rename a field or two to match your SDK's exact schema, but the structure transfers cleanly.

How many feature flags should I generate for testing?

For unit tests targeting specific logic, 5-10 flags with controlled states work best. For stress-testing bulk flag evaluation performance, generate 50-100 flags and feed the full config object through your resolution layer. For UI development, 8-15 flags give enough variety to show pagination, search, and filtering behavior without overwhelming the interface.

Are the rollout percentages in the output random or structured?

Rollout percentages are randomized to reflect realistic distribution — you'll see a mix of 0%, 100%, and partial rollouts like 25% or 50%. This is intentional: real flag configs rarely have uniform values, and your evaluation logic should handle the full range. If you need a specific percentage, manually edit the generated output before using it.
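Partial rollout values only matter if your evaluation layer buckets users deterministically. A common technique (not necessarily what your platform uses) is hash-based bucketing, sketched here in Python:

```python
import hashlib

def in_rollout(flag_key, user_id, rollout_pct):
    """Decide whether a user falls inside a partial rollout.

    Hash-based bucketing: the same user always lands in the same
    bucket for a given flag, so repeated evaluations are stable.
    """
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket in 0-99
    return bucket < rollout_pct

# 0% admits no one, 100% admits everyone; partial values split users stably.
assert not in_rollout("new-checkout", "user-1", 0)
assert in_rollout("new-checkout", "user-1", 100)
```

Feeding the generator's randomized percentages through a function like this is exactly the kind of evaluation logic the mixed 0%, partial, and 100% values are meant to exercise.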

Can I use generated flag configs to seed a database or local dev environment?

Yes. The JSON output is structured consistently, making it straightforward to parse and insert into a feature flag store or relational database. Run the generator, save the output as a seed file, and load it during your dev environment bootstrap process. This avoids copying production flags, which can create compliance issues in regulated environments.
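A bootstrap seed script along those lines can be sketched with Python's standard library; the table layout and the in-memory SQLite database are assumptions standing in for your actual flag store.

```python
import json
import sqlite3

# A generated config, inlined here; in practice, read your saved JSON seed file.
config = json.loads('{"flags": [{"key": "dark-mode", "enabled": true, "rollout": 25}]}')

conn = sqlite3.connect(":memory:")  # swap for your dev database connection
conn.execute(
    "CREATE TABLE flags (key TEXT PRIMARY KEY, enabled INTEGER, rollout INTEGER)"
)
# Named placeholders let each flag dict map straight onto a row.
conn.executemany(
    "INSERT INTO flags VALUES (:key, :enabled, :rollout)",
    config["flags"],
)
conn.commit()
```

Because the generated JSON is structurally consistent, the same `executemany` call works for 5 flags or 100 without per-row mapping code.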

What fields are included in each generated feature flag object?

Each flag includes a machine-readable key, a human-readable name, enabled state, rollout percentage, per-environment overrides, and a creation timestamp. These cover the properties most flag evaluation engines need to make targeting decisions and that admin UIs need to display flag status across environments.
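Put together, a single generated flag might look like the following JSON fragment. The exact field names and nesting are illustrative; treat them as a shape to adapt, not the tool's guaranteed schema.

```json
{
  "key": "new-checkout-flow",
  "name": "New Checkout Flow",
  "enabled": true,
  "rollout": 50,
  "environments": {
    "staging": { "enabled": true, "rollout": 100 },
    "canary": { "enabled": true, "rollout": 50 },
    "production": { "enabled": false, "rollout": 0 }
  },
  "created_at": "2024-03-01T12:00:00Z"
}
```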