Machine readable end of test summary #4803

@oleiade

Description

Problem space

The existing --summary-export option is considered unsupported/deprecated and produces output with a schema that's difficult to parse, extract insights from, and build automations around.

Why does it matter?

Our 2024 user research revealed that most long-time users and customers are already parsing the human-readable output format to build automations, which we now explicitly exclude from our support policy. We heard and observed similar behavior in internal projects and teams too. This creates an urgent need for an officially supported alternative.

What we want to build

A versioned, machine-readable test results format with:

  • Structured schema: the results will be in a predefined file format (JSON) and stick to a predefined schema.
  • Independently versioned schema: the schema the results are structured with should be versioned and supported, and each result document should embed the version of the schema it conforms to.
  • Official support commitment
  • Clear documentation

The new summary will supersede --summary-export and serve as the canonical way to consume test results in automation.

Considerations

  • As we design the new format, we should keep in mind the possibility of adding support for this feature in the k6 cloud command too, under the assumption that once the cloud command finishes execution, we would receive or produce the results in the new machine-readable format.

Nice to have

  • Currently, k6 only supports exporting the JSON summary to a file. It would be useful to also support emitting the entire k6 run stdout/stderr output in this new machine-readable format.
  • Since one of the goals of the new machine-readable end-of-test summary is to enable automation, we could consider providing a JSON Schema (or similar). This would help us prevent regressions and allow users to easily generate clients in their language of choice.
  • To further support automation, we could expose a set of public Go abstractions for parsing and representing the summary output (see the sketch after this list). This would save users and extension developers from having to implement their own.
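
To make the last point more concrete, below is a minimal sketch of what such public Go abstractions could look like. It assumes the draft JSON layout shown in the Solution Space section below; every identifier in it (the summary package, Summary, Config, Output, Results, ParseSummary) is hypothetical and only meant as a starting point for discussion, not an existing k6 API.

package summary

import (
	"encoding/json"
	"fmt"
	"io"
)

// Summary is a hypothetical top-level representation of the
// machine-readable end-of-test summary (draft schema, version 1).
type Summary struct {
	Version int     `json:"version"`
	Config  Config  `json:"config"`
	Results Results `json:"results"`
}

// Config mirrors the "config" object of the draft layout.
type Config struct {
	Execution string   `json:"execution"`
	Script    string   `json:"script"`
	Outputs   []Output `json:"outputs"`
}

// Output describes a single configured output (cloud, json, ...).
type Output struct {
	Kind        string `json:"kind"`
	Destination string `json:"destination"`
}

// Results holds the test results; the nested shapes are kept raw
// here because their exact structure is still to be designed.
type Results struct {
	Thresholds []json.RawMessage          `json:"thresholds"`
	Total      json.RawMessage            `json:"total"`
	Scenarios  map[string]json.RawMessage `json:"scenarios"`
}

// ParseSummary decodes a summary document and rejects schema
// versions this package does not know how to interpret.
func ParseSummary(r io.Reader) (*Summary, error) {
	var s Summary
	if err := json.NewDecoder(r).Decode(&s); err != nil {
		return nil, fmt.Errorf("decoding summary: %w", err)
	}
	if s.Version != 1 {
		return nil, fmt.Errorf("unsupported summary schema version: %d", s.Version)
	}
	return &s, nil
}

Keeping the nested results shapes as json.RawMessage in the sketch reflects that their exact structure is still open, while the version check shows how the embedded schema version could be enforced at parse time.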

What success looks like

k6 can be seamlessly integrated into a variety of tools and platforms. It operates flawlessly with CI/CD pipelines and DevOps workflows. Consistent representation of test results is guaranteed across both open-source and cloud environments, including the k6 cloud run command and its various execution modes.

Solution Space

needs research and experimentation

Our initial hunch is that replicating the format of the new end-of-test summary, with a couple of additions, might be a good enough approach. To comply with our requirements, and building upon previous design work, we could structure the top level of the document along the following lines:

{
  "version": 1,
  "config": {
    "execution": "local",
    "script": "scripts/thresholds.js",
    "outputs": [
      {
        "kind": "cloud",
        "destination": "https://ops.grafana.net/a/k6-app/runs/1938289"
      },
      {
        "kind": "json",
        "destination": "end-of-test-summary.json"
      }
    ]
  },
  "results": {
    "thresholds": [],
    "total": { ... },
    "scenarios": {},
  }
}
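
To sketch how an automation could consume a document with this top-level shape, here is a small, self-contained Go example. It relies only on the fields shown above (version, config, outputs) plus a hypothetical file name, and illustrates the draft layout rather than an actual k6 interface.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// draftSummary only models the fields present in the draft layout above;
// everything under "results" is still undecided, so it is kept raw.
type draftSummary struct {
	Version int `json:"version"`
	Config  struct {
		Execution string `json:"execution"`
		Script    string `json:"script"`
		Outputs   []struct {
			Kind        string `json:"kind"`
			Destination string `json:"destination"`
		} `json:"outputs"`
	} `json:"config"`
	Results json.RawMessage `json:"results"`
}

func main() {
	// Hypothetical file name; the flag that will produce this file is not decided yet.
	data, err := os.ReadFile("end-of-test-summary.json")
	if err != nil {
		log.Fatal(err)
	}

	var s draftSummary
	if err := json.Unmarshal(data, &s); err != nil {
		log.Fatal(err)
	}

	// The embedded version lets consumers fail fast on schema changes
	// instead of silently misreading the document.
	if s.Version != 1 {
		log.Fatalf("unsupported summary schema version %d", s.Version)
	}

	fmt.Printf("%s run of %q configured %d output(s)\n",
		s.Config.Execution, s.Config.Script, len(s.Config.Outputs))
}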

Note that there has been prior experimentation showcasing potential ways to address this issue. Although it predates the new end-of-test summary and is most likely outdated, we believe [it might still be a valuable resource](https://github.com/grafana/k6-export-summary-research).
