Add execution.test.fail to fail a test, and let it finish #4672

Merged · 9 commits into master from mark-test-as-failed · Apr 10, 2025

Conversation

@oleiade (Contributor) commented Apr 4, 2025

What?

This Pull Request addresses #4062.

Namely, it adds the execution.test.fail method, which marks a test as failed while letting it finish its execution nonetheless, and eventually makes it exit with a dedicated exit code (110).

The following script:

import http from "k6/http";
import exec from "k6/execution";

export const options = {
	iterations: 10,
};

export default function () {
	http.get("https://quickpizza.grafana.com");

	if (exec.vu.iterationInInstance === 3) {
		console.log("Marking the test as failed from iteration: 3");
		exec.test.fail(
			`iteration ${exec.vu.iterationInInstance}: marked the test as failed`,
		);
	}

	console.log(
		`iteration ${exec.vu.iterationInInstance}: print after (maybe) fail`,
	);
}

Running it with the following command (note: echo $status is fish-shell syntax for the last exit status; in bash, use echo $? instead):

go build go.k6.io/k6 && ./k6 run script.js; echo $status

leads to the following output:


         /\      Grafana   /‾‾/
    /\  /  \     |\  __   /  /
   /  \/    \    | |/ /  /   ‾‾\
  /          \   |   (  |  (‾)  |
 / __________ \  |_|\_\  \_____/

     execution: local
        script: script.js
        output: -

     scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
              * default: 10 iterations shared among 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)

INFO[0000] iteration 0: print after (maybe) fail         source=console
INFO[0000] iteration 1: print after (maybe) fail         source=console
INFO[0000] iteration 2: print after (maybe) fail         source=console
INFO[0000] Marking the test as failed from iteration: 3  source=console
ERRO[0000] test marked as failed: iteration 3: marked the test as failed
INFO[0000] iteration 3: print after (maybe) fail         source=console
INFO[0001] iteration 4: print after (maybe) fail         source=console
INFO[0001] iteration 5: print after (maybe) fail         source=console
INFO[0001] iteration 6: print after (maybe) fail         source=console
INFO[0001] iteration 7: print after (maybe) fail         source=console
INFO[0001] iteration 8: print after (maybe) fail         source=console
INFO[0001] iteration 9: print after (maybe) fail         source=console


  █ TOTAL RESULTS

    HTTP
    http_req_duration.......................................................: avg=129.22ms min=127.9ms  med=129.17ms max=130.83ms p(90)=130.51ms p(95)=130.67ms
      { expected_response:true }............................................: avg=129.22ms min=127.9ms  med=129.17ms max=130.83ms p(90)=130.51ms p(95)=130.67ms
    http_req_failed.........................................................: 0.00%  0 out of 10
    http_reqs...............................................................: 10     5.763858/s

    EXECUTION
    iteration_duration......................................................: avg=173.46ms min=128.07ms med=129.43ms max=569.82ms p(90)=174.89ms p(95)=372.36ms
    iterations..............................................................: 10     5.763858/s
    vus.....................................................................: 1      min=1       max=1
    vus_max.................................................................: 1      min=1       max=1

    NETWORK
    data_received...........................................................: 32 kB  19 kB/s
    data_sent...............................................................: 1.0 kB 601 B/s




running (00m01.7s), 0/1 VUs, 10 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  00m01.7s/10m0s  10/10 shared iters
ERRO[0001] test run was marked as failed
110

Note that the test is marked as failed on iteration 3, yet neither the 3rd iteration nor the following ones are interrupted. The fail method immediately logs that the test was marked as failed, and at the end of the run, k6 prints an error reporting the test as failed and exits with code 110.
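
For comparison, the pre-existing exec.test.abort stops the test immediately (and, if I recall correctly, exits with code 108), whereas exec.test.fail lets the test run to completion. A minimal sketch contrasting the two:

import exec from "k6/execution";

export default function () {
	// exec.test.abort("stop right away");
	// ^ would interrupt the run immediately: no further iterations execute.

	// Marks the run as failed but keeps executing iterations;
	// k6 exits with code 110 once the test finishes.
	exec.test.fail("fail but keep going");
}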

Why?

The goal of #4062, and of this PR specifically, is to cater to functional testing needs, in particular the new assertions API. It opens the door to soft assertions: assertions that do not interrupt the execution, but are nonetheless detected and treated as a test failure. A sketch of what such an assertion could look like follows below.
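
Here is a minimal sketch of a soft assertion built on top of exec.test.fail (the softExpect helper is hypothetical, not part of the k6 API):

import exec from "k6/execution";

// Hypothetical helper: record a failure without interrupting the iteration.
function softExpect(condition, message) {
	if (!condition) {
		// Marks the whole test run as failed (exit code 110),
		// but lets this iteration and the rest of the run continue.
		exec.test.fail(`soft assertion failed: ${message}`);
	}
	return condition;
}

export default function () {
	const status = 200; // stand-in for an actual response status
	softExpect(status === 200, `expected status 200, got ${status}`);
	// Execution continues here even if the assertion above failed.
}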

Checklist

  • I have performed a self-review of my code.
  • I have commented on my code, particularly in hard-to-understand areas.
  • I have added tests for my changes.
  • I have run linter and tests locally (make check) and all pass.

Checklist: Documentation (only for k6 maintainers and if relevant)

Please do not merge this PR until the following items are filled out.

Related PR(s)/Issue(s)

ref #4062

@oleiade added this to the v1.0.0 milestone Apr 4, 2025
@oleiade self-assigned this Apr 4, 2025
@oleiade requested a review from a team as a code owner April 4, 2025 13:35
@oleiade requested reviews from inancgumus and joanlopez and removed the request for a team April 4, 2025 13:35
inancgumus previously approved these changes Apr 4, 2025

@inancgumus (Contributor) left a comment:
LGTM 👍 Will you add t.Parallel to the subtests? Is there a reason for skipping parallel tests?

ankur22 previously approved these changes Apr 7, 2025

@ankur22 (Contributor) left a comment:
LGTM 🚀

inancgumus previously approved these changes Apr 7, 2025

Review comment on the following diff:
@@ -26,6 +27,44 @@ type TestPreInitState struct {
TracerProvider *trace.TracerProvider
Usage *usage.Usage
SecretsManager *secretsource.Manager

// FIXME (@oleiade): is this the way?
@inancgumus (Contributor) commented:
I'm not 100% sure either, but perhaps this should belong in the Runner or TestRunState, as TestPreInitState sounds more like the place for things that remain static right after initialization 🤔

@oleiade (Contributor, Author) replied:
Let me make another pass at it. Initially I stored it in the TestPreInitState because that's directly accessible at the VU level, but it might indeed be better to bury it a bit deeper 👀

@joanlopez (Contributor) commented:

> LGTM 👍 Will you add t.Parallel to the subtests? Is there a reason for skipping parallel tests?

Yeah, I agree that duplicating the test setup to permit parallel test execution would be nice, but either way it doesn't sound like a big deal.

joanlopez previously approved these changes Apr 8, 2025

@joanlopez (Contributor) left a comment:
Despite the two comments I left, it looks mostly good and ready to be shipped!

And, to be clear, I'm not 100% sure either about what the best place for TestStatus is.

@oleiade (Contributor, Author) commented Apr 9, 2025

Cheers folks! I'll make a quick pass on the code to see if I can avoid using the TestPreInitState, and I'll refactor the tests to run in parallel 👍🏻

@oleiade dismissed stale reviews from joanlopez, inancgumus, and ankur22 via 19ff1ab April 10, 2025 09:44
@oleiade (Contributor, Author) commented Apr 10, 2025

Heads-up @inancgumus @ankur22 @joanlopez 👋🏻:

  1. I have updated the tests according to your comments, and they now run in parallel.
  2. I spent a couple of hours looking into moving the TestStatus down into the Runner/TestRunState; it would most likely require a pretty big refactor, and I don't think that's worth the effort or the risk as we're looking to release v1. Since I plan to make a refactor pass on the run command during the cooldown/pre-release period, I'll open an issue to look into it further and merge the PR as-is 👍🏻

More context: as Joan brought up, a better place for the TestStatus pointer could potentially be the TestRunState, or even the Runner.

For the feature to behave the way we want, we need to ensure that the k6 process has only a single instance of the TestStatus throughout the whole execution, and that we pass that pointer down to the VU state, where the execution module may or may not set it to failed based on the user script.

However, down the road, the element that sets the State for VUs is the Runner (which is actually a VU factory rather than an actual runner that runs code; if I remember correctly, with multiple executors there would be multiple runners), and the only piece of state the Runner currently knows about is... the PreInitState.

In a future refactor/iteration, we would need to look into whether we want to change that 🙇🏻

@oleiade requested reviews from inancgumus and joanlopez April 10, 2025 09:56
@oleiade requested a review from ankur22 April 10, 2025 09:56
@ankur22 (Contributor) left a comment:
LGTM 🚀

@joanlopez (Contributor) commented:

> However, down the road, the element that sets the State for VUs is the Runner (which is actually a VU factory rather than an actual runner that runs code; if I remember correctly, with multiple executors there would be multiple runners), and the only piece of state the Runner currently knows about is... the PreInitState.

That's what worried me, and why I suggested exploring that avenue, but with caution. Thanks, and apologies! 🙏🏻

@oleiade merged commit 80fa0df into master Apr 10, 2025
28 checks passed
@oleiade deleted the mark-test-as-failed branch April 10, 2025 13:17
@inancgumus (Contributor) commented Apr 10, 2025

> 2. Since I plan to make a refactor pass on the run command during the cooldown/pre-release period, I'll open an issue to look into it further and merge the PR as-is 👍🏻

Thanks for the explanation, @oleiade 🙇 Helpful! Is it worth opening that issue now so we remember?

@oleiade (Contributor, Author) commented Apr 10, 2025

Thanks for holding me accountable to it @inancgumus 🙇🏻

I just opened a dedicated issue: #4681
