
test: Performance tests suite #21226


Open · wants to merge 14 commits into base: main

Conversation


@anikdhabal anikdhabal commented May 10, 2025

What does this PR do?

Summary by mrge

Added performance tests suite using k6

@graphite-app graphite-app bot requested a review from a team May 10, 2025 20:18

github-actions bot commented May 10, 2025

Hey there and thank you for opening this pull request! 👋🏼

We require pull request titles to follow the Conventional Commits specification, and it looks like your proposed title needs to be adjusted.

Details:

No release type found in pull request title "Perf tests". Add a prefix to indicate what kind of release this pull request corresponds to. For reference, see https://www.conventionalcommits.org/

Available types:
 - feat: A new feature
 - fix: A bug fix
 - docs: Documentation only changes
 - style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
 - refactor: A code change that neither fixes a bug nor adds a feature
 - perf: A code change that improves performance
 - test: Adding missing tests or correcting existing tests
 - build: Changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm)
 - ci: Changes to our CI configuration files and scripts (example scopes: Travis, Circle, BrowserStack, SauceLabs)
 - chore: Other changes that don't modify src or test files
 - revert: Reverts a previous commit

@keithwillcode keithwillcode added the core area: core, team members only label May 10, 2025
@cubic-dev-ai cubic-dev-ai bot left a comment


mrge reviewed this PR and found no issues. Review PR in mrge.io.

graphite-app bot commented May 10, 2025

Graphite Automations

"Add consumer team as reviewer" took an action on this PR • (05/10/25)

1 reviewer was added to this PR based on Keith Williams's automation.

"Add ready-for-e2e label" took an action on this PR • (08/13/25)

1 label was added to this PR based on Keith Williams's automation.

vercel bot commented May 10, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

2 Skipped Deployments

  • cal: ⬜️ Ignored (Preview) · updated Aug 14, 2025 0:53am (UTC)
  • cal-eu: ⬜️ Ignored (Preview) · updated Aug 14, 2025 0:53am (UTC)

@anikdhabal anikdhabal changed the title Perf tests test: Perf tests suite [WIP] May 10, 2025
@anikdhabal anikdhabal marked this pull request as draft May 10, 2025 20:23
@anikdhabal anikdhabal changed the title test: Perf tests suite [WIP] test: Performance tests suite [WIP] May 12, 2025
@anikdhabal anikdhabal marked this pull request as ready for review May 12, 2025 16:39
@anikdhabal anikdhabal changed the title test: Performance tests suite [WIP] test: Performance tests suite May 12, 2025
@dosubot dosubot bot added automated-tests area: unit tests, e2e tests, playwright performance area: performance, page load, slow, slow endpoints, loading screen, unresponsive labels May 12, 2025
@cubic-dev-ai cubic-dev-ai bot left a comment


mrge found 9 issues across 8 files. Review them in mrge.io

```js
}

export function randomSleep(min = SLEEP_DURATION.SHORT, max = SLEEP_DURATION.MEDIUM) {
  const sleepTime = Math.random() * (max - min) + min;
```

Missing validation that min is less than max
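A minimal sketch of the suggested guard, assuming the helper should fail fast on an inverted range. The names and default values here are illustrative, not the repo's actual ones:

```javascript
// Illustrative defaults; the real values live in SLEEP_DURATION in config.js.
const SLEEP_DURATION = { SHORT: 0.5, MEDIUM: 2 };

function randomSleepSeconds(min = SLEEP_DURATION.SHORT, max = SLEEP_DURATION.MEDIUM) {
  // Fail fast on an inverted range instead of silently producing
  // a negative or nonsensical sleep time.
  if (min > max) {
    throw new Error(`randomSleep: min (${min}) must be <= max (${max})`);
  }
  return Math.random() * (max - min) + min; // seconds; pass to k6's sleep()
}
```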

```js
}

export function randomQueryParam() {
  return `nocache=${new Date().getTime()}`;
```

Using Date.now() would be more efficient than new Date().getTime()
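The suggested change, sketched: `Date.now()` yields the same epoch milliseconds without allocating a `Date` object per call.

```javascript
// Same cache-busting value, one object allocation fewer per call.
function randomQueryParam() {
  return `nocache=${Date.now()}`; // Date.now() === new Date().getTime()
}
```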

```js
return group("View Booking Page", () => {
  const url = `${BASE_URL}/${username}/${eventSlug}?${randomQueryParam()}`;

  const response = http.get(url, {
```

No timeout specified for HTTP request
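One way to address this, sketched under the assumption that the request's params object carries a `timeout` field (k6's http.get does accept one); BASE_URL and the 10s value are illustrative:

```javascript
// Build the request URL and params separately so the timeout is explicit.
const BASE_URL = "http://localhost:3000"; // illustrative default

function buildBookingPageRequest(username, eventSlug) {
  const url = `${BASE_URL}/${username}/${eventSlug}?nocache=${Date.now()}`;
  // Bound every request so a stalled server cannot hang the test iteration.
  const params = { timeout: "10s" };
  return { url, params };
}

// In the k6 script: const response = http.get(url, params);
```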

github-actions bot commented Jun 6, 2025

This PR is being marked as stale due to inactivity.

@github-actions github-actions bot added the Stale label Jun 6, 2025
This PR is being marked as stale due to inactivity.

@github-actions github-actions bot added the Stale label Jun 22, 2025
coderabbitai bot commented Jul 16, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a Grafana k6 performance test suite for the booking flow:

  • New test scripts organized into smoke, load, stress, and spike under tests/performance
  • Shared utilities at tests/performance/utils (config.js, helpers.js)
  • A local Docker runner script at tests/scripts/run-k6-local.sh
  • Documentation at tests/performance/README.md
  • A GitHub Actions workflow, .github/workflows/performance-tests.yml, to run k6 on release creation or manual dispatch

Tests are environment-driven, include thresholds and staged VU profiles, and support k6 Cloud via repository secrets.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these settings in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between aa48584 and 7cfdcc3.

📒 Files selected for processing (1)
  • .github/workflows/performance-tests.yml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/performance-tests.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Install dependencies / Yarn install & cache


delve-auditor bot commented Jul 16, 2025

No security or compliance issues detected. Reviewed everything up to 8ed0508.

Security Overview
  • 🔎 Scanned files: 8 changed file(s)
Detected Code Changes
Change Type Relevant files
Configuration changes ► .github/workflows/performance-tests.yml
    Add performance testing workflow
Enhancement ► tests/performance/README.md
    Add performance testing documentation
► tests/performance/load/booking.js
    Add load testing script
► tests/performance/smoke/booking.js
    Add smoke testing script
► tests/performance/spike/booking.js
    Add spike testing script
► tests/performance/stress/booking.js
    Add stress testing script
► tests/performance/utils/config.js
    Add performance test configuration
► tests/performance/utils/helpers.js
    Add performance test helper functions
Refactor ► bookings.e2e-spec.ts
    Refactor booking tests
► assign-all-team-members.e2e-spec.ts
    Simplify test assertions
► teams-event-types.controller.e2e-spec.ts
    Remove redundant test setup

Reply to this PR with @delve-auditor followed by a description of what change you want and we'll auto-submit a change to this PR to implement it.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (7)
tests/performance/utils/config.js (3)

35-40: Hardcoded test credentials pose a security risk.

Test credentials are hardcoded in source code, which is a security vulnerability. Consider using environment variables or a secure configuration file.
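A sketch of the environment-variable approach, assuming k6's global __ENV (with a process.env fallback so the snippet also runs under Node). The variable names follow the ones documented in the README; the "free"/"pro" fallbacks are placeholders only:

```javascript
// k6 exposes environment variables on the global __ENV object.
const env = typeof __ENV !== "undefined" ? __ENV : process.env;

const CREDENTIALS = {
  free: {
    username: env.TEST_USER_FREE || "free",     // placeholder fallback
    password: env.TEST_PASSWORD_FREE || "free", // never commit real secrets
  },
  pro: {
    username: env.TEST_USER_PRO || "pro",
    password: env.TEST_PASSWORD_PRO || "pro",
  },
};
```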


47-47: Use Date.now() for better performance.

Date.now() is more efficient than new Date().getTime() for getting timestamps.


50-53: Add validation for min/max parameters.

The function doesn't validate that min is less than max, which could lead to unexpected behavior.

tests/performance/utils/helpers.js (2)

10-12: Specify timeout for HTTP requests.

HTTP requests should have explicit timeout configurations to prevent hanging requests during performance testing.


16-16: Avoid brittle DOM element checks.

The test relies on a specific DOM element (data-testid="day"), making it fragile to UI changes. Consider checking for more stable indicators of successful page load.

tests/performance/spike/booking.js (2)

7-12: Spike test ramp-up pattern may be too aggressive for realistic simulation.

The current spike test rapidly increases from 500 to 5000 VUs in just 2 minutes, which may not accurately represent real-world traffic spike patterns and could overwhelm the system too quickly to gather meaningful performance data.


23-23: Sleep duration is too short for realistic load simulation.

The 0.01 second sleep time may not provide sufficient pause between iterations, potentially causing excessive load beyond what's intended and not reflecting realistic user behavior patterns.
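A hypothetical gentler profile addressing both points: a baseline plateau before the spike, a fast but finite ramp, and a longer per-iteration sleep. All durations and targets here are illustrative, not a recommendation from the PR:

```javascript
// Sketch of an alternative spike profile for k6's options.stages.
const options = {
  stages: [
    { duration: "2m", target: 500 },  // establish baseline traffic first
    { duration: "1m", target: 5000 }, // spike: fast, but not instantaneous
    { duration: "3m", target: 5000 }, // hold at peak to gather stable data
    { duration: "2m", target: 0 },    // ramp down and observe recovery
  ],
};

const ITERATION_SLEEP_SECONDS = 0.1; // vs. 0.01 in the original script
```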

🧹 Nitpick comments (3)
tests/performance/load/booking.js (1)

23-23: Consider increasing sleep duration for more realistic load simulation.

The 0.1 second sleep might be too aggressive and not representative of real user behavior. Consider increasing to 0.5-1 second for more realistic think time.

```diff
-  sleep(0.1);
+  sleep(0.5);
```
tests/performance/README.md (1)

62-62: Consider expanding the test scenarios section.

The documentation mentions that the test suite "focuses specifically on the booking flow" but only describes one scenario. Consider adding more details about what aspects of the booking flow are being tested or potential future scenarios.

tests/performance/stress/booking.js (1)

25-25: Consider making sleep duration configurable.

The sleep duration could be made configurable through the shared config to maintain consistency across different test types and allow for easy adjustment.

You could add a sleep configuration to utils/config.js:

```js
export const SLEEP_DURATION = {
  SMOKE: 0.1,
  LOAD: 0.08,
  STRESS: 0.05,
  SPIKE: 0.01,
};
```

Then import and use it in the test:

```diff
+import { THRESHOLDS, SLEEP_DURATION } from "../utils/config.js";
-  sleep(0.05);
+  sleep(SLEEP_DURATION.STRESS);
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between df30f5c and 8ed0508.

📒 Files selected for processing (8)
  • .github/workflows/performance-tests.yml (1 hunks)
  • tests/performance/README.md (1 hunks)
  • tests/performance/load/booking.js (1 hunks)
  • tests/performance/smoke/booking.js (1 hunks)
  • tests/performance/spike/booking.js (1 hunks)
  • tests/performance/stress/booking.js (1 hunks)
  • tests/performance/utils/config.js (1 hunks)
  • tests/performance/utils/helpers.js (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
tests/performance/utils/helpers.js (1)
tests/performance/utils/config.js (2)
  • randomQueryParam (46-48)
  • randomSleep (50-53)
tests/performance/load/booking.js (5)
tests/performance/smoke/booking.js (2)
  • options (6-13)
  • options (6-13)
tests/performance/stress/booking.js (2)
  • options (6-20)
  • options (6-20)
tests/performance/spike/booking.js (2)
  • options (6-18)
  • options (6-18)
tests/performance/utils/config.js (2)
  • THRESHOLDS (12-33)
  • THRESHOLDS (12-33)
tests/performance/utils/helpers.js (1)
  • viewBookingPage (6-22)
tests/performance/stress/booking.js (5)
tests/performance/load/booking.js (2)
  • options (6-18)
  • options (6-18)
tests/performance/smoke/booking.js (2)
  • options (6-13)
  • options (6-13)
tests/performance/spike/booking.js (2)
  • options (6-18)
  • options (6-18)
tests/performance/utils/config.js (2)
  • THRESHOLDS (12-33)
  • THRESHOLDS (12-33)
tests/performance/utils/helpers.js (1)
  • viewBookingPage (6-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Install dependencies / Yarn install & cache
  • GitHub Check: Security Check
🔇 Additional comments (8)
tests/performance/smoke/booking.js (1)

1-19: LGTM! Well-structured smoke test configuration.

The smoke test configuration is appropriate with 10 VUs for 2 minutes and uses proper thresholds. The test function correctly simulates user behavior with reasonable pacing.

tests/performance/load/booking.js (1)

6-18: LGTM! Well-designed load test stages.

The staged load profile effectively ramps up to 2000 VUs with proper warm-up and cool-down phases. The threshold configuration is appropriate for load testing.

tests/performance/README.md (1)

68-70: All VU counts in README.md match test configurations

The documented virtual user counts in tests/performance/README.md (lines 68–70) have been verified against their respective test scripts and accurately reflect the peak targets:

  • Load Tests: 2,000 VUs
  • Stress Tests: 4,000 VUs
  • Spike Tests: 5,000 VUs

No updates required.

tests/performance/spike/booking.js (2)

1-4: LGTM: Imports are correctly structured.

The imports follow k6 best practices and correctly reference the shared utilities for configuration and helper functions.


14-18: LGTM: Thresholds configuration is appropriate.

The threshold configuration correctly uses the SPIKE-specific thresholds from the shared config, which are appropriately more lenient for spike testing scenarios.

tests/performance/stress/booking.js (3)

3-4: Imports are now valid with the addition of utility files.

The previous concerns about importing from non-existent files are resolved as the utility files are being added in this PR.


6-20: LGTM: Stress test configuration is well-designed.

The staged load pattern provides a realistic stress testing scenario with gradual ramp-up to 4000 VUs, appropriate hold periods, and proper threshold configuration for stress testing conditions.


22-26: LGTM: Test function implementation is consistent.

The test function correctly uses the shared helper function and has a reasonable sleep duration (0.05s) that provides better pacing compared to the spike test.

- Add validation to randomSleep function to ensure min <= max
- Replace new Date().getTime() with Date.now() for better performance
- Move hardcoded test credentials to environment variables
- Add HTTP timeout to requests in helpers.js
- Improve DOM element checking to be less brittle
- Fix spike test sleep duration from 0.01s to 0.1s
- Make GitHub workflow BASE_URL configurable with inputs
- Fix README filename reference from booking_flow.js to booking.js
- Add documentation for new environment variables

Co-Authored-By: anik@cal.com <adhabal2002@gmail.com>
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
tests/performance/utils/config.js (2)

38-43: Placeholders for credentials are acceptable; avoid real secrets.

Defaults use "free"/"pro" placeholders. Keep it that way and ensure no real credentials are committed.


49-51: Good choice using Date.now() for lightweight cache-busting.

Efficient and entirely sufficient for this use case.

🧹 Nitpick comments (3)
tests/performance/README.md (1)

78-86: Add a language to the fenced code block to satisfy markdownlint (MD040).

The interactive menu code block is missing a language identifier.

Apply this diff:

````diff
-```
+```text
 Select a test:
   1) smoke
   2) load
   3) stress
   4) spike
   5) all
   0) exit
````

tests/scripts/run-k6-local.sh (2)

28-30: Pass through K6_CLOUD_TOKEN to support cloud runs.

Forwarding K6_CLOUD_TOKEN makes the same script usable with k6 cloud without local tweaks.



Apply this diff:

```diff
-for v in TOKEN TEST_USER_FREE TEST_PASSWORD_FREE TEST_USER_PRO TEST_PASSWORD_PRO; do
+for v in TOKEN TEST_USER_FREE TEST_PASSWORD_FREE TEST_USER_PRO TEST_PASSWORD_PRO K6_CLOUD_TOKEN; do
   if [ -n "${!v:-}" ]; then ENV_ARGS+=(-e "$v=${!v}"); fi
 done
```

14-15: Consider pinning the k6 Docker image for reproducibility.

Using an unpinned image tag can yield different results over time. Prefer a fixed version (override-able) to keep test results consistent across runs.

Example (no need to use this exact version; pick the one you standardize on):

```diff
-K6_IMAGE="${K6_IMAGE:-grafana/k6}"
+K6_IMAGE="${K6_IMAGE:-grafana/k6:0.49.0}"
```

Alternatively, keep current default and document that CI/local users should set K6_IMAGE to a pinned tag.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 09f40cd and aa48584.

📒 Files selected for processing (4)
  • tests/performance/README.md (1 hunks)
  • tests/performance/utils/config.js (1 hunks)
  • tests/performance/utils/helpers.js (1 hunks)
  • tests/scripts/run-k6-local.sh (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/performance/utils/helpers.js
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-07-28T11:50:23.946Z
Learnt from: CR
PR: calcom/cal.com#0
File: .cursor/rules/review.mdc:0-0
Timestamp: 2025-07-28T11:50:23.946Z
Learning: Applies to **/*.{ts,tsx} : Flag excessive Day.js use in performance-critical code; prefer native Date or Day.js `.utc()` in hot paths like loops

Applied to files:

  • tests/performance/utils/config.js
📚 Learning: 2025-07-18T08:49:18.779Z
Learnt from: vijayraghav-io
PR: calcom/cal.com#21072
File: .env.example:354-357
Timestamp: 2025-07-18T08:49:18.779Z
Learning: For E2E integration tests that require real third-party service credentials (like Outlook calendar), it's acceptable to temporarily include actual test account credentials in .env.example during feature development and validation, provided there's a clear plan to replace them with placeholder values before final release. Test credentials should be for dedicated test tenants/accounts, not production systems.

Applied to files:

  • tests/performance/utils/config.js
🪛 LanguageTool
tests/performance/README.md

[grammar] ~7-~7: There might be a mistake here.
Context: ...installation/) installed on your machine - Cal.com running locally or a deployed in...

(QB_NEW_EN)


[grammar] ~14-~14: There might be a mistake here.
Context: ...ith minimal load to verify functionality - load/: Tests that simulate expected normal lo...

(QB_NEW_EN)


[grammar] ~15-~15: There might be a mistake here.
Context: ... load (thousands of requests per minute) - stress/: Tests that simulate heavy load to find...

(QB_NEW_EN)


[grammar] ~16-~16: There might be a mistake here.
Context: ...ens of thousands of requests per minute) - spike/: Tests that simulate sudden spikes in t...

(QB_NEW_EN)


[grammar] ~17-~17: There might be a mistake here.
Context: ...ens of thousands of requests per minute) - utils/: Shared utilities and helper functions ...

(QB_NEW_EN)


[grammar] ~127-~127: There might be a mistake here.
Context: ...**: Up to 2,000 concurrent virtual users - Stress Tests: Up to 4,000 concurrent v...

(QB_NEW_EN)


[grammar] ~128-~128: There might be a mistake here.
Context: ...**: Up to 4,000 concurrent virtual users - Spike Tests: Rapid spike to 5,000 conc...

(QB_NEW_EN)


[grammar] ~135-~135: There might be a mistake here.
Context: ...lds to catch any performance regressions - Load Tests: Moderate thresholds for no...

(QB_NEW_EN)


[grammar] ~136-~136: There might be a mistake here.
Context: ...resholds for normal operating conditions - Stress Tests: More lenient thresholds ...

(QB_NEW_EN)


[grammar] ~137-~137: There might be a mistake here.
Context: ...ent thresholds for heavy load conditions - Spike Tests: Most lenient thresholds f...

(QB_NEW_EN)


[grammar] ~144-~144: There might be a mistake here.
Context: ...ation (default: http://localhost:3000) - TEST_USER_FREE: Username for free tier testing (defaul...

(QB_NEW_EN)


[grammar] ~145-~145: There might be a mistake here.
Context: ... for free tier testing (default: free) - TEST_PASSWORD_FREE: Password for free tier testing (defaul...

(QB_NEW_EN)


[grammar] ~146-~146: There might be a mistake here.
Context: ... for free tier testing (default: free) - TEST_USER_PRO: Username for pro tier testing (default...

(QB_NEW_EN)


[grammar] ~147-~147: There might be a mistake here.
Context: ...me for pro tier testing (default: pro) - TEST_PASSWORD_PRO: Password for pro tier testing (default...

(QB_NEW_EN)

🪛 markdownlint-cli2 (0.17.2)
tests/performance/README.md

78-78: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Detect changes
🔇 Additional comments (2)
tests/performance/utils/config.js (2)

53-59: randomSleep: input validation and behavior look solid.

  • min > max check prevents invalid ranges.
  • Allowing min === max is sensible.
  • Using floating sleep is fine for k6.

15-36: THRESHOLDS already mapped to k6 metric names — no change required

Confirmed: THRESHOLDS.HTTP_ERRORS is used as http_req_failed and RESPONSE_TIME.* p95 entries are used as http_req_duration in the test options.

Files inspected:

  • tests/performance/spike/booking.js — http_req_failed: THRESHOLDS.HTTP_ERRORS; http_req_duration: THRESHOLDS.RESPONSE_TIME.SPIKE.p95
  • tests/performance/stress/booking.js — http_req_failed: THRESHOLDS.HTTP_ERRORS; http_req_duration: THRESHOLDS.RESPONSE_TIME.STRESS.p95
  • tests/performance/load/booking.js — http_req_failed: THRESHOLDS.HTTP_ERRORS; http_req_duration: THRESHOLDS.RESPONSE_TIME.LOAD.p95
  • tests/performance/smoke/booking.js — http_req_failed: THRESHOLDS.HTTP_ERRORS; http_req_duration: THRESHOLDS.RESPONSE_TIME.SMOKE.p95

If you intended to enforce p99 as well, add the p99 entries (THRESHOLDS.RESPONSE_TIME.*.p99) to the http_req_duration thresholds in the respective files.
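The mapping described above, sketched with illustrative threshold values (the repo's actual numbers differ); the p99 line shows where the optional entry would go:

```javascript
// Shared config (illustrative values).
const THRESHOLDS = {
  HTTP_ERRORS: "rate<0.01",
  RESPONSE_TIME: {
    LOAD: { p95: "p(95)<1500", p99: "p(99)<3000" },
  },
};

// Per-test options plug the entries into k6's built-in metric names.
const options = {
  thresholds: {
    http_req_failed: [THRESHOLDS.HTTP_ERRORS],
    http_req_duration: [
      THRESHOLDS.RESPONSE_TIME.LOAD.p95,
      THRESHOLDS.RESPONSE_TIME.LOAD.p99, // optional p99 enforcement
    ],
  },
};
```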

Comment on lines 73 to 96
```bash
./scripts/run-k6-local.sh
```

You’ll see an interactive menu:

```
Select a test:
1) smoke
2) load
3) stress
4) spike
5) all
0) exit
```

Alternatively, you can run directly from the CLI:

```bash
# Run smoke tests
./scripts/run-k6-local.sh smoke

# Run all tests
./scripts/run-k6-local.sh all
```


⚠️ Potential issue

Fix incorrect script path in usage examples (should be tests/scripts, not scripts).

The examples point to ./scripts/run-k6-local.sh, but the script lives at tests/scripts/run-k6-local.sh. This will 404 for users following the README.

Apply this diff:

```diff
-./scripts/run-k6-local.sh
+./tests/scripts/run-k6-local.sh
-./scripts/run-k6-local.sh smoke
+./tests/scripts/run-k6-local.sh smoke
-./scripts/run-k6-local.sh all
+./tests/scripts/run-k6-local.sh all
```
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

78-78: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In tests/performance/README.md around lines 73 to 96 the usage examples
reference ./scripts/run-k6-local.sh but the actual script path is
tests/scripts/run-k6-local.sh; update all occurrences in the snippet and CLI
examples to use tests/scripts/run-k6-local.sh (e.g. the interactive example and
both ./scripts/run-k6-local.sh smoke and ./scripts/run-k6-local.sh all) so the
README points to the correct script location.

volnei previously approved these changes Aug 13, 2025
github-actions bot commented Aug 13, 2025

E2E results are ready!

Labels

  • automated-tests (area: unit tests, e2e tests, playwright)
  • core (area: core, team members only)
  • performance (area: performance, page load, slow, slow endpoints, loading screen, unresponsive)
  • ready-for-e2e
Projects: none yet

3 participants