Hybrid load testing using JMeter with WebdriverIO
Run a hybrid load test using WebdriverIO scripts on BrowserStack Load Testing
Prerequisites
- BrowserStack Username and Access key. You can find this under your account profile.
- An existing JMeter script and an automated test suite written with the WebdriverIO framework.
Run a test
You can start a new test either from the Quick Start page, or the Load Tests page on the Load testing dashboard.
On the dashboard, click Create Load Test.

Enter a Test Name for your load test, select Hybrid, and click Upload scripts.

Upload your scripts
The product runs a hybrid test using JMeter and WebdriverIO automation projects. JMeter is used to generate API-level load, and WebdriverIO drives browser-level load to simulate real user interactions.
You see two sections:
- JMeter Script: Upload your JMeter `.jmx` file (up to 50 MB). Drag and drop your file or click to select it.
- Automation Project: Select WebdriverIO from the dropdown as your automation framework. Then, upload your zipped project files (`.zip`, up to 250 MB). Drag and drop the file or click to select it.

After you upload your ZIP file, the dashboard automatically validates your project. If validation is successful, you will see a confirmation message and a summary of the detected configuration and dependencies.
You can review and confirm the configuration file (your WebdriverIO config file) and any other dependencies (path of package.json) identified in your project.

Ensure that your package.json and wdio.conf.js config files are placed at the root level of your project.
Both fields are mandatory for their respective test types. After uploading, click Configure Load to proceed to the next step.
You can also run load tests using the sample scripts if you want to try out the feature before uploading your own files.
Configure environment variables
Use environment variables to avoid hard-coding secrets and to parameterize your test runs:
- Upload one or more `.env` files (up to 5 MB each) by dragging and dropping into the upload area:
  - You can upload `.env`, `.yaml`, `.yml`, `.json`, `.properties`, `.ini`, or `.cfg` files.
  - After upload, the variables become available to your test run.
- Manually add key–value pairs:
  - Click Add Variable, then enter the Key and Value.
  - Your variables are saved for this run and applied during execution.
  - Use the bin icon to delete any key–value pair.

Configure load parameters
Use any of the following load profiles for your load test:
Select Constant VUs from the dropdown menu, and set the number of virtual users, the test duration, and the load zones.

- Virtual Users: Enter the total number of users to simulate during the test.
- Duration: Specify how long the test should run (in minutes).
Select Ramping VUs from the dropdown menu, and configure Ramp-up, Hold, and/or Ramp-down stages.

Each stage has four parameters that determine the shape of the load curve:
- Type: Select Ramp-up, Hold, or Ramp-down.
- Duration: Specify the duration of the stage.
- Start VUs: Specify the number of VUs at the start of the stage.
- Target VUs: Specify the number of VUs at the end of the stage.
You can add more stages as your test requires.
Select Per VU Iterations from the dropdown menu, and set the number of virtual users, iterations, and the maximum test duration.

- Virtual users: Enter the total number of users to simulate during the test.
- Iterations per VU: Specify the number of times each virtual user runs the script.
- Max duration: Specify the upper bound on the test runtime (in minutes).
- Select load zones: Choose the regions where your tests will run. For each load zone, set the percentage of total load to be distributed. The dashboard visualizes the split with a chart for easy reference.

Capture response details
Use the Capture response details toggle to record full request–response data for failing HTTP calls during the test. The toggle is disabled by default.

Set thresholds
Use thresholds to decide test status based on performance and reliability metrics:
- Click Thresholds to expand the section.
- Click Add metric and choose a metric for your criteria.
- Set the condition (for example, is greater than, is less than) and enter a value.
- Repeat for additional metrics as needed.
Available metrics:
- Total requests (count)
- Error % (percentage)
- Error count (count)
- Avg request rate (req/s)
- Avg response time (ms)
- p90 response time (ms)
- p75 LCP (s)
- p75 INP (ms)
- p75 FCP (s)
- p75 TTFB (ms)
- p75 CLS
After configuring all parameters, click Run Test to start your load test.
Download BrowserStack NodeJS SDK
Run the given command to install the BrowserStack NodeJS SDK in your project:
Initialize your project for Load Testing
Run the given command from the root directory of your test project to generate the browserstack-load.yml file which contains the configuration required to define and run your load test:
Configure your Load Test
Open the generated browserstack-load.yml file and update it with the relevant test details. Here’s a sample configuration:
Specify number of virtual users
Set vus to the maximum number of virtual users to simulate during the test.
The max limit for this config is currently 100. Contact us if you want to increase this limit.
Before running a full-scale load test, do a sanity check with a small set of virtual users to validate your configuration and test stability.
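For example, a sanity-check run might set a small `vus` value in `browserstack-load.yml` (a minimal fragment; other keys omitted):

```yaml
# browserstack-load.yml (fragment)
vus: 5   # start small for a sanity check; raise toward your target (max 100 by default)
```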
Specify the tests
- The tests block defines the combination of test types you want to run as part of a hybrid load test. Specify `WebdriverIO` as the `testType` for one sub-block and `JMeter` for another.
- For the `WebdriverIO` sub-block:
  - `browserLoadPercent`: Specify the percentage of total virtual users to be allocated for Selenium tests.
  - `language`: Set this to `nodejs`.
  - `files`: Define the key files needed to install dependencies and identify which tests to execute.
    - Under `dependencies`, include the path to files required for environment setup. For Node.js projects, this is typically `package.json`.
    - Under `testConfigs`, provide the path to your `wdio.conf.js`.
- For the `JMeter` sub-block:
  - `apiLoadPercent`: Specify the percentage of total virtual users to be allocated for JMeter tests.
  - `testScripts`: Set the path to the `.jmx` file.
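Putting these keys together, a `tests` block might look like the sketch below. The file paths are illustrative placeholders for your own project files:

```yaml
tests:
  - testType: WebdriverIO
    browserLoadPercent: 40      # share of total VUs generating browser-level load
    language: nodejs
    files:
      dependencies:
        - package.json          # installed before tests run
      testConfigs:
        - wdio.conf.js          # WebdriverIO configuration to execute
  - testType: JMeter
    apiLoadPercent: 60          # share of total VUs generating API-level load
    testScripts:
      - scripts/load-test.jmx   # illustrative path to your JMeter script
```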
Set Regions
- Use the `loadzone` sub-config to specify each region. For each region, set the traffic percentage using the `percent` sub-config.
- Make sure that the total percentage equals 100.
| Continent | Region | `loadzone` config value |
|---|---|---|
| North America | US East (Virginia) | us-east-1 |
| North America | US West (North California) | us-west-1 |
| Asia | Asia Pacific (Mumbai) | ap-south-1 |
| Asia | Middle East (UAE) | me-central-1 |
| Europe | EU West (London) | eu-west-2 |
| Europe | EU Central (Frankfurt) | eu-central-1 |
| Australia | Asia Pacific (Sydney) | ap-southeast-2 |
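As a sketch, a two-region split might look like this; the region list key name is an assumption, while `loadzone`, `percent`, and the region values come from the table above:

```yaml
loadzone:
  - name: us-east-1    # US East (Virginia)
    percent: 70
  - name: eu-west-2    # EU West (London)
    percent: 30        # percentages must total 100
```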
Set duration
Each virtual user (VU) repeatedly executes the test(s) in a loop until the specified duration ends.
How to set duration values:
- If less than 1 minute: use seconds (`_s`), e.g., `45s`
- If less than 1 hour: use minutes and seconds (`_m_s`), e.g., `12m30s`, `10m`
- If 1 hour or more: use hours, minutes, and seconds (`_h_m_s`), e.g., `1h5m20s`, `1h17m`, `1h`
Default value:
By default, the duration is set to the time required to complete a single iteration of your test(s).
The maximum limit is 20 minutes, which you can extend on request.
You can set a constant duration or configure ramping VUs to model various load scenarios.
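For instance, a constant-duration run could be declared as:

```yaml
duration: 12m30s   # each VU loops the test(s) for 12 minutes 30 seconds
```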
Configure Ramping VUs
If you want to gradually increase or decrease VUs over time, use the loadProfile config instead of duration. This approach lets you model realistic traffic patterns such as ramp-ups, peak holds, and ramp-downs.
- Under the `loadProfile` block, set `type` to `ramping` to enable staged traffic changes.
- Under `stages`, add one or more steps:
  - `type: ramp` increases or decreases VUs from `from` to `to` over `duration`. Result: VUs change linearly until they reach the target.
  - `type: hold` keeps VUs constant for `duration`. Result: VUs stay at the specified level without change.
- Keep stage durations realistic. Very short ramps can cause bursty load that is hard to analyze.
- Ensure your total `vus` is high enough to cover the largest stage `to` value. If not, the test cannot reach the target VUs.
Validation rules:
- Each stage must include `type`, `from`, `to`, and `duration`.
- Use valid time units (`Xs`, `Xm`, `Xh`). Example: `45s`, `10m`, `1h5m`.
- `from` and `to` must be non-negative integers and must not exceed the global `vus` limit.
- Avoid overlapping or contradictory stages (for example, alternating rapid up/down ramps) unless you are testing resilience to bursty traffic.
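A ramping profile following these rules might be sketched as:

```yaml
loadProfile:
  type: ramping
  stages:
    - type: ramp      # ramp up: 0 -> 50 VUs over 5 minutes
      from: 0
      to: 50
      duration: 5m
    - type: hold      # hold at 50 VUs for 10 minutes
      from: 50
      to: 50
      duration: 10m
    - type: ramp      # ramp down: 50 -> 0 VUs over 2 minutes
      from: 50
      to: 0
      duration: 2m
```

Note that `vus` must be at least 50 here to cover the largest stage `to` value.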
Headless mode
- You can configure your WebdriverIO load tests to run in either headless or headful mode using the `headless` capability.
- Headless mode (default): The browser runs without a visible UI, which is faster and consumes fewer resources. Most CI/CD and automated load tests use headless mode.
- Headful mode: The browser UI remains visible during test execution.
- If not specified, tests will run in headless mode by default.
Set environment variables
The env config lets you pass an array of name-value string pairs to set environment variables on the remote machines where tests are executed.
Currently, a maximum of 20 pairs is allowed.
Environment variables can be set using either of the following methods:
Inline variables
Declare each variable with a name and value. BrowserStack injects these pairs into the test environment.
File-based variables
Use sources to reference one or more files containing environment variables (for example, .env files). Use variables to add or override specific pairs. Result: the system loads values from files first, then applies overrides.
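Combining both methods, an `env` block might look like the following sketch; the variable names are illustrative:

```yaml
env:
  sources:
    - .env                  # values loaded from files first
  variables:
    - name: BASE_URL
      value: https://staging.example.com   # overrides any BASE_URL from sources
    - name: THINK_TIME_MS
      value: "500"
```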
Set reporting structure
Use projectName to group related tests under the same project on the dashboard. Use testName to group multiple runs of the same test.
Both projectName and testName must remain consistent across different runs of the same test.
You can use the following characters in projectName and testName:
- Letters (A–Z, a–z)
- Digits (0–9)
- Periods (`.`), colons (`:`), hyphens (`-`), square brackets (`[ ]`), forward slashes (`/`), at signs (`@`), ampersands (`&`), single quotes (`'`), and underscores (`_`)
All other characters are ignored.
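For example, using only allowed characters:

```yaml
projectName: checkout-service
testName: hybrid-peak-load
```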
Capture response details
Set captureErrorResponses to true to record full request–response data for failing HTTP calls during the test.
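For example:

```yaml
captureErrorResponses: true   # record request-response data for failing HTTP calls
```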
Set thresholds
Use thresholds to determine test pass or fail based on metrics.
Available Metric list and corresponding keys for load testing:
- Avg. response time - `avg-response-time`
- p90 response time - `p90-response-time`
- Avg. request rate - `avg-request-rate`
- Total requests - `total-requests`
- Error % - `error-percentage`
- Error count - `error-count`
- p75 LCP - `p75-lcp`
- p75 FCP - `p75-fcp`
- p75 INP - `p75-inp`
- p75 CLS - `p75-cls`
- p75 TTFB - `p75-ttfb`
Available conditions:
`=`, `!=`, `>`, `<`, `>=`, `<=`
We assume the following units for the metrics:
- Avg. response time, p90 response time, INP, TTFB - `ms`
- Avg. request rate - `reqs/s`
- Error % - `%`
- FCP, LCP - `s`
- Total requests, Error count, CLS - no unit
Run the Load Test
Run the given command to start your test:
Check out the FAQs section to get answers to commonly asked questions.
View test results
Once the test starts running, you’ll get a link to the test report.
Prerequisites
- BrowserStack Username and Access key. You can find this under your account profile.
- An existing k6 script and an automated test suite written with the WebdriverIO framework.
Run a test
You can start a new test either from the Quick Start page, or the Load Tests page on the Load testing dashboard.
On the dashboard, click Create Load Test.

Enter a Test Name for your load test, select Hybrid, and click Upload scripts.

Upload your scripts
The product runs a hybrid test using k6 and WebdriverIO automation projects. k6 is used to generate API-level load, and WebdriverIO drives browser-level load to simulate real user interactions.
You see two sections:
- k6 Script: Upload your k6 `.js` file (up to 50 MB). Drag and drop your file or click to select it.
- Automation Project: Select WebdriverIO from the dropdown as your automation framework. Then, upload your zipped project files (`.zip`, up to 250 MB). Drag and drop the file or click to select it.

After you upload your ZIP file, the dashboard automatically validates your project. If validation is successful, you will see a confirmation message and a summary of the detected configuration and dependencies.
You can review and confirm the configuration file (your WebdriverIO config file) and any other dependencies (path of package.json) identified in your project.

Ensure that your package.json and wdio.conf.js config files are placed at the root level of your project.
Both fields are mandatory for their respective test types. After uploading, click Configure Load to proceed to the next step.
You can also run load tests using the sample scripts if you want to try out the feature before uploading your own files.
Configure environment variables
Use environment variables to avoid hard-coding secrets and to parameterize your test runs:
- Upload one or more `.env` files (up to 5 MB each) by dragging and dropping into the upload area:
  - You can upload `.env`, `.yaml`, `.yml`, `.json`, `.properties`, `.ini`, or `.cfg` files.
  - After upload, the variables become available to your test run.
- Manually add key–value pairs:
  - Click Add Variable, then enter the Key and Value.
  - Your variables are saved for this run and applied during execution.
  - Use the bin icon to delete any key–value pair.

Configure load parameters
Use any of the following load profiles for your load test:
Select Constant VUs from the dropdown menu, and set the number of virtual users, the test duration, and the load zones.

- Virtual Users: Enter the total number of users to simulate during the test.
- Duration: Specify how long the test should run (in minutes).
Select Ramping VUs from the dropdown menu, and configure Ramp-up, Hold, and/or Ramp-down stages.

Each stage has four parameters that determine the shape of the load curve:
- Type: Select Ramp-up, Hold, or Ramp-down.
- Duration: Specify the duration of the stage.
- Start VUs: Specify the number of VUs at the start of the stage.
- Target VUs: Specify the number of VUs at the end of the stage.
You can add more stages as your test requires.
Select Per VU Iterations from the dropdown menu, and set the number of virtual users, iterations, and the maximum test duration.

- Virtual users: Enter the total number of users to simulate during the test.
- Iterations per VU: Specify the number of times each virtual user runs the script.
- Max duration: Specify the upper bound on the test runtime (in minutes).
- Select load zones: Choose the regions where your tests will run. For each load zone, set the percentage of total load to be distributed. The dashboard visualizes the split with a chart for easy reference.

Capture response details
Use the Capture response details toggle to record full request–response data for failing HTTP calls during the test. The toggle is disabled by default.

Set thresholds
Use thresholds to decide test status based on performance and reliability metrics:
- Click Thresholds to expand the section.
- Click Add metric and choose a metric for your criteria.
- Set the condition (for example, is greater than, is less than) and enter a value.
- Repeat for additional metrics as needed.
Available metrics:
- Total requests (count)
- Error % (percentage)
- Error count (count)
- Avg request rate (req/s)
- Avg response time (ms)
- p90 response time (ms)
- p75 LCP (s)
- p75 INP (ms)
- p75 FCP (s)
- p75 TTFB (ms)
- p75 CLS
After configuring all parameters, click Run Test to start your load test.
Download BrowserStack NodeJS SDK
Run the given command to install the BrowserStack NodeJS SDK in your project:
Initialize your project for Load Testing
Run the given command from the root directory of your test project to generate the browserstack-load.yml file which contains the configuration required to define and run your load test:
Configure your Load Test
Open the generated browserstack-load.yml file and update it with the relevant test details. Here’s a sample configuration:
Specify number of virtual users
Set vus to the maximum number of virtual users to simulate during the test.
The max limit for this config is currently 100. Contact us if you want to increase this limit.
Before running a full-scale load test, do a sanity check with a small set of virtual users to validate your configuration and test stability.
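For example, a sanity-check run might set a small `vus` value in `browserstack-load.yml` (a minimal fragment; other keys omitted):

```yaml
# browserstack-load.yml (fragment)
vus: 5   # start small for a sanity check; raise toward your target (max 100 by default)
```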
Specify the tests
- The tests block defines the combination of test types you want to run as part of a hybrid load test. Specify `WebdriverIO` as the `testType` for one sub-block and `k6` for another.
- For the `WebdriverIO` sub-block:
  - `browserLoadPercent`: Specify the percentage of total virtual users to be allocated for Selenium tests.
  - `language`: Set this to `nodejs`.
  - `files`: Define the key files needed to install dependencies and identify which tests to execute.
    - Under `dependencies`, include the path to files required for environment setup. For Node.js projects, this is typically `package.json`.
    - Under `testConfigs`, provide the path to your `wdio.conf.js`.
- For the `k6` sub-block:
  - `apiLoadPercent`: Specify the percentage of total virtual users to be allocated for k6 tests.
  - `testScripts`: Set the path to the `.js` file.
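Putting these keys together, a `tests` block might look like the sketch below. The file paths are illustrative placeholders for your own project files:

```yaml
tests:
  - testType: WebdriverIO
    browserLoadPercent: 40      # share of total VUs generating browser-level load
    language: nodejs
    files:
      dependencies:
        - package.json          # installed before tests run
      testConfigs:
        - wdio.conf.js          # WebdriverIO configuration to execute
  - testType: k6
    apiLoadPercent: 60          # share of total VUs generating API-level load
    testScripts:
      - scripts/load-test.js    # illustrative path to your k6 script
```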
Set Regions
- Use the `loadzone` sub-config to specify each region. For each region, set the traffic percentage using the `percent` sub-config.
- Make sure that the total percentage equals 100.
| Continent | Region | `loadzone` config value |
|---|---|---|
| North America | US East (Virginia) | us-east-1 |
| North America | US West (North California) | us-west-1 |
| Asia | Asia Pacific (Mumbai) | ap-south-1 |
| Asia | Middle East (UAE) | me-central-1 |
| Europe | EU West (London) | eu-west-2 |
| Europe | EU Central (Frankfurt) | eu-central-1 |
| Australia | Asia Pacific (Sydney) | ap-southeast-2 |
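As a sketch, a two-region split might look like this; the region list key name is an assumption, while `loadzone`, `percent`, and the region values come from the table above:

```yaml
loadzone:
  - name: us-east-1    # US East (Virginia)
    percent: 70
  - name: eu-west-2    # EU West (London)
    percent: 30        # percentages must total 100
```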
Set duration
Each virtual user (VU) repeatedly executes the test(s) in a loop until the specified duration ends.
How to set duration values:
- If less than 1 minute: use seconds (`_s`), e.g., `45s`
- If less than 1 hour: use minutes and seconds (`_m_s`), e.g., `12m30s`, `10m`
- If 1 hour or more: use hours, minutes, and seconds (`_h_m_s`), e.g., `1h5m20s`, `1h17m`, `1h`
Default value:
By default, the duration is set to the time required to complete a single iteration of your test(s).
The maximum limit is 20 minutes, which you can extend on request.
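For instance, a constant-duration run could be declared as:

```yaml
duration: 12m30s   # each VU loops the test(s) for 12 minutes 30 seconds
```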
Configure Ramping VUs
- Under the `loadProfile` block, set `type` to `ramping` to enable staged traffic changes.
- Under `stages`, add one or more steps:
  - `type: ramp` increases or decreases VUs from `from` to `to` over `duration`. Result: VUs change linearly until they reach the target.
  - `type: hold` keeps VUs constant for `duration`. Result: VUs stay at the specified level without change.
- Keep stage durations realistic. Very short ramps can cause bursty load that is hard to analyze.
- Ensure your total `vus` is high enough to cover the largest stage `to` value. If not, the test cannot reach the target VUs.
Validation rules:
- Each stage must include `type`, `from`, `to`, and `duration`.
- Use valid time units (`Xs`, `Xm`, `Xh`). Example: `45s`, `10m`, `1h5m`.
- `from` and `to` must be non-negative integers and must not exceed the global `vus` limit.
- Avoid overlapping or contradictory stages (for example, alternating rapid up/down ramps) unless you are testing resilience to bursty traffic.
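A ramping profile following these rules might be sketched as:

```yaml
loadProfile:
  type: ramping
  stages:
    - type: ramp      # ramp up: 0 -> 50 VUs over 5 minutes
      from: 0
      to: 50
      duration: 5m
    - type: hold      # hold at 50 VUs for 10 minutes
      from: 50
      to: 50
      duration: 10m
    - type: ramp      # ramp down: 50 -> 0 VUs over 2 minutes
      from: 50
      to: 0
      duration: 2m
```

Note that `vus` must be at least 50 here to cover the largest stage `to` value.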
Headless mode
- You can configure your WebdriverIO load tests to run in either headless or headful mode using the `headless` capability.
- Headless mode (default): The browser runs without a visible UI, which is faster and consumes fewer resources. Most CI/CD and automated load tests use headless mode.
- Headful mode: The browser UI remains visible during test execution.
- If not specified, tests will run in headless mode by default.
Set environment variables
The env config lets you pass an array of name-value string pairs to set environment variables on the remote machines where tests are executed.
Currently, a maximum of 20 pairs is allowed.
Environment variables can be set using either of the following methods:
Inline variables
Declare each variable with a name and value. BrowserStack injects these pairs into the test environment.
File-based variables
Use sources to reference one or more files containing environment variables (for example, .env files). Use variables to add or override specific pairs. Result: the system loads values from files first, then applies overrides.
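Combining both methods, an `env` block might look like the following sketch; the variable names are illustrative:

```yaml
env:
  sources:
    - .env                  # values loaded from files first
  variables:
    - name: BASE_URL
      value: https://staging.example.com   # overrides any BASE_URL from sources
    - name: THINK_TIME_MS
      value: "500"
```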
Set reporting structure
Use projectName to group related tests under the same project on the dashboard. Use testName to group multiple runs of the same test.
Both projectName and testName must remain consistent across different runs of the same test.
You can use the following characters in projectName and testName:
- Letters (A–Z, a–z)
- Digits (0–9)
- Periods (`.`), colons (`:`), hyphens (`-`), square brackets (`[ ]`), forward slashes (`/`), at signs (`@`), ampersands (`&`), single quotes (`'`), and underscores (`_`)
All other characters are ignored.
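For example, using only allowed characters:

```yaml
projectName: checkout-service
testName: hybrid-peak-load
```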
Capture response details
Set captureErrorResponses to true to record full request–response data for failing HTTP calls during the test.
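For example:

```yaml
captureErrorResponses: true   # record request-response data for failing HTTP calls
```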
Set thresholds
Use thresholds to determine test pass or fail based on metrics.
Available Metric list and corresponding keys for load testing:
- Avg. response time - `avg-response-time`
- p90 response time - `p90-response-time`
- Avg. request rate - `avg-request-rate`
- Total requests - `total-requests`
- Error % - `error-percentage`
- Error count - `error-count`
- p75 LCP - `p75-lcp`
- p75 FCP - `p75-fcp`
- p75 INP - `p75-inp`
- p75 CLS - `p75-cls`
- p75 TTFB - `p75-ttfb`
Available conditions:
`=`, `!=`, `>`, `<`, `>=`, `<=`
We assume the following units for the metrics:
- Avg. response time, p90 response time, INP, TTFB - `ms`
- Avg. request rate - `reqs/s`
- Error % - `%`
- FCP, LCP - `s`
- Total requests, Error count, CLS - no unit
Run the Load Test
Run the given command to start your test:
Check out the FAQs section to get answers to commonly asked questions.
View test results
Once the test starts running, you’ll get a link to the test report.