Integrate Your Test Suite with BrowserStack
BrowserStack Pytest SDK supports a plug-and-play integration. Run your entire test suite in parallel with a few steps!
Prerequisites
Looking for a starter project? Get started with our Pytest sample project.
Integration Steps
Complete the following steps to integrate your Pytest test suite using BrowserStack SDK.
Set BrowserStack credentials
Saving your BrowserStack credentials as environment variables makes it simple to run your test suite from your local or CI environment.
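For example, the SDK picks up credentials from the BROWSERSTACK_USERNAME and BROWSERSTACK_ACCESS_KEY environment variables; you can set them as follows (substitute your own values):
# macOS / Linux
export BROWSERSTACK_USERNAME="YOUR_USERNAME"
export BROWSERSTACK_ACCESS_KEY="YOUR_ACCESS_KEY"
# Windows PowerShell
$env:BROWSERSTACK_USERNAME="YOUR_USERNAME"
$env:BROWSERSTACK_ACCESS_KEY="YOUR_ACCESS_KEY"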
Install BrowserStack Pytest SDK
Execute the following commands to install BrowserStack Pytest SDK for plug-and-play integration of your test suite with BrowserStack.
python3 -m pip install browserstack-sdk
browserstack-sdk setup --framework "pytest" --username "YOUR_USERNAME" --key "YOUR_ACCESS_KEY"
Unable to install BrowserStack SDK?
If you can’t install the BrowserStack SDK due to sudo privilege issues, create a virtual environment and run the installation commands above again.
macOS or Linux:
python3 -m venv env
source env/bin/activate
Windows:
python3 -m venv env
env\Scripts\activate
Create your BrowserStack config file
Once you have installed the SDK, create a browserstack.yml config file at the root level of your project. This file holds all the required capabilities to run tests on BrowserStack.
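For example, a minimal file can start with just your credentials (or omit them entirely if you set the environment variables above; the key names below assume the standard browserstack.yml schema):
userName: YOUR_USERNAME
accessKey: YOUR_ACCESS_KEY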
Set platforms to test on
Set the browsers/devices you want to test under the platforms object. Our config follows W3C formatted capabilities.
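For instance, a sketch of a platforms block covering one Windows and one macOS browser (the values are illustrative; use the Capability Generator for the full list of supported combinations):
platforms:
  - os: Windows
    osVersion: 11
    browserName: Chrome
    browserVersion: latest
  - os: OS X
    osVersion: Ventura
    browserName: Safari
    browserVersion: latest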
Do you want to dynamically configure platforms?
To dynamically configure platforms across different tests, you can comment out the platforms capability while still passing platform-specific capabilities.
Set number of parallel threads per platform
The parallelsPerPlatform property determines the number of parallel threads to be executed. BrowserStack’s SDK runner will select the best strategy based on the configured value.
Example 1: If you have configured 3 platforms and set parallelsPerPlatform as 2, a total of 6 (3 x 2) parallel threads will be used on BrowserStack.
Example 2: If you have configured 1 platform and set parallelsPerPlatform as 15, a total of 15 (1 x 15) parallel threads will be used on BrowserStack.
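For instance, to match Example 1 above, configure three entries under platforms and add the following line to browserstack.yml:
# 3 platforms x 2 threads each = 6 parallel threads on BrowserStack
parallelsPerPlatform: 2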
Do you want to perform cross-browser testing without test level parallelization?
Remove the parallelsPerPlatform capability from the configuration file.
Do you want to test parallelization without performing cross-browser testing?
Remove or comment out the platforms capability while keeping the parallelsPerPlatform capability intact in the configuration file.
Do you want to skip cross-browser testing as well as parallelization?
Remove or comment out the platforms and parallelsPerPlatform capabilities from the configuration file.
BrowserStack Reporting
You can leverage BrowserStack’s extensive reporting features using the following capabilities:
| buildIdentifier | Description | Generated build name on BrowserStack dashboard |
|---|---|---|
| ${BUILD_NUMBER} (Default) | If build is triggered locally, an incremental counter is appended. If build is triggered with CI tools, CI-generated build number is appended. | bstack-demo 1, bstack-demo CI 1395 |
| ${DATE_TIME} | The timestamp of run time is appended to the build. | bstack-demo 29-Nov-20:44 |
Advanced use cases for Build Names
Custom formatting of Build Name
Prefix buildIdentifier with the desired characters, for example # or :, as shown below:
buildName: bstack-demo
buildIdentifier: '#${BUILD_NUMBER}'
Re-run tests in a build
You can re-run selected tests from a build using any of the following options:
Option 1: Set the existing build name in the BROWSERSTACK_BUILD_NAME environment variable and prepend it to your test run command to re-run tests in the same build:
macOS/Linux:
BROWSERSTACK_BUILD_NAME="bstack-demo 123" browserstack-sdk pytest -s tests/bstack-sample-test.py
Windows Powershell:
$env:BROWSERSTACK_BUILD_NAME="bstack-demo 123"; browserstack-sdk pytest -s tests/bstack-sample-test.py
Windows cmd:
set BROWSERSTACK_BUILD_NAME="bstack-demo 123" && browserstack-sdk pytest -s tests/bstack-sample-test.py
Option 2: Set the build name as a combination of buildName and buildIdentifier, as seen on the dashboard, and set buildIdentifier as null:
buildName: bstack-demo 123
buildIdentifier: null
Option 3: Set the buildIdentifier as the build number or time of the required build as seen on the dashboard:
buildName: bstack-demo
buildIdentifier: 123
Do you want to enable/disable auto-marking of test status and session?
The sessionName and sessionStatus capabilities hold the name and status of your test sessions, respectively. They are automatically picked from your test class/spec names and statuses and do not need to be set manually when using the BrowserStack SDK. To override the sessionName and sessionStatus capabilities, use the following in your browserstack.yml file:
testContextOptions:
  skipSessionName: true
  skipSessionStatus: true
Use additional debugging features
By default, BrowserStack provides prettified session logs, screenshots on every failed selenium command, and a video of the entire test. Additionally, you can enable the following features:
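For example, the following capabilities are commonly used in browserstack.yml for extra debugging output (a sketch; confirm the exact names and values with the Capability Generator):
debug: true        # visual logs: a screenshot for every Selenium command
networkLogs: true  # capture network logs in HAR format
consoleLogs: info  # browser console log level, e.g. errors, warnings, info, verbose, disable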
Create browserstack.yml file
Copy the following code snippet and create a browserstack.yml file in the root folder of your test suite.
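A representative browserstack.yml is sketched below; treat it as a starting point and replace the values and platform list with your own, ideally generated via the Capability Generator mentioned next:
userName: YOUR_USERNAME
accessKey: YOUR_ACCESS_KEY
framework: pytest
projectName: BrowserStack Samples
buildName: bstack-demo
buildIdentifier: '#${BUILD_NUMBER}'
platforms:
  - os: Windows
    osVersion: 11
    browserName: Chrome
    browserVersion: latest
  - os: OS X
    osVersion: Ventura
    browserName: Safari
    browserVersion: latest
parallelsPerPlatform: 1
browserstackLocal: true
debug: true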
Use our Capability Generator to select from a comprehensive set of options you can use to customize your tests.
Run your test suite
Prepend browserstack-sdk to your existing run commands to execute your tests on BrowserStack using the Pytest SDK.
Before
pytest <path-to-test-files>
After
browserstack-sdk pytest <path-to-test-files>
To find out the location of the BrowserStack SDK log files, refer to BrowserStack SDK Log Files. If you are looking for more information, see FAQ documentation.
After you run your test, visit the Automate dashboard to view your test results.
Advanced features and use cases
Here’s a list of features and capabilities you may find useful.
Accept insecure certificates
The acceptInsecureCerts capability suppresses browser popups warning about self-signed certificates, which are usually found in staging environments.
| Capability | Expected values |
|---|---|
| acceptInsecureCerts | A boolean. Default is False. Set to True if you want to accept all SSL certificates. |
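For example, in browserstack.yml:
acceptInsecureCerts: true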
Change desktop resolution
The resolution capability changes the default desktop screen resolution for your tests on BrowserStack.
| Capability | Description | Expected values |
|---|---|---|
| resolution | Set the resolution of your VM before beginning your test | A string. Default resolution is 1024x768. Supported resolutions: Windows (XP, 7): 800x600, 1024x768, 1280x800, 1280x1024, 1366x768, 1440x900, 1680x1050, 1600x1200, 1920x1200, 1920x1080, and 2048x1536. Windows (8, 8.1, 10): 1024x768, 1280x800, 1280x1024, 1366x768, 1440x900, 1680x1050, 1600x1200, 1920x1200, 1920x1080, and 2048x1536. OS X (Sequoia, Sonoma, Ventura, Monterey, Big Sur, Catalina, Mojave, and High Sierra): 1024x768, 1280x960, 1280x1024, 1600x1200, 1920x1080, 2560x1440, 2560x1600, and 3840x2160. OS X (all other versions): 1024x768, 1280x960, 1280x1024, 1600x1200, and 1920x1080. |
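For example, to run on a Full HD desktop, set the following in browserstack.yml:
resolution: 1920x1080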
Others
Below are a few additional links to documentation pages that might help with your test scenarios:
Next steps
Once you have successfully integrated your test suite with BrowserStack, you might want to check the following:
- Generate a list of capabilities that you want to use in tests
- Find information about your Projects, Builds and Sessions using our REST APIs
- Set up your CI/CD: Jenkins, Bamboo, TeamCity, Azure, CircleCI, BitBucket, TravisCI, GitHub Actions