Run cross-browser Puppeteer tests in parallel
A guide to running your Puppeteer tests in parallel across 100+ desktop browsers on BrowserStack.
Run our sample cross-browser test in parallel
Follow the steps below to run a sample Puppeteer test on BrowserStack infrastructure across multiple OS/browser combinations, all in parallel, to speed up your build:
Step 1: Clone our sample repository and install dependencies (if not already done)
All our sample tests are available in this GitHub repository. The first step is to clone the repository to your system and install its dependencies, as shown below:
# The following command will clone the repository on your system
git clone https://github.com/browserstack/puppeteer-browserstack.git
cd puppeteer-browserstack
npm install
Step 2: Configure BrowserStack credentials (if not already done)
If you have not created an account yet, you can sign up for a Free Trial or purchase a plan.
All our sample scripts need your BrowserStack credentials to run. Please set the environment variables BROWSERSTACK_USERNAME and BROWSERSTACK_ACCESS_KEY with your credentials as shown below:
export BROWSERSTACK_USERNAME="YOUR_USERNAME"
export BROWSERSTACK_ACCESS_KEY="YOUR_ACCESS_KEY"
Alternatively, you can put your credentials in the browserstack.username and browserstack.accessKey capabilities in the parallel_test.js file in the sample repository.
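For instance, a capability object in parallel_test.js could carry the credentials inline instead of reading them from the environment. A minimal sketch (the placeholder values are assumptions you must replace with your own credentials):

const cap = {
  'browser': 'chrome',
  'browser_version': 'latest',
  'os': 'osx',
  'os_version': 'catalina',
  'name': 'Chrome latest on Catalina',
  'build': 'puppeteer-build-2',
  'browserstack.username': 'YOUR_USERNAME',   // placeholder: your BrowserStack username
  'browserstack.accessKey': 'YOUR_ACCESS_KEY' // placeholder: your BrowserStack access key
};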
Step 3: Run the sample parallel cross-browser test
After you have configured the credentials and installed the npm dependencies, you can invoke your first Puppeteer test on BrowserStack using the following:
node parallel_test.js
After the tests have run, you can access the results on the BrowserStack Automate dashboard.
Details of the cross-browser parallel test
In this section, we will walk you through the details of the test that you just ran and explain the changes you need to make in your existing Puppeteer scripts so that they run on BrowserStack.
The sample script is shown below (view it on GitHub):
const puppeteer = require('puppeteer');
const expect = require('chai').expect;
const main = async (cap) => {
  console.log("Starting test -->", cap['name']);
  cap['browserstack.username'] = process.env.BROWSERSTACK_USERNAME || 'YOUR_USERNAME';
  cap['browserstack.accessKey'] = process.env.BROWSERSTACK_ACCESS_KEY || 'YOUR_ACCESS_KEY';
  const browser = await puppeteer.connect({
    // The BrowserStack CDP endpoint gives you a `browser` instance based on the `caps` that you specified
    browserWSEndpoint: `wss://cdp.browserstack.com/puppeteer?caps=${encodeURIComponent(JSON.stringify(cap))}`,
  });
  /*
   * The BrowserStack-specific code ends here. Everything below is your test script.
   * Here, we have a simple script that opens duckduckgo.com, searches for the word BrowserStack, and asserts the result.
   */
  const page = await browser.newPage();
  await page.goto('https://www.duckduckgo.com');
  const element = await page.$('[name="q"]');
  await element.click();
  await element.type('BrowserStack');
  await element.press('Enter'); // submits the search; note that typing '\n' would also press Enter, so we do only one of the two
  await page.waitForNavigation();
  const title = await page.title(); // `page.title()` takes no arguments
  console.log(title);
  try {
    expect(title).to.equal("BrowserStack at DuckDuckGo", 'Expected page title is incorrect!');
    // The following line marks the status of the test on BrowserStack as 'passed'. You can use this code in your after hook, after each test
    await page.evaluate(_ => {}, `browserstack_executor: ${JSON.stringify({action: 'setSessionStatus', arguments: {status: 'passed', reason: 'Title matched'}})}`);
  } catch {
    await page.evaluate(_ => {}, `browserstack_executor: ${JSON.stringify({action: 'setSessionStatus', arguments: {status: 'failed', reason: 'Title did not match'}})}`);
  }
  await browser.close();
};
// The following capabilities array contains the list of OS/browser environments where you want to run your tests. You can alter this list according to your needs
const capabilities = [
  {
    'browser': 'chrome',
    'browser_version': 'latest', // We support Chrome v72 and above. You can choose `latest`, `latest-beta`, `latest-1`, `latest-2`, and so on, in this capability
    'os': 'osx',
    'os_version': 'catalina',
    'name': 'Chrome latest on Catalina', // The name of your test and build. See browserstack.com/docs/automate/puppeteer/organize-tests for more details
    'build': 'puppeteer-build-2'
  },
  {
    'browser': 'firefox',
    'browser_version': 'latest', // We support Firefox v86 and above. You can choose `latest`, `latest-beta`, `latest-1`, `latest-2`, and so on, in this capability
    'os': 'osx',
    'os_version': 'catalina',
    'name': 'Firefox latest on Catalina',
    'build': 'puppeteer-build-2'
  },
  {
    'browser': 'edge',
    'browser_version': 'latest', // We support Edge v80 and above. You can choose `latest`, `latest-beta`, `latest-1`, `latest-2`, and so on, in this capability
    'os': 'osx',
    'os_version': 'catalina',
    'name': 'Edge latest on Catalina',
    'build': 'puppeteer-build-2'
  },
  {
    'browser': 'chrome',
    'browser_version': 'latest-1',
    'os': 'Windows',
    'os_version': '10',
    'name': 'Chrome latest-1 on Win10',
    'build': 'puppeteer-build-2'
  },
  {
    'browser': 'firefox',
    'browser_version': 'latest-beta',
    'os': 'Windows',
    'os_version': '10',
    'name': 'Firefox beta on Win10',
    'build': 'puppeteer-build-2'
  },
  {
    'browser': 'edge',
    'browser_version': 'latest',
    'os': 'Windows',
    'os_version': '10',
    'name': 'Edge latest on Win10',
    'build': 'puppeteer-build-2'
  }
];
// The following code loops through the capabilities array defined above and runs your code against each environment that you have specified, in parallel
capabilities.forEach(async (cap) => {
  await main(cap);
});
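Note that forEach does not wait for its async callback, so every call to main(cap) starts immediately and all the sessions run in parallel. If you also want the Node.js process to report a failure when any session throws (useful in CI), one option is to collect the promises with Promise.all. A minimal sketch, not part of the sample repository:

// Alternative runner: start all sessions in parallel, wait for all of them,
// and exit non-zero if any session throws
Promise.all(capabilities.map((cap) => main(cap)))
  .then(() => console.log('All parallel sessions finished'))
  .catch((err) => {
    console.error('At least one session failed:', err);
    process.exitCode = 1; // lets CI detect the failure
  });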
You can learn more about how to run your existing Puppeteer scripts on BrowserStack, and about how to test your privately hosted websites.
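The only BrowserStack-specific change in an existing script is how you obtain the browser instance: instead of calling puppeteer.launch(), you connect to the BrowserStack CDP endpoint with your desired capabilities. A minimal sketch (the capability values here are illustrative, not prescriptive):

const puppeteer = require('puppeteer');

const caps = {
  'browser': 'chrome', // illustrative values; pick the environments you need
  'browser_version': 'latest',
  'os': 'Windows',
  'os_version': '10',
  'name': 'My existing test',
  'build': 'my-first-browserstack-build',
  'browserstack.username': process.env.BROWSERSTACK_USERNAME,
  'browserstack.accessKey': process.env.BROWSERSTACK_ACCESS_KEY
};

(async () => {
  // Before: const browser = await puppeteer.launch();
  const browser = await puppeteer.connect({
    browserWSEndpoint: `wss://cdp.browserstack.com/puppeteer?caps=${encodeURIComponent(JSON.stringify(caps))}`
  });
  // ...your existing test code runs unchanged against the remote browser...
  await browser.close();
})();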
Next Steps
- Learn how to test localhost and staging websites
- Migrate your existing test suites to run on BrowserStack
- Run your CodeceptJS Puppeteer tests on BrowserStack
- Run your Jest Puppeteer tests on BrowserStack