Selenium with Protractor

A guide to running Selenium Webdriver tests with Protractor on BrowserStack.

Note: Code samples in this guide can be found in the protractor-browserstack sample repo on GitHub

Introduction

BrowserStack gives you instant access to our Selenium Grid of 2000+ real devices and desktop browsers. Running your Selenium tests with Protractor on BrowserStack is simple. This guide will help you:

  1. Run your first test
  2. Integrate your tests with BrowserStack
  3. Mark tests as passed or failed
  4. Debug your app

Prerequisites

Before you can start running your Selenium tests with Protractor, install Protractor using npm:

# Install using npm
npm install protractor

Run your first test

Note: Testing on BrowserStack requires a username and an access key, which can be found in your account settings.
If you have not created an account yet, you can sign up for a Free Trial or purchase a plan.

To run your first Protractor test on BrowserStack, follow the steps below:

  1. Clone the protractor-browserstack sample repo on GitHub using:

    git clone https://github.com/browserstack/protractor-browserstack.git
    cd protractor-browserstack
    
  2. Install the dependencies using npm install
  3. Set up your credentials in the protractor-browserstack/conf/single.conf.js file as shown below:

    single.conf.js
    exports.config = {
      ...
      'browserstackUser': 'YOUR_USERNAME',
      'browserstackKey': 'YOUR_ACCESS_KEY',
      ...
    }
    

    Alternatively, you can set the environment variables in your system as shown below:

    export BROWSERSTACK_USERNAME="YOUR_USERNAME"
    export BROWSERSTACK_ACCESS_KEY="YOUR_ACCESS_KEY"
    

    Note: Make sure the environment variables are set permanently, for example by adding the export statements to your shell profile.

  4. Run your first test using the following command:
    ./node_modules/.bin/protractor conf/single.conf.js
    

Once the test has completed, you can visit the BrowserStack Automate Dashboard to view its results.

Details of your first test

The sample test that you just ran can be found in protractor-browserstack/specs/single.js. The test case below searches for the string “BrowserStack” on Google, and checks if the title of the resulting page is “BrowserStack - Google Search”:

describe('Google\'s Search Functionality', function() {
  it('can find search results', function() {
    browser.driver.get('https://google.com/ncr').then(function() {
      browser.driver.findElement(by.name('q')).sendKeys('BrowserStack\n').then(function() {
        expect(browser.driver.getTitle()).toEqual('BrowserStack - Google Search');
      });
    });
  });
});
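Protractor also supports plain async/await in place of explicit promise chaining when the WebDriver control flow is disabled (by setting SELENIUM_PROMISE_MANAGER: false in the config). Under that assumption, the same test could be sketched as:

```javascript
// Equivalent spec using async/await (assumes SELENIUM_PROMISE_MANAGER: false)
describe('Google\'s Search Functionality', function() {
  it('can find search results', async function() {
    await browser.driver.get('https://google.com/ncr');
    await browser.driver.findElement(by.name('q')).sendKeys('BrowserStack\n');
    expect(await browser.driver.getTitle()).toEqual('BrowserStack - Google Search');
  });
});
```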

Integrating your tests with BrowserStack

In the sample repository, the conf/single.conf.js file configures your test to run on BrowserStack. The relevant sections of the file are shown below:

exports.config = {
  'specs': [ '../specs/single.js' ],  // Specifies the test spec files to run

  // The following two variables should be set with BrowserStack credentials for the test to run on BrowserStack devices
  'browserstackUser': process.env.BROWSERSTACK_USERNAME || 'BROWSERSTACK_USERNAME',
  'browserstackKey': process.env.BROWSERSTACK_ACCESS_KEY || 'BROWSERSTACK_ACCESS_KEY',

  // The following BrowserStack capabilities are set for this run. You can set more capabilities using https://www.browserstack.com/automate/capabilities
  'capabilities': {
    'build': 'protractor-browserstack',
    'name': 'single_test',
    'browserName': 'chrome',
    'resolution': '1024x768',
    'browserstack.debug': 'true'
  },

  // Code to mark the status of test on BrowserStack based on test assertions
  onComplete: function (passed) {
    if (!passed) {
      browser.executeScript('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"failed","reason": "At least 1 assertion has failed"}}');
    }
    if (passed) {
      browser.executeScript('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"passed","reason": "All assertions passed"}}');
    }
  }
};

Mark tests as passed or failed

BrowserStack cannot mark your tests as passed or failed on its own, because only the test framework knows whether your assertions have passed.

In the *.conf.js file that is used to run your Protractor tests, the following snippet marks tests as passed or failed depending on the assertion status of your tests:

onComplete: function (passed) {
  if (!passed) {
    browser.executeScript('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"failed","reason": "At least 1 assertion has failed"}}');
  }
  if (passed) {
    browser.executeScript('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"passed","reason": "All assertions passed"}}');
  }
}

The onComplete function above is invoked after every Protractor test run. Based on the status of the assertions, a JavaScript executor is fired that marks the status of your test on BrowserStack.

You can also mark tests as passed or failed using our REST API, at any point during the test or after it has concluded. You can read more about marking tests using the REST API and use it if it fits your use case.

Debug your app

BrowserStack provides a range of debugging tools to help you quickly identify and fix bugs you discover through your automated tests. Learn more about how to debug tests on BrowserStack using the Automate Dashboard.

Text logs

Text Logs are a comprehensive record of your test. They are used to identify all the steps executed in the test and troubleshoot errors for the failed step. Text Logs are accessible from the Automate dashboard or via our REST API.

Visual logs

Visual Logs automatically capture the screenshots generated at every Selenium command run through your Protractor tests. Visual logs help with debugging the exact step and the page where failure occurred. They also help identify any layout or design related issues with your web pages on different browsers.

Visual Logs are disabled by default. To enable Visual Logs, set the browserstack.debug capability to true.

'capabilities': {
  'browserstack.debug': 'true'
}

Video recording

Every test run on the BrowserStack Selenium grid is recorded exactly as it is executed on our remote machine. This feature is particularly helpful whenever a browser test fails. You can access videos from Automate Dashboard for each session. You can also download the videos from the Dashboard or retrieve a link to download the video using our REST API.

Note: Video recording increases test execution time slightly. You can disable this feature by setting the browserstack.video capability to false.
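Following the capability snippets shown elsewhere in this guide, disabling video recording would look like this:

```javascript
'capabilities': {
  'browserstack.video': 'false' // no video will be recorded for this session
}
```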

Console logs

Console Logs capture the browser's console output at various steps of the test to troubleshoot JavaScript issues. You can retrieve Console Logs using a URL that you can get from our REST API. You can also download logs from the Automate Dashboard.

Console Logs are enabled with log level set to ‘errors’ by default. To set different log levels, you need to use the capability browserstack.console with values disable, errors, warnings, info or verbose, as shown below:

'capabilities': {
  'browserstack.console': 'errors' // You can choose the log-level here
}

Network logs

Network Logs capture the browser’s performance data such as network traffic, latency, HTTP requests and responses in the HAR format. You can download network logs using a link that you can get from our REST API or from the Automate Dashboard. You can visualize HAR files using the HAR Viewer.

Network Logs are disabled by default. To enable Network Logs use the capability browserstack.networkLogs with the value true, as shown below:

'capabilities': {
  'browserstack.networkLogs': 'true'
}

In addition to these logs, BrowserStack also provides Raw logs, Selenium logs, Appium logs and Interactive sessions. Complete details on enabling each of these debugging options are available in our documentation.

Next steps

Once you have successfully run your first test on BrowserStack, you can explore the rest of the BrowserStack Automate documentation for more advanced workflows.
