Selenium with Lettuce

Your guide to running Selenium Webdriver tests with Lettuce on BrowserStack.

Lettuce repo on Github


BrowserStack gives you instant access to our Selenium Grid of 2000+ real devices and desktop browsers. Running your Selenium tests with Lettuce on BrowserStack is simple. This guide will help you:

  1. Run your first test
  2. Mark tests as pass / fail
  3. Debug your app


Before you can start running your Selenium tests with Lettuce, ensure you have the Lettuce libraries installed:

# Install using pip

pip install lettuce

Run your first test

To understand how to integrate with BrowserStack, we will look at two things:

  1. A sample test case written in Lettuce with Python
  2. Integration of this sample test case with BrowserStack

Sample test case

The sample Lettuce test case below searches for the string “BrowserStack” on Google, and checks whether the title of the resulting page is “BrowserStack - Google Search”.

# Google Feature
Feature: Google Search Functionality
    Scenario: can find search results
        Given I go to ""
            When field with name "q" is given "BrowserStack"
            Then title becomes "BrowserStack - Google Search"
# Google Steps
from lettuce import step, world
from lettuce_webdriver.util import AssertContextManager
from nose.tools import assert_equals

@step('field with name "(.*?)" is given "(.*?)"')
def fill_in_textfield_by_name(step, field_name, value):
    with AssertContextManager(step):
        elem = world.browser.find_element_by_name(field_name)
        elem.clear()
        elem.send_keys(value)
        elem.submit()

@step(u'Then title becomes "([^"]*)"')
def then_title_becomes(step, result):
    title = world.browser.title
    assert_equals(title, result)

Once we have defined the test case, we are ready to integrate it with BrowserStack.

Integrating with BrowserStack

Note: Running your Selenium tests on BrowserStack requires a BrowserStack Username and Access Key.

To obtain your username and access keys, sign up for a Free Trial or purchase a plan.

We can now integrate our Lettuce test case with BrowserStack. The integration is handled by the following module:

from lettuce import before, after, world
from selenium import webdriver
from browserstack.local import Local
import lettuce_webdriver.webdriver
import os, json

CONFIG_FILE = os.environ['CONFIG_FILE'] if 'CONFIG_FILE' in os.environ else 'config/single.json'
TASK_ID = int(os.environ['TASK_ID']) if 'TASK_ID' in os.environ else 0

with open(CONFIG_FILE) as data_file:
    CONFIG = json.load(data_file)

BROWSERSTACK_USERNAME = os.environ.get('BROWSERSTACK_USERNAME', CONFIG['user'])
BROWSERSTACK_ACCESS_KEY = os.environ.get('BROWSERSTACK_ACCESS_KEY', CONFIG['key'])

bs_local = None


def start_local():
    """Start BrowserStack Local before the test begins."""
    global bs_local
    bs_local = Local()
    bs_local_args = { "key": BROWSERSTACK_ACCESS_KEY, "forcelocal": "true" }
    bs_local.start(**bs_local_args)

def stop_local():
    """Stop BrowserStack Local after the test ends."""
    global bs_local
    if bs_local is not None:
        bs_local.stop()

@before.each_feature
def setup_browser(feature):
    desired_capabilities = CONFIG['environments'][TASK_ID]

    for key in CONFIG["capabilities"]:
        if key not in desired_capabilities:
            desired_capabilities[key] = CONFIG["capabilities"][key]

    if 'BROWSERSTACK_APP_ID' in os.environ:
        desired_capabilities['app'] = os.environ['BROWSERSTACK_APP_ID']

    if "browserstack.local" in desired_capabilities and desired_capabilities["browserstack.local"]:
        start_local()

    world.browser = webdriver.Remote(
        command_executor="https://%s:%s@%s/wd/hub" % (BROWSERSTACK_USERNAME, BROWSERSTACK_ACCESS_KEY, CONFIG['server']),
        desired_capabilities=desired_capabilities
    )

@after.each_feature
def cleanup_browser(feature):
    world.browser.quit()
    stop_local()

The module reads from a config file where you need to put the BrowserStack Hub URL and your credentials:

  "server": "",
  "user": "YOUR_USERNAME",
  "key": "YOUR_ACCESS_KEY",

  "capabilities": {
    "browserstack.debug": true,
    "name": "Bstack-[Lettuce] Sample Test"

  "environments": [{
    "browser": "chrome"

We are now ready to run the test on BrowserStack, using the following command:

# Run using paver
paver run single
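Conceptually, a task like this launches one lettuce process per entry in environments, passing CONFIG_FILE and TASK_ID so each worker drives a different browser. A rough sketch of that idea (the function name and the overridable command are ours, not part of BrowserStack's tooling):

```python
# Hypothetical sketch of what a paver task such as `paver run single` might do:
# spawn one lettuce worker per environment, each with its own TASK_ID.
import json
import os
import subprocess

def run_suite(config_file, command=("lettuce",)):
    with open(config_file) as f:
        environments = json.load(f)["environments"]
    procs = []
    for task_id in range(len(environments)):
        # Each worker reads the same config but selects its own environment.
        env = dict(os.environ, CONFIG_FILE=config_file, TASK_ID=str(task_id))
        procs.append(subprocess.Popen(list(command), env=env))
    return [p.wait() for p in procs]  # exit codes, one per environment
```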

Mark tests as pass / fail

BrowserStack provides a comprehensive REST API to access and update information about your tests. Shown below is a sample code snippet which allows you to mark your tests as pass or fail based on the assertions in your Lettuce test cases.

import requests
requests.put('<session-id>.json', data={"status": "completed", "reason": ""})

The status value can be either completed or error. Optionally, a reason can also be passed. A full reference of our REST API can be found here.
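For instance, you could wrap the URL and payload construction in a small helper and call it from an after-hook when an assertion fails. The host and path below follow BrowserStack's Automate REST API as we understand it; treat the exact endpoint as an assumption and confirm it against the REST API reference:

```python
# Hypothetical helper that builds the PUT request used to mark a session.
def build_status_update(session_id, status, reason=""):
    if status not in ("completed", "error"):
        raise ValueError("status must be 'completed' or 'error'")
    # Endpoint path is an assumption; see the Automate REST API reference.
    url = "https://api.browserstack.com/automate/sessions/%s.json" % session_id
    return url, {"status": status, "reason": reason}

# Usage (requires the `requests` package and your credentials):
# url, data = build_status_update("<session-id>", "error", "Title assertion failed")
# requests.put(url, auth=("YOUR_USERNAME", "YOUR_ACCESS_KEY"), data=data)
```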

Debug your app

BrowserStack provides a range of debugging tools to help you quickly identify and fix bugs you discover through your automated tests.

  • Text Logs

Text Logs are a comprehensive record of your test. They are used to identify all the steps executed in the test and troubleshoot errors for the failed step. Text Logs are accessible from the Automate dashboard or via our REST API.

  • Visual Logs

Visual Logs automatically capture the screenshots generated at every Selenium command run through your Python script. Visual logs help with debugging the exact step and the page where failure occurred. They also help identify any layout or design related issues with your web pages on different browsers.

Visual Logs are disabled by default. To enable Visual Logs, set the browserstack.debug capability to true:

capabilities = {
    "browserstack.debug": "true"
}

  • Video Recording

Every test run on the BrowserStack Selenium grid is recorded exactly as it is executed on our remote machine. This feature is particularly helpful whenever a browser test fails. You can access videos from Automate Dashboard for each session. You can also download the videos from the Dashboard or retrieve a link to download the video using our REST API.

Note: Video recording increases test execution time slightly. You can disable this feature by setting the capability to false.

capabilities = {
    "": "false"
}

In addition to these logs, BrowserStack also provides Raw logs, Network logs, Console logs, Selenium logs, Appium logs and Interactive sessions. Complete details on enabling all the debugging options can be found here.
