Get started with screen reader automation in Accessibility Testing
Learn how to set up and use BrowserStack’s screen reader automation for Windows (NVDA).
Screen reader automation is currently in the alpha phase. To get access, raise a request.
Screen reader automation is supported only on Windows systems, and only for the NVDA screen reader.
Screen Reader Automation for Windows (NVDA) allows you to automate accessibility testing by simulating screen reader interactions, capturing screen reader output, and validating accessibility metadata directly within your test scripts. This guide will help you get started with setting up and using Screen Reader Automation.
How screen reader automation works
BrowserStack provides the `browserstack_executor` command, which gives you granular control over the screen reader environment within your tests. It is a custom extension that allows your test scripts to send commands to control the screen reader on real BrowserStack devices. The command can be used only on Windows systems; however, it is language-agnostic.
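For reference, the sketch below shows how such a command is typically sent from a Node.js Selenium test. The hub URL, capability values, and environment variable names are assumptions based on common BrowserStack Automate setups; adjust them for your account and project.

```javascript
// A minimal sketch (Node.js with the selenium-webdriver package) of sending a
// browserstack_executor command from a test script. Capability values and the
// hub URL below are assumed defaults, not prescribed by this guide.
const { Builder } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder()
    .usingServer('https://hub-cloud.browserstack.com/wd/hub') // BrowserStack Automate hub (assumed)
    .withCapabilities({
      browserName: 'Chrome',
      'bstack:options': {
        os: 'Windows',
        osVersion: '11',
        userName: process.env.BROWSERSTACK_USERNAME,
        accessKey: process.env.BROWSERSTACK_ACCESS_KEY,
      },
    })
    .build();

  try {
    await driver.get('https://www.example.com');

    // Every screen reader command is a JSON payload passed to executeScript
    // with the "browserstack_executor:" prefix. Here we enable NVDA.
    await driver.executeScript(
      `browserstack_executor: {"action": "screenReader", "arguments": {"enable": "true"}}`
    );
  } finally {
    await driver.quit();
  }
})();
```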
This feature provides the following capabilities:
- Check if your website has screen-reader-accessible elements and components: Verify that all elements:
  - Have a meaningful speech output.
  - Are focusable by the screen reader.
  - Are interactable with the screen reader.
- Check if your website is functional for screen reader users across user workflows: Test complete user workflows to ensure that users can navigate and interact with your website entirely using a screen reader (see the sketch after this list).
- Ensure that changes deployed have not led to regressions in screen reader accessibility: Whenever a change is deployed to your website, you can run your screen reader automation tests to ensure that the changes have not introduced any new accessibility issues.
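As an illustration of workflow testing, the sketch below walks a page with NVDA and checks the announcement for a hypothetical "Sign in" button before activating it. The page structure, number of navigation steps, and expected announcement are assumptions for illustration only; the individual commands used here are described in detail in the sections that follow.

```javascript
// A minimal sketch of a screen-reader-driven workflow check. It assumes a
// Selenium `driver` already connected to a Windows machine on BrowserStack,
// and a hypothetical page whose third focusable item is a "Sign in" button.
async function checkSignInWorkflow(driver) {
  // Turn NVDA on for this session.
  await driver.executeScript(
    `browserstack_executor: {"action": "screenReader", "arguments": {"enable": "true"}}`
  );

  // Move the screen reader focus forward a few times, as a user would.
  for (let i = 0; i < 3; i++) {
    await driver.executeScript(
      `browserstack_executor: {"action": "screenReaderAction", "arguments": {"command": "next_focusable_item"}}`
    );
  }

  // Read back what NVDA announced for the currently focused element.
  const spoken = await driver.executeScript(
    `browserstack_executor: {"action": "screenReaderSpokenDescription"}`
  );

  // A workflow-level check: the button must be announced meaningfully.
  if (!spoken || !spoken.toLowerCase().includes('sign in')) {
    throw new Error(`Expected the "Sign in" button to be announced, got: "${spoken}"`);
  }

  // Activate the focused element, completing the step a real user would take.
  await driver.executeScript(
    `browserstack_executor: {"action": "screenReaderAction", "arguments": {"command": "activate"}}`
  );

  // Turn NVDA off once the workflow step is verified.
  await driver.executeScript(
    `browserstack_executor: {"action": "screenReader", "arguments": {"enable": "false"}}`
  );
}
```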
Implement screen reader automation
Integrate the `browserstack_executor` command into your test scripts to perform the following actions:
Enable or disable NVDA
Use the `screenReader` action to enable or disable NVDA.
- To enable NVDA:

  ```javascript
  await driver.executeScript(
    `browserstack_executor: {"action": "screenReader", "arguments": {"enable": "true"}}`
  );
  ```

- To disable NVDA:

  ```javascript
  await driver.executeScript(
    `browserstack_executor: {"action": "screenReader", "arguments": {"enable": "false"}}`
  );
  ```
Simulate screen reader shortcuts
Use the `screenReaderAction` action to simulate common screen reader shortcuts, such as navigate, activate, or scroll.
- To simulate a navigate next action:

  ```javascript
  await driver.executeScript(
    `browserstack_executor: {"action": "screenReaderAction", "arguments": {"command": "next_focusable_item"}}`
  );
  ```

- To simulate an activate action:

  ```javascript
  await driver.executeScript(
    `browserstack_executor: {"action": "screenReaderAction", "arguments": {"command": "activate"}}`
  );
  ```

- To simulate a navigate previous action:

  ```javascript
  await driver.executeScript(
    `browserstack_executor: {"action": "screenReaderAction", "arguments": {"command": "previous_focusable_item"}}`
  );
  ```
List of supported shortcuts
The following NVDA shortcuts are supported for screen reader automation:
| Action | Shortcut command |
|---|---|
| Next Element | `next_focusable_item` |
| Previous Element | `previous_focusable_item` |
| Select/Activate | `activate` |
Capture screen reader spoken output
Use the `screenReaderSpokenDescription` action to capture the last spoken output from the screen reader during functional testing.
To capture the last spoken output:
```javascript
const spokenOutput = await driver.executeScript(
  `browserstack_executor: {"action": "screenReaderSpokenDescription"}`
);
```
The command returns the spoken output as a string, which you can then log, use for assertions, or save to a file for offline review.
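For instance, here is a minimal sketch (assuming Node.js with the built-in `fs` module) of logging the captured output and appending it to a local file for offline review; the file name is an arbitrary choice for this example.

```javascript
const fs = require('fs');

// Capture the most recent NVDA announcement from the session.
const spoken = await driver.executeScript(
  `browserstack_executor: {"action": "screenReaderSpokenDescription"}`
);

// Log it for quick inspection in the test output...
console.log(`NVDA announced: ${spoken}`);

// ...and append it to a local file for offline review.
fs.appendFileSync('screen-reader-output.log', `${spoken}\n`);
```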
Add accessibility assertions
With the captured screen reader data, you can add powerful assertions to your test scripts. This enables you to verify critical accessibility aspects:
- Focusability: Verify that all relevant UI elements receive focus and are accessible to the screen reader.
- Spoken Output Verification: Check that elements have the correct accessibility metadata, such as labels, roles, and hints, and that the actual screen reader output for an element matches your expected accessibility label or announcement.
- Traversal Order: Record the screen reader traversal sequence and ensure that the focus order of UI elements follows a logical reading flow by comparing it to the DOM structure or the visual layout.
Enable assertions
You can enable assertions in your test scripts by using the `browserstack_executor` command with the `screenReaderSpokenDescription` action to capture the spoken output of a specific UI element and verify it against your expected values.
To enable assertions, include an assertion library such as Chai in your test script:

```javascript
var assert = require('chai').assert;
```
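As a simple sketch, the assertion below checks the announcement for the currently focused element against an expected accessible name and role. The expected values ("Search" and "button") are hypothetical and should be replaced with your element's real accessibility label and role.

```javascript
// Capture the announcement for the element currently focused by NVDA.
const spoken = await driver.executeScript(
  `browserstack_executor: {"action": "screenReaderSpokenDescription"}`
);

// Assert that the announcement contains the expected accessible name and role.
// "Search" and "button" are placeholder values for this example.
assert.include(spoken, 'Search', 'Expected the accessible name to be announced');
assert.include(spoken, 'button', 'Expected the role to be announced');
```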
Example assertion
Traversal order and spoken output verification:
The following example traverses a page with NVDA, captures the spoken output for each header, and asserts that the headers are announced in the expected order with the expected text and type. Replace the expected header text and types with values from your own page:
```javascript
it('should verify screen reader focus order for headers', async () => {
  // Expected headers, in the order the screen reader should announce them.
  // The text and types below are examples; replace them with your page's headers.
  const expectedOrder = [
    { text: 'Welcome', type: 'heading' },
    { text: 'Features', type: 'heading' },
    { text: 'Pricing', type: 'heading' },
  ];

  const spokenOrder = [];

  try {
    // Enable NVDA for this session.
    await driver.executeScript(
      `browserstack_executor: {"action": "screenReader", "arguments": {"enable": "true"}}`
    );

    // Traverse the page and capture what NVDA announces at each stop.
    for (let i = 0; i < expectedOrder.length; i++) {
      await driver.executeScript(
        `browserstack_executor: {"action": "screenReaderAction", "arguments": {"command": "next_focusable_item"}}`
      );
      const announced = await driver.executeScript(
        `browserstack_executor: {"action": "screenReaderSpokenDescription"}`
      );
      spokenOrder.push(announced);
    }

    for (let i = 0; i < expectedOrder.length; i++) {
      const expected = expectedOrder[i];
      const spoken = spokenOrder[i];

      // 1. Check that the expected text is in the spoken output.
      if (!spoken.includes(expected.text)) {
        // Standard error message formatting with ANSI color codes.
        throw new Error(
          `\x1b[31m❌ Header text mismatch at index ${i + 1}: expected "${expected.text}", got "${spoken}"\x1b[0m`
        );
      }

      // 2. Check that the expected type is in the spoken output.
      if (!spoken.includes(expected.type)) {
        // Ensure the color reset code is at the end of the message.
        throw new Error(
          `\x1b[31m❌ Header type mismatch at index ${i + 1}: expected type "${expected.type}", got "${spoken}"\x1b[0m`
        );
      }
    }

    console.log('✅ Header elements are in correct screen reader focus order.');
  } catch (error) {
    // Log the message and rethrow to fail the test/promise chain.
    console.error(error.message);
    throw error;
  } finally {
    // Disable the screen reader regardless of the test outcome.
    await driver.executeScript(
      `browserstack_executor: {"action": "screenReader", "arguments": {"enable": "false"}}`
    );
  }
});
```
View screen reader output in the BrowserStack Automate dashboard
You can view the screen reader output captured during your tests in the BrowserStack Automate dashboard. The output is logged in the session logs, allowing you to review the spoken descriptions and any assertions made during the test run.
To view the screen reader output:
- Log in to your BrowserStack Automate dashboard.
- Navigate to the specific test session where you implemented screen reader automation.
- Click on the session to open the session details.
- Go to the “Logs” tab to see the detailed logs of the test execution.
- Look for entries related to the `browserstack_executor` commands; these include the captured spoken output and any assertion results.
The screen reader output will be displayed in the logs, allowing you to verify the accessibility of your application based on the spoken descriptions captured during the test.