Smart Test Selection (AI)
Learn how to use BrowserStack’s Smart Test Selection to run a prioritized subset of tests, get faster feedback, and increase test efficiency.
Smart Test Selection uses AI to select and run a small, prioritized subset of your tests based on recent code changes. This helps you get faster feedback on your builds and significantly improves test efficiency.
What is Smart Test Selection?
Instead of running your entire test suite after every code change, which can be slow and expensive, Smart Test Selection intelligently chooses the most relevant tests to run. The goal is to run a smaller, prioritized set of tests that are most likely to identify new defects introduced by your changes.
Switch to our new dashboard experience to access the Build Optimisation Widget and the Smart Test Selection feature.
Prerequisites
Before you can enable Smart Test Selection, ensure you meet the following requirements:
- You have a monorepo (support for multi-repositories coming soon) where your application code and test suite are in the same repository.
- You use the latest version of the BrowserStack Java SDK (v1.40.0) or Python SDK (v1.31.1).
- You have activated BrowserStack AI in your organization's settings. For more details, see activate BrowserStack AI preferences.
The following are the supported SDKs for Smart Test Selection:
- TestNG for Java
- Pytest for Python
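For example, in a pip-based Python project you can pin the SDK to a supported version in `requirements.txt` (the package name `browserstack-sdk` is the SDK's PyPI distribution; verify the current version before pinning):

```
browserstack-sdk>=1.31.1
```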
Using the BrowserStack SDK to integrate with app-automate offers significant benefits in stability, performance, and ease of management. Learn how to integrate your tests with BrowserStack SDK.
Benefits of Smart Test Selection
- Faster feedback loops: Prioritizing the most relevant tests helps teams get immediate insights into critical issues, reducing the time taken to detect and fix problems.
- Increased test efficiency: Running only the most relevant subset of tests helps teams avoid running all tests, which saves time and resources.
- Improved development focus: Quick identification of recurring or new failures allows developers to concentrate on resolving issues that have a higher likelihood of impacting the end-user experience.
Smart Test Selection is currently in beta.
- Supports BrowserStack Automate (Selenium and Playwright) and App Automate (Appium).
- Works for both CI/CD and local runs through the BrowserStack SDK.
How Smart Test Selection works
Smart Test Selection analyzes the metadata of your code changes (such as file names, paths, and authors) to identify which tests are most likely to be affected. We do not access or store your source code. Based on this analysis, the AI selects a relevant subset of your tests to execute.
You can run the selected tests in one of two modes: `relevantFirst` or `relevantOnly`.
`relevantFirst` mode
Runs the most relevant tests first, followed by the rest of the suite. Use this mode for rapid feedback on critical issues while still ensuring complete test coverage. It provides a safety net by running all tests.
`relevantOnly` mode
Runs only the most relevant tests and skips the rest. This mode is best for maximum efficiency, as it significantly reduces execution time, infrastructure usage, and cost.
Enable Smart Test Selection
Follow these steps to enable Smart Test Selection in your project:
- Update your SDK: Ensure you are using the latest version of the BrowserStack SDK. Supported SDK versions are v1.40.0 for the BrowserStack Java SDK and v1.31.1 for the BrowserStack Python SDK.
- Configure `browserstack.yml`: Enable the feature by adding the `smartTestSelection` capability to your `browserstack.yml` file. Set the value to your desired mode.
For `relevantFirst` mode:
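A minimal `browserstack.yml` fragment is sketched below; the `projectName` and `buildName` values are placeholders, and the rest of your existing SDK configuration stays as-is:

```yaml
projectName: My_Project          # keep static across runs of the same build
buildName: Smart_Selection_Demo  # keep static across runs of the same build
smartTestSelection: relevantFirst
```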
For `relevantOnly` mode:
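The same capability with the value switched; this is a fragment, and the rest of the file is unchanged:

```yaml
smartTestSelection: relevantOnly
```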
The `projectName` and `buildName` config must be static and must not change across different runs of the same build. This deviates from the approach specified for BrowserStack Automate or App Automate, because Smart Test Selection needs to identify different runs of the same build.
Restrict the characters in your `projectName` and `buildName` to alphanumeric characters (A-Z, a-z, 0-9), underscores (`_`), colons (`:`), square brackets (`[`, `]`), and hyphens (`-`). Any other character will be replaced with an underscore (`_`).
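As an illustration of this sanitization rule, the replacement behaves like the following sketch (a hypothetical re-implementation, not BrowserStack's actual code):

```python
import re

def sanitize(name: str) -> str:
    """Replace any character outside A-Z, a-z, 0-9, _, :, [, ], - with an underscore."""
    return re.sub(r"[^A-Za-z0-9_:\[\]-]", "_", name)

print(sanitize("My Build #42"))  # My_Build__42
print(sanitize("nightly:[smoke]-run"))  # unchanged: all characters are allowed
```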
Learning mode
Once you enable Smart Test Selection, the AI enters a learning mode. It analyzes your project’s build and commit history to learn how your code changes affect your tests. During this period, the AI will not select or skip any tests.
To help the AI learn faster, you can provide it with consistent data:
- Frequent builds: Run builds several times a week.
- Diverse commits: Commit a rich history of changes across the codebase.
- Test failures: Allow for some test failures. A suite with a consistent failure rate (~10%) trains the AI faster than one that always passes, as failures provide crucial learning data.
Smart Test Selection is tailored to each project build. If you introduce a new build, the AI will need to learn from it before providing accurate test selections.
Smart Test Selection in action
After the learning mode, the AI is ready to select tests. For each new build, the AI compares the current code changes to the patterns it learned. It only selects tests when it has high confidence in its predictions.
- Sufficient overlap: The AI selects and runs the relevant tests based on your chosen mode (`relevantFirst` or `relevantOnly`).
- Insufficient overlap: The AI skips selection for the current build and uses its data to continue learning.
Viewing the results
Once the AI becomes active, you can monitor its impact on your dashboard. The Build Optimisation Widget shows the percentage of tests skipped and the total time saved.
Frequently Asked Questions (FAQ)
1. Can I configure a list of “must-run” tests that are never skipped?
Not at this time. If you have a set of critical tests that must always be run, we strongly recommend using the `relevantFirst` mode. This ensures your critical tests are run as part of the full suite while still benefiting from prioritized feedback. The ability to configure a specific "must-run" list is planned for a future update.
2. How do I know when the “learning mode” is complete?
Currently, the transition from learning to active mode is automatic. Once the AI is active, you will begin to see skipped tests noted in your test reports and data appearing in the Build Optimisation widget on your dashboard.
If you are facing any issues using the feature, contact us.