Understand test history grouping
Understand how a test execution is uniquely identified, what parameters form the test hash, and how you can tune the grouping.
Accurate detection of unique test executions is crucial for reliable insights. This guide explains how BrowserStack determines the identity of a test execution, the inputs that contribute to this identity, and how you can customize the grouping logic to fit your testing scenarios.
How BrowserStack detects a unique test execution
BrowserStack creates a deterministic test hash to group executions of the same logical test across runs. At a high level, the hash combines the identity of a test with selected environment attributes. The following inputs form the test hash calculation (a conceptual sketch follows the list):
- Project: The static project name you configure.
- Build name (sanitized): BrowserStack uses the derived static build name for history mapping. Dynamic parts are removed by heuristics to keep history stable.
- Test identity: The fully qualified test name reported by your framework. This typically includes the file path, the enclosing scope (for example, suite or describe blocks), and the test name. Include any parameters in the test name so that the same test run with different parameters is treated as a distinct test.
- Platform combination: Device name, OS and version, browser and version.
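The exact hashing scheme is internal to BrowserStack; the following is a minimal conceptual sketch, assuming a SHA-256 digest over the concatenated inputs. The field names, separator, and digest choice are illustrative, not the actual implementation.

```python
import hashlib

def test_hash(project: str, sanitized_build_name: str,
              test_identity: str, platform: str) -> str:
    """Conceptual sketch: combine the grouping inputs into one stable digest.

    `test_identity` is the fully qualified name, e.g.
    "tests/checkout_spec.js > Checkout > pays with saved card".
    `platform` combines device, OS + version, and browser + version.
    Illustrative only; BrowserStack's real scheme is internal.
    """
    payload = "\x1f".join([project, sanitized_build_name, test_identity, platform])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Two runs with identical inputs map to the same history bucket:
h1 = test_hash("E-commerce Web", "nightly-regression",
               "tests/checkout_spec.js > Checkout > pays with saved card",
               "iPhone 14 | iOS 16 | Safari 16")
h2 = test_hash("E-commerce Web", "nightly-regression",
               "tests/checkout_spec.js > Checkout > pays with saved card",
               "iPhone 14 | iOS 16 | Safari 16")
assert h1 == h2
```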
Build name sanitization for history mapping
To maintain a continuous history even when your CI emits dynamic build names (for example, with dates or commit SHAs), BrowserStack derives a static build name using heuristics that remove volatile tokens. This sanitized value is used for history grouping and build filters across the UI, while you still see the original name on detailed pages.
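BrowserStack's heuristics are internal, but a sanitizer of this kind typically strips volatile tokens such as dates, timestamps, commit SHAs, and build counters. A rough sketch, with illustrative patterns only:

```python
import re

# Illustrative patterns for volatile tokens; BrowserStack's actual
# heuristics are internal and may differ.
VOLATILE_PATTERNS = [
    r"\b\d{4}-\d{2}-\d{2}\b",      # dates like 2024-05-17
    r"\b\d{2}:\d{2}(:\d{2})?\b",   # times like 14:03:22
    r"\b[0-9a-f]{7,40}\b",         # commit SHAs
    r"#\d+",                       # build counters like #1234
]

def sanitize_build_name(name: str) -> str:
    """Derive a static build name by removing volatile tokens."""
    for pattern in VOLATILE_PATTERNS:
        name = re.sub(pattern, "", name)
    # Collapse leftover separators and whitespace.
    return re.sub(r"[\s_-]{2,}", " ", name).strip(" -_")

print(sanitize_build_name("nightly-regression 2024-05-17 a1b2c3d"))
# -> "nightly-regression"
```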
Why correct unique test detection matters
When the hash is stable and meaningful, you get accurate insights:
- History linking: The history in test listings correctly represents prior outcomes for the same logical test.
- Smart Tags: Reliable detection of flaky tests, new failures, always-failing tests, and performance anomalies.
- Build comparison: Apples-to-apples comparisons across runs of the same build name.
- AI RCA and automatic failure categorization: Correct grouping ensures the right failures are explained and clustered.
If the identity is too strict, history and insights lose value. If it is too lenient, unrelated executions merge and insights become noisy.
When you may want different grouping logic
You might need to change the default inputs used in the test hash for specific workflows:
- Cross-version stability: Maintain history across browser or OS version upgrades by ignoring the version or matching a version range with a regular expression.
- Device families: Treat runs on multiple equivalent devices as the same logical test by configuring a device regex or ignoring the device entirely. A sketch of both patterns follows this list.
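To illustrate, hypothetical regex values of this kind could map a range of versions or devices to one shared label before the value feeds into the hash. The patterns and labels below are examples, not BrowserStack's actual settings syntax:

```python
import re

# Hypothetical "configured value" regexes; the actual settings syntax
# is described in the customization guide.
BROWSER_VERSION_RANGE = re.compile(r"^Chrome 1[12]\d$")       # Chrome 110-129
DEVICE_FAMILY = re.compile(r"^iPhone 1[2-5]( Pro( Max)?)?$")  # iPhone 12-15 family

def grouping_value(raw: str, pattern: re.Pattern, label: str) -> str:
    """Map any raw value matching the pattern to one shared label,
    so every match contributes the same token to the test hash."""
    return label if pattern.match(raw) else raw

# "Chrome 118" and "Chrome 124" now group into the same history:
assert grouping_value("Chrome 118", BROWSER_VERSION_RANGE, "Chrome 110-129") == \
       grouping_value("Chrome 124", BROWSER_VERSION_RANGE, "Chrome 110-129")
```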
How to select a different grouping logic
You can change how each attribute contributes to the test hash from the settings. For devices, OS (and version), and browsers (and version), you can use the exact value, a configured value via a regular expression, or ignore the parameter entirely; see the sketch below.
Follow the steps and examples in customize test history grouping.
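As a sketch of the three modes, assuming a helper that normalizes each attribute before hashing (all names here are illustrative, not BrowserStack's API):

```python
import hashlib
import re

def normalize(value: str, mode: str, pattern: str = "", label: str = "") -> str:
    """Normalize one hash input according to its grouping mode.

    - "exact":  use the value as-is (the default).
    - "regex":  replace values matching `pattern` with a shared `label`.
    - "ignore": drop the attribute from the hash entirely.
    Illustrative only; the real settings live in the BrowserStack UI.
    """
    if mode == "ignore":
        return ""
    if mode == "regex" and re.match(pattern, value):
        return label
    return value

# Ignoring the OS version keeps history intact across an OS upgrade:
inputs_before = ["MyProject", "nightly", "suite > test", "iOS", normalize("16", "ignore")]
inputs_after  = ["MyProject", "nightly", "suite > test", "iOS", normalize("17", "ignore")]
assert hashlib.sha256("|".join(inputs_before).encode()).hexdigest() == \
       hashlib.sha256("|".join(inputs_after).encode()).hexdigest()
```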
Impact of changing the grouping logic
When you change grouping settings, BrowserStack applies the new logic to future runs only. Expect corresponding changes in how history timelines, Smart Tags (flaky tests, new failures, always-failing tests, performance anomalies), build comparison, RCA, failure categorization, dashboards, trends, APIs, filters, notifications, and alerts are calculated for those runs.
Best practices
- To validate the effect safely, apply changes in a demo project and review a few builds for expected history continuity before and after the change.
- Use stable, human-readable test names. Include any parameters in the test name so that the same test run with different parameters is treated as a distinct test (see the example after this list).
- Keep project and build names static. BrowserStack sanitizes dynamic build names for history mapping, but stable names yield the best results.
- Prefer configured values or ignore settings to handle situations like browser updates.
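For example, with pytest, parameterizing a test bakes each parameter into the reported test name, so each variant accrues its own history:

```python
import pytest

# Each parameter appears in the reported test name
# (test_checkout[visa], test_checkout[mastercard]),
# so each variant is tracked as its own logical test.
@pytest.mark.parametrize("card_type", ["visa", "mastercard"])
def test_checkout(card_type):
    assert card_type in {"visa", "mastercard"}
```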