Fix failed tests
Analyze test failures from your CI/CD builds and get suggested code fixes without leaving the IDE.
Test Companion connects to BrowserStack Test Reporting & Analytics and pulls failure data directly into your IDE. It analyzes the error, identifies the likely cause, and suggests a code fix. You can also paste failure details manually if you don’t have a Test Reporting subscription.
Fix from the Test Reporting panel
Use this method if you have a BrowserStack Test Reporting & Analytics subscription. The process starts in the Failure Analysis panel, not the chat.
- Open the Test Reporting & Analytics panel in the IDE.
- From the Build dropdown, select the build run that contains your failures.
- Find the test you want to fix. Click it to view the error message, stack trace, and duration.
- Under the Actions column, click Fix.
Clicking Fix automatically switches you to the Test Companion chat panel, loads the full failure context (test name, error message, stack trace, and metadata), and starts the analysis.
Test Companion identifies the likely cause and suggests a code fix in the chat. Review the suggestion, apply it, and run the build again.
Fix from the chat
Use this method if you don’t have a Test Reporting subscription, or if you want to fix a test that failed locally.
- Open the Test Companion panel in the IDE.
- Paste the error message or describe the failing test in the chat box.
- For more accuracy, type @ and attach the relevant test file, source file, or terminal output.
- Press Enter.
Test Companion analyzes the failure context, identifies the likely cause, and suggests a code fix.
Review and apply fixes
In both workflows, Test Companion delivers the fix as a code suggestion in the chat. The suggestion includes:
- Root cause analysis: A brief explanation of why the test is failing.
- Suggested code change: The specific lines to modify, with before-and-after context.
- Confidence notes: If the fix involves assumptions (for example, a changed selector or a timing issue), Test Companion flags them.
After you apply the fix, run your test suite again to confirm the issue is resolved.
Example prompts
With error context:
Fix this test failure:
Error: expect(received).toBe(expected) // Object.is equality
Expected: 200
Received: 401
The test file is @tests/api/auth.spec.ts
Describing the failure:
My login test is failing because the redirect URL changed from /home to /dashboard
after our last release. Update the test assertion.
Broad failure analysis:
I have 5 failing tests from my last build. The common error is a timeout on element
selectors. What is likely wrong and how do I fix it?
Common failure types
- Stale selectors: An element ID or class changed after a UI redesign. Test Companion updates the selector to match the new element.
- Timing issues: A test runs before an element is visible or clickable. Test Companion adds an explicit wait or adjusts the wait strategy.
- Flaky behavior: A test passes intermittently due to a race condition. Test Companion identifies the race and adds stabilization logic.
- Genuine application bugs: The application behavior is broken. Test Companion confirms the bug and explains what is wrong, rather than patching around it.
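Fixes for the timing and flakiness categories above often come down to replacing a fixed sleep with an explicit, condition-based wait. A minimal sketch of such a wait, using a hypothetical `waitFor` helper (a real suite would use its framework's wait API, such as Selenium's `WebDriverWait` or Playwright's built-in auto-waiting):

```typescript
// Minimal polling wait: resolves once the condition holds, throws on timeout.
// Hypothetical helper for illustration only; real frameworks ship their own waits.
async function waitFor(
  condition: () => boolean,
  timeoutMs = 5000,
  intervalMs = 100,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (condition()) return; // condition met: stop waiting
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs} ms`);
}

async function main(): Promise<void> {
  // Simulate an element that becomes "visible" 200 ms after the test starts.
  let visible = false;
  setTimeout(() => { visible = true; }, 200);

  // Instead of a hard-coded sleep, wait explicitly for the condition.
  await waitFor(() => visible, 2000);
  console.log("element became visible");
}

main();
```

The key design point is that the wait ends as soon as the condition is satisfied, so the test is both faster than a worst-case sleep and more tolerant of slow environments such as CI.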
Example scenarios
These scenarios show how Test Companion handles real-world failure patterns.
UI redesign breaks selectors
Your navigation header was redesigned. 23 Selenium tests now fail with “element not found” errors because button IDs have changed. Paste the error output and reference the test files with @. Test Companion maps each old selector to its new equivalent and generates a patch file with all updated selectors.
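The resulting selector patch can be pictured as an old-to-new mapping applied across the affected test files. The selector names below are invented for illustration:

```typescript
// Hypothetical mapping from pre-redesign selectors to their new equivalents.
const selectorMap: Record<string, string> = {
  "#nav-login-btn": "#header-login",
  "#nav-search": "#header-search-input",
};

// Rewrite every known old selector found in a test source string.
function patchSelectors(source: string): string {
  let patched = source;
  for (const [oldSel, newSel] of Object.entries(selectorMap)) {
    patched = patched.split(oldSel).join(newSel);
  }
  return patched;
}

const before = `driver.findElement(By.css("#nav-login-btn")).click();`;
console.log(patchSelectors(before));
// → driver.findElement(By.css("#header-login")).click();
```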
Intermittent (flaky) test failure
A test passes 70% of the time but fails 30% of the time with “Expected element to be visible.” Test Companion identifies a race condition where the test clicks a button before an API call completes. It replaces the hard-coded wait with an explicit wait for network idle and adds a retry strategy for the assertion.
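The retry part of that stabilization can be sketched as a small helper that reruns an assertion a few times before giving up. `retryAssertion` is a hypothetical name; many test runners offer built-in retries:

```typescript
// Retry an assertion a few times with a delay between attempts.
// Hypothetical helper for illustration; prefer your runner's native retries.
async function retryAssertion(
  assertion: () => void | Promise<void>,
  attempts = 3,
  delayMs = 100,
): Promise<void> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      await assertion();
      return; // assertion passed
    } catch (err) {
      lastError = err; // remember the failure and try again
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError; // exhausted all attempts: surface the last failure
}
```

Usage sketch: wrap only the flaky assertion, not the whole test, so genuine failures still surface quickly.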
API contract change breaks assertions
The backend team split the full_name field into first_name and last_name. Three test files that assert the presence of full_name now fail. Reference the test files with @ and describe the API change. Test Companion updates every affected assertion across all three files and adjusts the test data fixtures.
Test fails only in CI, passes locally
Your checkout flow test passes locally but times out in CI. Test Companion identifies that the test relies on a local environment variable for the API base URL that isn’t set in CI. It suggests adding the variable to the CI configuration and updating the test to fail fast with a clear error when the variable is missing.
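The fail-fast part of that suggestion can be sketched as a small guard around the environment variable. `API_BASE_URL` is a hypothetical variable name for illustration:

```typescript
// Fail fast with a clear error when the API base URL is not configured,
// instead of timing out later with an opaque network error.
// API_BASE_URL is a hypothetical variable name for illustration.
function getApiBaseUrl(
  env: Record<string, string | undefined> = process.env,
): string {
  const url = env.API_BASE_URL;
  if (!url) {
    throw new Error(
      "API_BASE_URL is not set. Configure it in your CI environment " +
        "(and locally) before running the checkout tests.",
    );
  }
  return url;
}
```

Failing at setup time turns a slow, confusing CI timeout into an immediate, self-explanatory error.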
Best practices
- Provide full error output. The more context you give (stack trace, error message, logs), the more accurate the diagnosis.
- Reference the test file with @. Including the failing test file gives Test Companion access to the actual code, not just the error message.
- Fix one issue at a time. If multiple tests are failing for different reasons, address them individually for cleaner, more targeted fixes.
Next steps
- Automate tests: If the failed test needs to be rewritten, regenerate the automation script.
- AI settings: Configure auto-approve to let Test Companion apply fixes automatically.