Fix failed tests

Analyze CI/CD failures and get suggested code fixes for failing or intermittent tests without leaving the IDE.

Test Companion connects to BrowserStack Test Reporting and Analytics and pulls failure data directly into your IDE. It fetches the root cause analysis for each failure and suggests a code fix. You can also start from the BrowserStack Automate dashboard if you spot a failure while reviewing builds on the web. If you do not have a Test Reporting subscription, paste failure details manually in the chat.

Fix from the Test Reporting panel

Use this method if you have a BrowserStack Test Reporting and Analytics subscription. The process starts in the Failure Analysis panel in the IDE.

  1. Click the Failure Analysis icon in the Test Companion panel.

    Failure Analysis tab in the Test Companion panel showing project and build dropdowns

  2. From the Project dropdown, select the project that contains your build.
  3. From the Build dropdown, select the build run that contains your failures.
  4. Filter the test list by clicking Failed to show only failed tests.
  5. Find the test you want to fix. Click Click to view error details to expand the error message.
  6. In the Actions column, click the Fix dropdown. Two options appear:
    • Test Companion: Sends the failure context to the Test Companion chat.
    • Copilot: Sends the failure context to the Copilot chat.

    Failure Analysis panel in the IDE showing the Fix dropdown with Test Companion and Copilot options

  7. Select Test Companion.

The system fetches the Root Cause Analysis (RCA) for the selected test. A loading indicator shows Fetching RCA while the analysis is retrieved.

When the RCA is ready, Test Companion switches to the chat panel and pre-populates the chat input with a structured prompt that includes the test name, suite path, error logs, failure type classification, root cause description, log evidence, evidence strength rating, impact assessment, and a suggested fix. Review the pre-populated prompt and press Enter to start the analysis.

Test Companion reads the RCA, identifies the likely cause, and suggests a code fix in the chat. Review the suggestion, apply it, and run the build again.

If you select Copilot instead of Test Companion, the same RCA is loaded into the Copilot chat window. The failure context and root cause analysis are identical in both paths.

Fix from the Automate dashboard

Use this method if you are reviewing build results in the BrowserStack Automate dashboard and want to fix a failure without switching context manually. This method also requires a BrowserStack Test Reporting and Analytics subscription.

  1. Open the BrowserStack Automate dashboard in your browser.
  2. Select the build that contains the failed test.
  3. Click the Tests tab to view the test list.
  4. Select the failed test. The test detail panel opens on the right side of the screen.
  5. Click the AI Analysis tab.

The AI Analysis tab displays a Root Cause Analysis that includes a summary of why the test failed, a failure type classification (for example, Automation Bug or Product Bug), and a detailed failure analysis.

AI Analysis tab in the BrowserStack Automate dashboard showing root cause analysis and fix options for a failed test

From the AI Analysis panel, choose one of two paths to fix the failure:

  • Fix with Test Companion: Click Fix in VS Code. Test Companion opens in your IDE with the failure context pre-loaded. It reads the root cause analysis, locates the relevant test file in your project, and suggests a code fix.

  • Fix with BrowserStack MCP Server: Click Get Prompt. A pre-built prompt with the failure context is copied to your clipboard. Paste the prompt into any MCP-compatible AI agent in your IDE. The agent uses the BrowserStack MCP Server to read the failure data and generate a fix.

If the same root cause affects multiple tests in the build, click Bulk Apply to similar failures to apply the failure type classification across all matching tests. Then fix the shared root cause once in your test code.

Fix from the chat

Use this method if you do not have a Test Reporting subscription, or if you want to fix a test that failed locally.

  1. Open the Test Companion panel in the IDE.
  2. Paste the error message or describe the failing test in the chat box.

    Provide context using file attachments and @ references

  3. For more accuracy, type @ and attach the relevant test file, source file, or terminal output.
  4. Press Enter.

Test Companion analyzes the failure context, identifies the likely cause, and suggests a code fix.

Review and apply fixes

Regardless of which method you use, Test Companion delivers the fix as a code suggestion in the chat. The suggestion includes:

  • Root cause analysis: A brief explanation of why the test is failing.

  • Suggested code change: The specific lines to modify, with before-and-after context.

  • Confidence notes: If the fix involves assumptions (for example, a changed selector or a timing issue), Test Companion flags them.

After you apply the fix, run your test suite again to confirm the issue is resolved.

Example prompts

With error context:

Fix this test failure:
Error: expect(received).toBe(expected) // Object.is equality
Expected: 200
Received: 401
The test file is @tests/api/auth.spec.ts
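
For reference, a test that produces this error might look like the following sketch. It is hypothetical: it assumes a Playwright API test at tests/api/auth.spec.ts with a configured baseURL, and the endpoint and credentials are illustrative.

// tests/api/auth.spec.ts (hypothetical shape of the failing test)
import { test, expect } from '@playwright/test';

test('login succeeds with valid credentials', async ({ request }) => {
  // Relative URL resolves against the baseURL configured for the project
  const response = await request.post('/api/login', {
    data: { username: 'demo-user', password: process.env.DEMO_PASSWORD },
  });
  // The failing assertion: the endpoint now returns 401, which points at
  // expired credentials or a changed auth flow rather than the test itself.
  expect(response.status()).toBe(200);
});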

Describing the failure:

My login test is failing because the redirect URL changed from /home to /dashboard
after our last release. Update the test assertion.
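
For a prompt like this, the suggested change is usually a one-line assertion update, along these lines (a minimal sketch assuming a Playwright UI test):

// Before: assertion written against the old redirect target
// await expect(page).toHaveURL(/\/home$/);

// After: updated for the post-release redirect
await expect(page).toHaveURL(/\/dashboard$/);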

Broad failure analysis:

I have 5 failing tests from my last build. The common error is a timeout on element
selectors. What is likely wrong and how do I fix it?

Common failure types

Following are some common failure types that Test Companion can help fix:

  • Stale selectors: An element ID or class changed after a UI redesign. Test Companion updates the selector to match the new element.

  • Timing issues: A test runs before an element is visible or clickable. Test Companion adds an explicit wait or adjusts the wait strategy; see the sketch after this list.

  • Flaky behavior: A test passes intermittently due to a race condition. Test Companion identifies the race and adds stabilization logic.

  • Genuine application bugs: The application behavior is broken. Test Companion confirms the bug and explains what is wrong, rather than patching around it.
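
As an illustration of the timing fix mentioned in the list, a suggestion often replaces a fixed sleep with an explicit wait. This is a sketch assuming a Playwright test; the selector is hypothetical.

// Before: fixed sleep that races the UI
// await page.waitForTimeout(3000);
// await page.locator('#submit-order').click();

// After: wait explicitly for the element to become visible before clicking;
// locator.click() also auto-waits for actionability.
const submit = page.locator('#submit-order');
await submit.waitFor({ state: 'visible' });
await submit.click();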

Example scenarios

These scenarios show how Test Companion handles real-world failure patterns.

UI redesign breaks selectors

Your navigation header was redesigned, and 23 Selenium tests now fail with “element not found” errors because button IDs changed. Paste the error output and reference the test files with @. Test Companion maps each old selector to its new equivalent and generates a patch file with all updated selectors.
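
The generated patch is typically a set of one-line selector swaps, for example (a sketch using selenium-webdriver in TypeScript; the element IDs are hypothetical):

import { By } from 'selenium-webdriver';

// driver is an existing selenium-webdriver WebDriver instance.
// Before: selector from the old header markup
// const menuButton = await driver.findElement(By.id('nav-menu-btn'));

// After: updated for the redesigned header
const menuButton = await driver.findElement(By.id('header-menu-button'));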

Intermittent (flaky) test failure

A test passes 70% of the time but fails 30% of the time with “Expected element to be visible.” Test Companion identifies a race condition where the test clicks a button before an API call completes. It replaces the hard-coded wait with an explicit wait for network idle and adds a retry strategy for the assertion.
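
A fix along those lines might look like the following (a sketch assuming a Playwright test; the selectors and button name are hypothetical):

// Before: hard-coded wait that loses the race about 30% of the time
// await page.waitForTimeout(1000);
// await expect(page.locator('#order-status')).toBeVisible();

// After: trigger the action, wait for in-flight requests to settle,
// then assert; Playwright's expect() retries until its timeout.
await page.getByRole('button', { name: 'Place order' }).click();
await page.waitForLoadState('networkidle');
await expect(page.locator('#order-status')).toBeVisible({ timeout: 10_000 });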

API contract change breaks assertions

The backend team split the full_name field into first_name and last_name. Three test files that assert the presence of full_name now fail. Reference the test files with @ and describe the API change. Test Companion updates every affected assertion across all three files and adjusts the test data fixtures.
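
The updated assertions simply follow the new contract, roughly like this (the response shape and values are hypothetical):

// Before: asserts the retired field
// expect(user.full_name).toBe('Ada Lovelace');

// After: asserts the split fields introduced by the API change
expect(user.first_name).toBe('Ada');
expect(user.last_name).toBe('Lovelace');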

Test fails only in CI, passes locally

Your checkout flow test passes locally but times out in CI. Test Companion identifies that the test relies on a local environment variable for the API base URL that is not set in CI. It suggests adding the variable to the CI configuration and updating the test to fail fast with a clear error when the variable is missing.
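
The fail-fast guard can be a few lines at the top of the test setup (a sketch; the variable name API_BASE_URL is hypothetical):

// Fail fast with an actionable message instead of timing out in CI
const baseURL = process.env.API_BASE_URL;
if (!baseURL) {
  throw new Error(
    'API_BASE_URL is not set. Add it to the CI configuration before running the checkout tests.'
  );
}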

Best practices

Following are some best practices to get the most accurate and effective fixes from Test Companion:

  • Provide full error output. The more context you give (stack trace, error message, logs), the more accurate the diagnosis.

  • Reference the test file with @. Including the failing test file gives Test Companion access to the actual code, not only the error message.

  • Fix one issue at a time. If multiple tests are failing for different reasons, address them individually for cleaner, more targeted fixes.

Next steps

  • Automate tests: If the failed test needs to be rewritten, regenerate the automation script.
  • AI settings: Configure auto-approve to let Test Companion apply fixes automatically.
