WEBVTT

00:00:03.000 --> 00:00:07.000
Test failure analysis is a big time sink for many automation teams.

00:00:07.000 --> 00:00:13.000
In many automation setups, it can take up to 28 minutes to debug a single complex test failure.

00:00:13.000 --> 00:00:15.000
Let's put that into perspective.

00:00:15.000 --> 00:00:22.000
Let's say you run a nightly regression of 1,000 tests that usually has a 2% failure rate.

00:00:22.000 --> 00:00:29.000
Even if only 5 of those failures are complex, your team could end up spending nearly 3 hours every day analyzing them.

00:00:29.000 --> 00:00:30.000
Why?

00:00:30.000 --> 00:00:36.000
Because failure analysis is still largely manual and often depends on trial and error.

00:00:36.000 --> 00:00:46.000
One has to sift through various logs, check recent commits, and dig through any related tickets, piecing together context across fragmented tools just to understand why the test failed.

00:00:46.000 --> 00:00:51.000
As the number of test cases grows, the problem only gets worse.

00:00:51.000 --> 00:00:57.000
We built BrowserStack Test Reporting and Analytics to help you validate test results and analyze test failures faster.

00:00:57.000 --> 00:01:00.000
Now, we're taking that to the next level,

00:01:00.000 --> 00:01:17.000
introducing the BrowserStack Test Failure Analysis Agent, an AI-powered agent that analyzes, categorizes, and helps you remediate test failures, improving the productivity of your QA workflows by up to 95%.

00:01:17.000 --> 00:01:24.000
Let's say you want to check on a test you'd recently run. You open Test Reporting and Analytics and land on the Overview page.

00:01:24.000 --> 00:01:29.000
This gives you an instant view into the overall stability and performance of your test suite.

00:01:29.000 --> 00:01:35.000
You scroll below to the test run you're interested in, and notice that it hasn't passed the quality gate.

00:01:35.000 --> 00:01:40.000
Quality gates are the validation checks you can set up to block or approve deployments automatically.

00:01:40.000 --> 00:01:44.000
You open the test report to investigate what went wrong.

00:01:44.000 --> 00:01:49.000
Our smart reports help you prioritize critical failures faster.

00:01:49.000 --> 00:01:52.000
We see that two new tests have failed.

00:01:52.000 --> 00:01:58.000
Let's take a look at the test logs to understand why they may have failed.

00:01:58.000 --> 00:02:05.000
At first glance, the error looks like a function that timed out after trying for 10 seconds.

00:02:05.000 --> 00:02:14.000
Normally, you'd have to rerun the test or dig through all the logs to figure out what actually went wrong.

00:02:14.000 --> 00:02:21.000
Now, with the Test Failure Analysis Agent, we're making debugging of failed tests even faster.

00:02:21.000 --> 00:02:27.000
With a single click, you can get an AI-powered root cause analysis of the test failure.

00:02:27.000 --> 00:02:33.000
The agent analyzes all the logs you would otherwise have to go through and interprets them in the context of the failed test.

00:02:33.000 --> 00:02:40.000
It then gives you an accurate root cause analysis of precisely why the test failed.

00:02:40.000 --> 00:02:49.000
In this case, the agent determined that this is not just a transient network issue but a systemic infrastructure problem: a test environment issue.

00:02:49.000 --> 00:02:53.000
This would have taken a human much longer to piece together.

00:02:53.000 --> 00:03:01.000
From here, the AI agent categorizes the failure as a product, automation, or environment bug, or into any custom category.

00:03:01.000 --> 00:03:09.000
In this case, it has categorized the failure as an environment issue. But it doesn't just stop there.

00:03:09.000 --> 00:03:14.000
The AI agent also suggests how to fix the issue.

00:03:14.000 --> 00:03:25.000
For code fixes, you can seamlessly pass all of this context, along with the suggested fixes, to your bug reporting tool by creating a ticket from the same UI.

00:03:25.000 --> 00:03:35.000
We're continuously enhancing the Test Failure Analysis Agent to make it a truly context-aware, self-improving failure debugging companion.

00:03:35.000 --> 00:03:47.000
Soon, it will ingest richer signals like screenshots, DOM snapshots, video logs, build history, and even linked JIRA issues for deeper, more accurate analysis.

00:03:47.000 --> 00:03:52.000
We're expanding integrations to include CI tools like Jenkins and MCP hooks

00:03:52.000 --> 00:04:00.000
so you can trigger the agent and apply suggested code fixes directly within your day-to-day testing workflows.

00:04:00.000 --> 00:04:04.000
And with a conversational interface, you will soon be able to directly interact with the agent

00:04:04.000 --> 00:04:09.000
to provide additional context and ask follow-up questions.

00:04:09.000 --> 00:04:17.000
The Test Failure Analysis Agent is available wherever you test, from BrowserStack products like Automate, App Automate, Test Management, and more,

00:04:17.000 --> 00:04:21.000
to your internal testing solutions and third-party test platforms.

00:04:21.000 --> 00:04:26.000
The Test Failure Analysis Agent is built on more than a decade of BrowserStack's testing expertise.

00:04:26.000 --> 00:04:32.000
It scales along with your test suite, so you can resolve failures up to 95% faster.

00:04:32.000 --> 00:04:33.000
Try it today on Test Reporting and Analytics.