
Stability widget in custom dashboards

Monitor the stability of testing activity in your organization.

This widget helps you understand how stable the testing activity in your organization is. It illustrates the fluctuation in the stability (total passing test executions as a percentage of overall test executions) of builds over time.
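The stability metric defined above reduces to simple arithmetic; the following sketch shows that calculation (the function name and sample counts are illustrative, not part of the product):

```python
def stability(passed_executions: int, total_executions: int) -> float:
    """Stability: passing test executions as a percentage of all executions."""
    if total_executions == 0:
        return 0.0  # no executions recorded, treat stability as 0%
    return round(passed_executions / total_executions * 100, 2)

# Illustrative example: 1329 passing executions out of 2000 total
print(stability(1329, 2000))  # -> 66.45
```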

The stability widget is a collection of one or more line charts in which the X-axis represents time and the Y-axis represents the stability. Each line is a different segment that you can configure to compare different projects, builds, users, etc.

[Image: Line chart with dates on the X axis and stability percentage on the Y axis]

In the sample above, there are two segments: Stability 1 (blue line) and Stability 2 (yellow line). The value of Stability 1 decreases from 66.47% on 16th November to 60.04% on 18th November. This drop indicates that a higher percentage of tests failed on 18th November than on 16th November and warrants a deeper audit of the test failures. In contrast, the value of Stability 2 improves from 67.59% to 69.43%, a slight gain in stability. You can also see that both lines improve beyond 22nd November.

Drill down for more information

Test Observability enables you to investigate more contextual information on all dashboard widgets using the drill-down feature.

You can use the drill-down feature in the Stability widget to gather more insights. For example, if you see a drop in stability at any point, you can investigate the reasons for this drop.

Follow these steps to use the drill-down feature:

  1. Hover over any point in the Stability widget and click View breakdown. A project-wise breakdown of the stability metrics for the selected date range opens in a side pane.
     [Image: View breakdown button next to a point on a graph]
  2. Click View tests to get to the tests that contribute to the variance in stability.

[Image: View tests link next to a project listed in a side pane]

This opens Tests Health in a new tab with the applicable filters. On Tests Health, you can view the individual tests that contribute to the variance in stability and further investigate any fluctuations.

[Image: Tests Health window opened from the Stability widget]

Widget configuration - Stability

You can configure the following options in the Stability widget:

  • Widget name: A suitable name to easily identify the purpose of the widget.
  • Description: An optional widget description to explain the purpose in detail. A user can view this description by hovering over an info icon on the widget and gain valuable context about the widget.
  • Chart Summary: A toggle to show or hide the chart summary, a concise banner that displays summarized information about your stability widget. By default, the widget prominently displays Average Stability as the chart summary. The chart summary is available only on widgets with a single segment.
  • Segments: Add up to five segments in the Stability widget using the Add segment option. These segments appear as separate line charts in the widget. Segments should be used along with filters. You can use various filters in each segment to compare different projects, builds, users, etc.
  • Filter: You can add a filter to include only the data you want in a particular segment. The parameters by which you can filter data are Projects, Unique Build Names, Users, Build Tags, Test Tags, Hooks Visibility, Host Names, Folder names, Device, OS, and Browser.
    You can also import filters from other widgets to avoid duplicate efforts.

Sample use cases

You can use the stability widget to track and compare the stability of several aspects of your testing organization. Here are a few sample use cases to get you started:

Analyze module-wise and team-wise stability

You can configure separate segments for different modules or teams in your test suites. You can use segments in combination with the following filters to identify modules and teams:

  • Unique build names filter to identify build names that belong to a particular module or team.
  • Users filter to differentiate between team members who triggered the build.
  • Folder names filter to identify modules based on folders in your test repository.
  • Build Tags and Test Tags filters that represent team or module information.

Consider the following example in which the stability of tests in three modules is compared.

[Image: Stability widget comparing three modules]

Here, the three line charts represent Module A (purple line), Module B (blue line), and Module C (yellow line) in a test suite. Such a graph quickly tells you that Module A has been stable, Module B is improving, and Module C is degrading. Using this insight, you can focus on Module C and use the drill-down feature to find the reasons for the spike in the percentage of test failures. In many cases, you can then apply the best practices followed by top-performing teams to improve the build stability of other teams.

To create the above widget, a different Folder names filter is configured on each of the three segments that define Module A, Module B, and Module C, as shown in the following sample configuration.

[Image: Configuration of the stability widget to compare modules]
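Conceptually, this setup maps each segment to its own filter set. The sketch below mirrors that idea as plain data; the dict layout, keys, and folder paths are purely illustrative assumptions — the actual widget is configured in the dashboard UI, not through code:

```python
# Hypothetical, illustrative representation of the three-segment widget
# above. The folder paths are made up; the Stability widget itself is
# configured in the Test Observability UI.
widget = {
    "name": "Module-wise stability",
    "segments": [
        {"label": "Module A", "filters": {"Folder names": ["tests/module-a"]}},
        {"label": "Module B", "filters": {"Folder names": ["tests/module-b"]}},
        {"label": "Module C", "filters": {"Folder names": ["tests/module-c"]}},
    ],
}

# The widget supports at most five segments
assert len(widget["segments"]) <= 5
print([segment["label"] for segment in widget["segments"]])
```

Each segment is rendered as its own line chart, so comparing modules reduces to comparing lines in a single widget.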

Analyze platform stability

You can measure stability across multiple device, OS, and browser combinations using the stability widget. To do so, configure a separate segment for each device-OS-browser combination that you want to track.

In the following example, the stability of tests run on three different browsers is compared.

[Image: Stability widget comparing three browsers]

Here, the three line charts represent the stability of tests run on Browser A (purple line), Browser B (yellow line), and Browser C (blue line). This graph shows that the stability of tests run on Browser A varies more than that of Browser B or Browser C, and that tests run on Browser C are more stable than those run on the other browsers. You can analyze further using the drill-down feature. Using these insights, you can concentrate on improving the stability of tests run on Browsers A and B.
