
Handling Test Failures in Cypress: A Comprehensive Guide

By Hamid Akhtar, Community Contributor

Cypress is a popular testing framework that offers developers an easy and efficient way to write end-to-end tests. However, even with the best efforts, Cypress fail test scenarios can still occur. It’s essential to learn how to handle these failures effectively to ensure the accuracy and reliability of your tests. In this guide, let us explore some common reasons why Cypress fail test scenarios occur and offer practical solutions to troubleshoot and resolve them. By the end of this guide, you’ll have the tools and knowledge needed to identify and handle Cypress fail test scenarios, enabling you to write more robust and reliable tests for your applications.

Understanding Test Failure in Cypress

When a test is run, it can fail. If this happens, an error message describing the problem will appear. The two most common causes of test failures are as follows:

  • The tested application had a flaw, which is why the test did not generate the anticipated results.
  • The method used to build the test had a defect.

Maintaining automated test cases takes more time and money than automating them in the first place; test cases are often brittle, and failures are common.

Common Causes of Test Failure

The most typical rookie errors with Cypress testing are:

  • Not making commands or assertions retry-able
  • Failing to create page objects
  • Not using specific test selectors
  • Avoiding deterministic testing
  • A surplus of end-to-end testing
  • Failing to conduct the tests on each PR (CI)
  • Lack of expertise in debugging Cypress tests
  • Failing to mock external dependencies
  • Not retrying fragile tests in flaky environments

How to Force Fail a Test in Cypress

Whether a missing verification is to blame for your false negative or something else, you now have the option to “Force Fail” and purposefully identify a test as failing.

Here is one method in Cypress for forcing a test to fail if a specific condition is met.

Suppose you want the test to fail if the phrase “Sorry!” appears on your website.

/// <reference types="Cypress" />

describe("These tests are designed to fail if certain criteria are met.", () => {
  beforeEach(() => {
    cy.visit("/") // assumption: the page under test
  })

  specify("If 'Sorry!' is present on page, FAIL", () => {
    cy.contains("Sorry!")
  })
})

The test currently passes if the word “Sorry!” is found. If this criterion is true, how can you fail the test?

You can just throw a JavaScript Exception to fail the test:

throw new Error("test fails here")

However, in this situation, it is recommended to use the .should('not.exist') assertion instead:

cy.contains("Sorry, something went wrong").should('not.exist')
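Putting both ideas together, here is a minimal sketch of a conditional force-fail. The visited URL and the exact page text are assumptions for illustration, not from the original example:

```javascript
describe('Force-failing on an error message', () => {
  it("fails if 'Sorry!' is present on the page", () => {
    cy.visit('/') // assumption: the app is served at the configured baseUrl
    cy.get('body').then(($body) => {
      // Throwing inside .then() fails the test only when the phrase is found
      if ($body.text().includes('Sorry!')) {
        throw new Error('test fails here')
      }
    })
  })
})
```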

Debugging Failed Tests in Cypress

Debugging failed tests in Cypress involves using various built-in tools like the Cypress Test Runner, console logs, and browser DevTools to identify and fix issues in your test code. With these tools, you can pinpoint the root cause of test failures and make the necessary adjustments to ensure your tests are accurate and reliable.

Debugging with the Cypress Test Runner

The Test Runner is a user interface for executing tests. Launch it with the following command:

./node_modules/.bin/cypress open


Debugging with Chrome DevTools

The Google Chrome browser includes a suite of web developer tools called Chrome DevTools. Although Cypress provides a wonderful application for understanding what is happening in your application and your tests, the incredible effort browser teams have put into their built-in developer tools cannot be surpassed. Here again, Cypress follows the modern ecosystem’s lead by using these technologies whenever available.


Debugging using Cypress UI

One of the quickest ways to determine why your tests failed is the Cypress UI, which shows every step your tests took before the issue occurred. When you click on a step, Cypress outputs helpful data to the browser’s console.


For instance, in the failing test above, Cypress writes out the actual array in the console when the assertion step is clicked.

One of the most important and rewarding skills you should master is debugging failing Cypress tests. Cypress offers a simple technique for debugging test scripts, with compelling possibilities worth exploring.

Screenshots & Videos

When Cypress runs in headless mode, screenshots and videos are captured automatically whenever a failure occurs. This is especially handy when running your tests in continuous integration, where a screenshot and video of the failing test can be reviewed after the fact.


Console Logs

You can log information from your tests in two practical ways: cy.log() and console.log(). Remember that Cypress is essentially JavaScript, so you can apply all the debugging techniques you already use in JS. cy.log() prints a message to the Cypress Command Log, while console.log() writes to the browser’s console.
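A small sketch of both options side by side (the test body is illustrative):

```javascript
it('logs debugging information', () => {
  cy.log('checking the page title') // printed to the Cypress Command Log
  cy.title().then((title) => {
    console.log('page title is:', title) // printed to the browser console
  })
})
```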

Browser Dev Tools

Since Cypress is a browser-based application, you have complete access to all of the data provided by your browser’s developer tools. This implies that you may troubleshoot your failing Cypress tests using the same methods and tools you use to troubleshoot problems with your application code.

Irrespective of the method you choose to debug your Cypress tests, you must remember that Cypress testing must be executed on real browsers for accurate results.

Start running tests on 30+ versions of the latest browsers across Windows and macOS with BrowserStack. Use instant, hassle-free parallelization to get faster results without compromising on accuracy. Detect bugs before users do by testing software in real user conditions with BrowserStack.

Run Cypress Tests on Real Browsers

Frequently Occurring Test Failures and Solutions

  • Explicit waiting

cy.wait() is a command in Cypress that instructs the test runner to pause execution for a specified period of time. It is typically used to synchronize test code with the application under test or to wait for an element or event to be available before continuing with the test.

The cy.wait() command takes either a number of milliseconds or the alias of an intercepted request. For example, cy.wait(5000) pauses execution for 5 seconds, while cy.wait('@getLists') waits until the aliased request completes.

// ❌ incorrect way, don't use
cy.wait(10000) // hard-coded pause (duration illustrative)
cy.get('button')

However, this forces the test to sit through the full pause on every run while the page loads. Instead, you can leverage the built-in retry-ability of Cypress:

cy.get('button', { timeout: 10000 })

This way, you won’t wait more than 10 seconds for the button to display, but if the button renders faster, the test moves right on to the next command, saving you time.

  • Unreadable selectors

Selectors provide first-hand information about the behavior of your test, which is why it’s critical to make them readable.

Cypress has several suggestions regarding the best selectors to choose, the primary objective being to give your tests stability. First on the list is using dedicated data-* selectors, which you should incorporate into your application.

Unfortunately, testers don’t always have access to the application being tested. As a result, selecting elements can be challenging, especially when trying to locate a particular piece, and people in this situation select their elements in a number of ways.

One of these techniques is using XPath. It is challenging to read the syntax of XPath, which is its biggest drawback. You cannot really identify what element you are selecting from your XPath selector alone. Additionally, they don’t actually increase the effectiveness of your Cypress tests in any way. Everything that XPath can do, you can accomplish using Cypress commands, and they’ll make it easier to read.

❌ selecting elements using xpath

// Select an element by text
cy.xpath('//*[text()[contains(.,"My Boards")]]')

// Select a list element that contains a card element
cy.xpath('//div[contains(@class, "list")][.//div[contains(@class, "card")]]')
// Filter an element by index
cy.xpath('(//div[contains(@class, "board")])[1]')
// Select an element after a specific element
cy.xpath('//div[contains(@class, "card")][preceding::div[contains(., "milk")]]')

✅ selecting elements using cypress commands
// Select an element by text
cy.contains('h1', 'My Boards')
// Select a list element that contains a card element
cy.get('.list').filter(':has(.card)')
// Filter an element by index
cy.get('.board').eq(0)
// Select an element after a specific element
cy.contains('.card', 'milk').next('.card')
  • Ignoring requests in your app

Let’s examine this code example:
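The original snippet is missing here; the following is a hedged reconstruction of the kind of test the surrounding text describes, with the URL and selector assumed from context:

```javascript
it('board shows no lists', () => {
  cy.visit('/board/1') // assumption: a board page that fetches GET /api/lists
  // This may evaluate before the /api/lists response arrives,
  // so it can pass even when the app does render lists
  cy.get('[data-cy=list]').should('not.exist')
})
```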


A number of requests fire when you open a page, and the frontend app processes and renders their responses on your page. In this example, the [data-cy=list] components are rendered when a response arrives from the /api/lists endpoint.

The issue with this test, however, is that Cypress is not being instructed to wait for these requests. As a result, even if there are lists present in your application, your test may provide a false positive and pass.

Cypress will not wait for the requests!

To make Cypress aware of the request, use the cy.intercept() command:

cy.intercept('GET', '/api/lists')
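On its own, cy.intercept() only registers the route; to actually synchronize the test, alias the route and wait on the alias. A sketch, with the alias name and final assertion chosen for illustration:

```javascript
cy.intercept('GET', '/api/lists').as('getLists') // register and alias the route
cy.visit('/board/1')                             // triggers the request
cy.wait('@getLists')                             // block until the response arrives
cy.get('[data-cy=list]').should('have.length.at.least', 1)
```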
  • Overlooking DOM re-rendering

To retrieve data from the database and then render it in DOM, modern web apps constantly submit requests. You are testing a search bar in the following example, where each keystroke will initiate a new request. With each response, the page’s content will be updated. The goal of this test is to take a search result and verify that the first item with the text “search for critical bugs” will appear when the word “for” is typed. Following is the test code:

cy.realPress(['Meta', 'k']) // open the search modal (cypress-real-events)
cy.get('[data-cy=search-input]').type('for') // assumed selector for the search field
cy.get('[data-cy=result-item]')
  .eq(0)
  .should('contain.text', 'search for critical bugs')

This test will encounter an “element detached from DOM” error. The reason is that while you are still typing you initially get two results, and once you finish typing only one remains.

It’s important to keep in mind that the .should() command only retries the preceding command, not the entire chain. As a result, cy.get("[data-cy=result-item]") is not called again. You can counteract this issue by adding a guarding assertion to the code; this time, it will first ensure that you get the proper number of results before asserting the result’s content.

cy.realPress(['Meta', 'k'])
cy.get('[data-cy=search-input]').type('for') // assumed selector for the search field
cy.get('[data-cy=result-item]')
  .should('have.length', 1)
  .should('contain.text', 'search for critical bugs')

But what if you cannot assert the number of results? In short, the solution is to use the .should() command with a callback, something like this:

cy.realPress(['Meta', 'k'])
cy.get('[data-cy=search-input]').type('for') // assumed selector for the search field
cy.get('[data-cy=result-item]')
  .should((items) => {
    expect(items[0]).to.have.text('search for critical bugs')
  })
  • Inefficient command chains

Cypress’s chaining syntax is great. Because each command passes its subject to the one that follows, your test scenario has a one-way flow. However, there is logic even to these commands: Cypress commands can be parent, child, or dual commands, so some commands inevitably begin a new chain.

Consider this command chain:

cy.get('[data-cy="create-board"]')
  .click()
  .get('[data-cy="new-board-input"]')
  .type('new board{enter}')
  .location('pathname')
  .should('contain', '/board/')

Such a chain is difficult to read, and it also disregards the parent/child command chaining logic. Every .get() command essentially begins a new chain, so a .click().get() chain is illogical. By chaining commands correctly, your Cypress tests become more readable and less surprising:

cy.get('[data-cy="create-board"]') // parent
.click() // child
cy.get('[data-cy="new-board-input"]') // parent
.type('new board{enter}') // child
cy.location('pathname') // parent
.should('contain', '/board/') // child
  • Overusing UI

You ought to use the UI as little as possible when building UI tests. This tactic speeds up your testing while leaving you just as confident about your app, if not more. Let’s imagine your navigation bar has links and looks like this:

<a href="/blog">Blog</a>
<a href="/about">About</a>
<a href="/contact">Contact</a>

The test’s objective is to ensure that all of the links contained within the <nav> element lead to active websites. Using the .click() command and then checking the opened page’s location or content to see if the page is live could be the most logical course of action.

The downside of this strategy is that it takes too long and could mislead you. 

Instead of clicking through each link, you can use the cy.request() command to verify that the page is live:

cy.get('a').each((link) => {
  cy.request(link.prop('href')) // request the URL directly instead of clicking it
})
  • Repeating the same set of actions

You will often hear that your code should be DRY, or “don’t repeat yourself.” Although this is great guidance for your code, it is usually only loosely applied to what actually happens during a test run. Here is an example of a cy.login() command that carries out the login procedure before each test:

Cypress.Commands.add('login', () => {
  // ...UI login steps: visit the login page, fill in credentials, submit...
})
The ability to condense this set of actions into a single command is certainly useful, and it makes the code more “DRY.” However, if you keep using it before every test, the same set of actions will be executed over and over during test execution.

You can use Cypress itself to solve this problem. Using the cy.session() command, this series of steps can be cached and restored. While the feature is still experimental, you can enable it with the experimentalSessionAndOrigin: true option in your cypress.config.js file. The sequence in the custom command can be wrapped with cy.session() as follows:

Cypress.Commands.add('login', () => {
  cy.session('login', () => {
    // ...the same UI login steps as before...
  })
})
This will result in the sequence in your custom command running once per spec file. Using the cypress-data-session plugin, however, allows you to cache it for the duration of your entire test run.

There are many more things you can do with this, but caching your steps is probably the most valuable, as it can easily shave a couple of minutes off the whole test run.

Best Practices for Handling Test Failures

The frequent mistakes made by testers in test automation have a detrimental impact on firms’ return on investment. The good news is that, in most circumstances, these faults are entirely preventable.

Here are some guidelines for using test failure analysis to facilitate smoother releases:

Although automation alone cannot extend test coverage to 100%, teams can come quite close by automating user interface testing.

  • The answer is in explicitly defining the objectives and timetables of each automation endeavor, while also ensuring that all potential areas of automation provide a demonstrable contribution to “quality at speed.”
  • The automation platform you choose should take into account whether your test cases will ever need to migrate between other technologies (like web and desktop apps).
  • It’s critical to guarantee that your suite can be executed in any order, because rerunning an entire suite of hundreds of tests can be time-consuming and ineffective when a single test case fails. Having to examine tests manually just to discover the error is clearly inefficient.
  • Making sure that each test case is designed to test a single feature is the best way to create test cases. This makes it simple to locate the issue when a test case fails.

Writing Assertions and Expected Results

Assertions are the validation steps that determine whether a given step of an automated test case succeeded or failed. Generally speaking, assertions confirm that your application’s elements, objects, or desired state actually exist.

Here are the benefits of using Cypress assertions:

  • They improve observability and make troubleshooting test failures simpler.
  • They document the expected behavior of the application directly in the test.
  • They confirm that the elements, objects, or desired state of your application actually exist before the test proceeds.
  • They help testers pinpoint which part of the application produced an incorrect or unexpected result.
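A few typical Cypress assertions as a sketch; the selectors and expected values are illustrative, not from a specific application:

```javascript
cy.get('[data-cy=submit]')
  .should('be.visible')     // element is rendered and visible
  .and('not.be.disabled')   // chained assertion on the same subject
cy.url().should('include', '/dashboard')                // navigation succeeded
cy.get('[data-cy=todo-item]').should('have.length', 3)  // expected item count
```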

Documenting Test Cases and Steps

In general, documentation is challenging: it requires exacting, thorough work, and every team member must understand and appreciate the value of creating excellent documentation. Writing documentation is an altruistic effort done for the benefit of other developers and the future.

Using Retries, Timeouts, and Recovery Strategies

Complex systems are well-tested using end-to-end (E2E) tests. However, some behaviors render testing flaky (i.e., unreliable), causing occasional failures due to unpredictable circumstances (e.g., temporary outages in external dependencies, random network errors, etc.). Other typical race conditions that can lead to inaccurate tests include test server/database availability and resource dependency availability.

To reduce test flakiness and continuous integration (CI) build failures, Cypress can retry failed tests using test retries, saving your team valuable time and effort so you can concentrate on what matters most. The default timeout can also be modified for all commands.

Closing Notes

It is advised to execute Cypress tests on actual devices wherever possible so that genuine user conditions can be taken into consideration for better test results accuracy. QAs may access 3000+ device browser combinations with the advent of tools like BrowserStack Automate, which facilitates thorough automated testing.
