Why You Shouldn’t Use page.waitForTimeout() in Playwright

Understand why page.waitForTimeout() leads to slow, flaky Playwright tests and what to do instead. Validate your Playwright scripts against real-world conditions on BrowserStack.


Ever had a Playwright test that works perfectly once, then fails the next run without any changes?

I went through the same inconsistencies and spent hours debugging CI settings, browser versions, and network throttling before I found the real cause.

I had scattered page.waitForTimeout() everywhere, hoping fixed delays would keep things stable.

The problem is that hard waits do not wait for the application to be ready. They slow tests down and still break in real device environments where performance varies.

After switching to event-driven waits, my test runs became faster, repeatable, and far more aligned with real user interactions.

Overview

Why is page.waitForTimeout() not always ideal?

page.waitForTimeout() feels convenient because it simply pauses the test for a fixed duration. However, relying on hard delays introduces more issues than it solves.

Hard waits often create flaky behavior because they are based on assumptions about speed. If the app loads slower than expected, the test fails. If it loads faster, time is unnecessarily wasted. This also leads to longer execution times, especially when delays are sprinkled throughout the suite.

Playwright already provides smarter waits that respond to real app conditions. For example:

  • Locator actions and assertions wait automatically until elements are ready to interact with (for example: click(), fill(), expect().toBeVisible())
  • page.waitForSelector() waits for the element to appear in the DOM
  • page.waitForLoadState() waits for the page to reach a defined readiness state (like load, domcontentloaded, networkidle)
  • page.waitForFunction() checks for any condition that needs to be true before continuing

These alternatives adapt to actual behavior, which keeps tests stable across different browsers, network speeds, and device performance.
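The idea behind all of these APIs is the same: retry a condition until it holds or a deadline passes. As a simplified mental model (a plain-JavaScript polling sketch, not Playwright's actual event-driven implementation), it looks like this:

```javascript
// Simplified model of condition-based waiting: poll until the condition
// returns true or the timeout expires. Playwright's real waits are
// event-driven; this loop just illustrates why they adapt to actual speed.
async function waitForCondition(condition, { timeout = 5000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (await condition()) return true;
    await new Promise(resolve => setTimeout(resolve, interval));
  }
  throw new Error(`Condition not met within ${timeout}ms`);
}

// Example: the "element" becomes ready after ~250ms. The wait finishes as
// soon as that happens instead of always burning a fixed delay.
let ready = false;
setTimeout(() => { ready = true; }, 250);

waitForCondition(() => ready).then(() => console.log('ready'));
```

Whether the condition flips in 250ms or 2.5s, the wait ends the moment it does, which is exactly the behavior a fixed delay cannot provide.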

When Can page.waitForTimeout() Still Be Used?

It is not completely forbidden, but it should be treated as a last resort, reserved for cases like:

  • Debugging: pausing to observe what’s happening in the browser
  • Highly specific timing quirks where no reliable state or selector exists
    (and even then, revisiting the app or test logic is recommended)

In this guide, I will explain why page.waitForTimeout() leads to fragile automation, the rare moments when it may still be useful, and the reliable alternatives that improve Playwright test stability across any environment.

What page.waitForTimeout() Does

page.waitForTimeout() is a simple delay function in Playwright. It pauses the test for a fixed number of milliseconds and then continues execution. It is an easy way to make sure the UI has enough time to update before interacting with it.

The issue is that this delay has no awareness of what is happening in the browser. It does not wait for elements to render, API calls to complete, or animations to finish. It only waits for a predefined duration, whether the application is ready or not.

That simplicity becomes a problem as tests grow and environments become less predictable.
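Under the hood, the call is essentially a promise-based sleep. A rough plain-JavaScript equivalent (a sketch of the behavior, not Playwright's source) makes the blindness obvious:

```javascript
// Rough equivalent of page.waitForTimeout(ms): a timer-based sleep that
// resolves after the delay with zero knowledge of the page's state.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function main() {
  const start = Date.now();
  await sleep(300); // nothing about the DOM, network, or app state is checked
  console.log(`slept for roughly ${Date.now() - start}ms, ready or not`);
}

main();
```

Nothing in that code can observe whether the page finished loading; it only observes the clock.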

The Risks of Using Fixed Waits in Playwright Tests

When I relied on fixed delays, my tests became unreliable without me noticing at first. Hard waits lead to issues like:

  • Unpredictable Flakiness: A fixed timeout assumes a consistent load speed. If the page takes a little longer than expected, the test breaks. If it loads faster, time is wasted doing nothing. The result is inconsistent, difficult-to-reproduce failures.
  • Slower Execution at Scale: Even a small delay, repeated across dozens of tests, adds minutes to every run. This slows down feedback loops and affects release velocity, especially in CI/CD pipelines.
  • Hidden Issues in Real Conditions: Hard waits mask genuine performance or state problems because the test pauses instead of validating whether the UI is actually ready. This leads to false stability in complex workflows.
  • Higher Maintenance Over Time: As the application evolves, timing assumptions change. Hard waits require constant tweaking, which increases test fragility and maintenance overhead.
  • Not Aligned With User Behavior: Real users do not pause randomly; they wait for visible changes or interactive states. Fixed waits disconnect tests from real user experience.
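The execution-time cost compounds quickly. A back-of-envelope calculation (the numbers here are assumed, purely for illustration):

```javascript
// Illustrative math: fixed 2-second waits sprinkled through a suite.
const waitMs = 2000;     // one hard wait
const waitsPerTest = 3;  // assumed average per test
const testCount = 150;   // assumed suite size

const totalMinutes = (waitMs * waitsPerTest * testCount) / 1000 / 60;
console.log(`${totalMinutes} minutes of pure idle time per run`); // 15 minutes
```

That is a quarter of an hour per run spent waiting for nothing, on every branch, in every pipeline.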

How Hard Waits Break on Real Environments and Device Conditions

A fixed Playwright timeout assumes that every system the test runs on behaves exactly like the machine it was written on.

However, modern applications render content through asynchronous processes: network calls, lazy-loaded components, client-side hydration, GPU-driven animations, and background tasks. These events complete at different speeds based on the environment. Hard waits ignore all of this.

Here is how they fail in real usage:

  • Device Performance Variability: Mobile browsers, low-CPU devices, throttled emulators, or older tablets render significantly slower. A timeout calibrated on a high-end laptop becomes too short, causing the test to interact with elements that aren’t ready yet.
  • Network and Backend Fluctuations: APIs may respond slower due to load spikes, caching behavior, or third-party dependencies. A UI element that usually appears in 1s might sometimes appear in 3s, but hard waits won’t adapt.
  • Animation and Transition Timing: Frameworks like React, Vue, and Angular often delay DOM stability due to CSS transitions, hydration, and re-renders. A test might click during a transition, triggering click interception errors.
  • Conditional or Dynamic UI Rendering: Content that appears based on user roles, feature flags, or A/B tests can take different code paths. A fixed wait cannot predict which path will execute.
  • Browser Engine Differences: WebKit, Chromium, and Firefox each schedule rendering and JS execution differently. Hard waits might pass on one but fail on the others due to subtle timing differences.

These timing mismatches cause:

  • False negatives: element not found or not actionable errors
  • False positives: failure symptoms hidden behind delays
  • Bloated execution times: multiplied delays across the suite

These timing failures remain invisible until the test suite runs somewhere slower, busier, or fundamentally different from your local machine. Different browsers, operating systems, hardware profiles, and network conditions introduce new timing behaviors that hard waits simply cannot handle.

This is where BrowserStack helps. It allows you to run Playwright tests in parallel across a wide range of real environments and provides detailed logs, videos, and screenshots so timing issues are easy to detect and fix before they impact users.


When Using a Fixed Delay Might Be Acceptable and How To Use It Safely

There are rare cases where a hard wait can be helpful, usually when there is no specific UI state or event that can be observed. I still use page.waitForTimeout() sometimes, but only with clear intent.

  • Watching the UI misbehave in real time: A short pause helps me literally see what the browser is doing so I can diagnose flickers, late transitions, or DOM reshuffles before choosing a proper wait strategy.
  • Handling those annoying third-party elements: Ads, cookie consent layers, and analytics prompts love to appear when they want and expose zero useful hooks, so a tiny delay buys the browser a chance to settle.
  • OS-level or system UI moments: Native dialogs for file uploads or permission prompts operate outside the page’s control, and a controlled pause prevents the script from racing ahead too early.
  • Temporary life support during refactors: When the team hasn’t surfaced stable selectors or reliable states yet, a well-marked delay keeps the test suite running until the app becomes test-friendly.
  • Validating performance lag on purpose: When testing slow-network flows or progressive rendering, an intentional delay can simulate a real user stuck waiting while the app assembles itself.
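When I do reach for a hard wait in one of these cases, I wrap it so the intent (and a reminder to remove it) is explicit. A hypothetical helper (debugPause is my own name, not a Playwright API):

```javascript
// Hypothetical wrapper around page.waitForTimeout(): logs why the pause
// exists, so stray hard waits stay easy to grep for and remove later.
async function debugPause(page, ms, reason) {
  console.warn(`[debugPause] ${ms}ms: ${reason} (remove before merging)`);
  await page.waitForTimeout(ms);
}

// Usage inside a test:
// await debugPause(page, 1000, 'watching the cookie banner animate in');
```

Grepping for debugPause before a release then gives a quick inventory of every remaining hard wait and the excuse attached to it.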

Better Alternatives to Fixed Waits in Playwright

I replaced almost every hard wait in my codebase with the following techniques. These approaches make tests faster, more stable, and adaptive to real-world environments.

There are a few patterns that consistently eliminate flakiness:

1. Relying on Built-In Auto-Waits for Interactions

One thing that surprised me when I first moved to Playwright was that I didn’t need to write explicit waits for common UI actions. Playwright already includes auto-waiting as part of every interaction. When I click a button or fill an input, Playwright monitors the DOM and waits for the element to reach an actionable state before executing the step.

That means Playwright automatically checks whether the element:

  • exists in the DOM
  • is visible and within the viewport
  • is not disabled
  • is not covered by another element
  • has finished moving or animating

So instead of this:

await page.waitForTimeout(2000);
await page.click('#submit-btn');

I simply write:

await page.getByRole('button', { name: 'Submit' }).click();

Playwright waits under the hood until the button is truly ready – whether that takes 200ms or 2 seconds.

2. Waiting for UI State Instead of Time

If I care about something becoming visible, attached to the DOM, or enabled, I directly wait for that condition instead of waiting one second hoping it finishes.

await page.locator('#spinner').waitFor({ state: 'detached' });
await expect(page.locator('#results')).toBeVisible();

This works especially well after actions that require fetching data or performing validation.

Another example: waiting for the “Save” button to become enabled after field validation completes.

await page.fill('#email', 'valid@mail.com');
await expect(page.locator('#save')).toBeEnabled();
await page.click('#save');

The test no longer cares if the server responds in 10ms or 3 seconds. It waits only as long as needed.

3. Using Network Events When the UI Depends on Requests

If a button triggers an API call, I wait for the network event instead of hoping the UI finishes in time. This is the most reliable way to validate async flows.

const responsePromise = page.waitForResponse(
  res => res.url().includes('/checkout') && res.status() === 200
);

await page.click('#buyNow');
await responsePromise;

await expect(page.locator('.order-confirm')).toBeVisible();

This gives me airtight synchronization between fetching data and checking what the UI does next.

4. Using Assertions as Smart Waits

Assertions are naturally retry-driven in Playwright. They keep trying until the condition is true or a timeout occurs, so I use them as synchronization points.

await expect(page.locator('#profile')).toContainText('John Doe');

Instead of passive waiting, the test actively checks the UI until state matches expectations. It is a better version of waiting because it knows why it is waiting.

5. Using waitForSelector() When Working With Dynamic Elements

Sometimes new elements appear only after transitions or route changes. I explicitly wait for the selector only when auto-waits are not involved.

await page.goto('/notifications');
await page.waitForSelector('.notification-item');
await expect(page.locator('.notification-item')).toHaveCount(5);

Still no fixed timing. It adapts to fast and slow devices.

6. Combining Multiple Signals for Complex UIs

There are edge cases where UI state, network events, and visual readiness must line up together. Instead of stacking multiple hard waits, I combine async waits in a structured way.

await Promise.all([
  page.waitForSelector('.chart-loaded'),
  page.waitForResponse(res => res.url().includes('/data') && res.ok())
]);
await expect(page.locator('.chart')).toBeVisible();

Whether the environment is blazing fast on a MacBook or sluggish on a cheap Android device, the test adapts.

Validating Dynamic Wait Behavior on Real Devices and Browsers With BrowserStack

Even when my local tests pass, timing behaves differently in production-like environments. Real devices have slower CPUs and different rendering pipelines. Browsers handle animations, script execution, and network scheduling differently. That is where BrowserStack becomes essential.

I run the same Playwright suite across:

  • Older Android devices with limited processing power
  • Safari on iOS where input events behave differently
  • Low-bandwidth networks that delay important UI updates
  • Browsers with distinct rendering and script timing characteristics

Here is an example of executing these tests on BrowserStack’s real device cloud:

// playwright.config.ts
import { devices } from '@playwright/test';

export default {
  projects: [
    {
      name: 'Chrome on Pixel 7',
      use: {
        browserName: 'chromium',
        ...devices['Pixel 7'],
        browserstack: {
          username: process.env.BSTACK_USERNAME,
          accessKey: process.env.BSTACK_ACCESS_KEY,
        },
      },
    },
    {
      name: 'Safari on iPhone 14',
      use: {
        browserName: 'webkit',
        ...devices['iPhone 14'],
        browserstack: {
          username: process.env.BSTACK_USERNAME,
          accessKey: process.env.BSTACK_ACCESS_KEY,
        },
      },
    },
  ],
};

I specifically watch for:

  • Actions that auto-wait correctly on fast systems but break on slower ones
  • Animations or loaders that extend unpredictably in certain browsers
  • Assertions that pass locally but fail under realistic latency

When something does fail, I rely on:

  • Video recordings to show timing differences visibly
  • Network logs to analyze delayed requests or resource throttling
  • Console logs to catch runtime errors missed locally

This helps me uncover timing issues before customers feel them.


Conclusion

Hard waits feel like a quick fix, but they introduce hidden instability into test suites. By replacing page.waitForTimeout() with state-aware synchronization like auto-wait interactions, locator assertions, and network-aware waits, tests adapt to real application behavior.

With features like test insights dashboards, performance profiling for every session, network and geolocation simulation, and integrations with CI systems, BrowserStack shows exactly where tests slow down or break due to poor waiting strategies. That visibility makes it easier to identify which flows still rely on fragile timing and to fix them permanently.

