Errors tab

Analyze HTTP and protocol errors to isolate failing flows and prioritize fixes.

Use the Errors tab during and after a test run to pinpoint failing requests, see which user flows are most affected, and drill into first-occurrence timings. Start at the distribution to gauge overall severity, then move into flow-level detail to decide the next debugging step.

[Image: Errors tab showing the errors distribution, highest error percentage chart, and error breakdown table]

Errors distribution by code

This section aggregates every failed request by HTTP status or protocol response code. It shows:

  • Total errors: The sum of all failed requests (e.g., 30,505). Use this to quickly judge test health. A rapid climb often signals misconfiguration or an outage.
  • Bar list of codes: Each bar represents a code (404, 500, 503, etc.) with a numeric count. A single dominant code usually means a structural issue (missing endpoint, bad path, auth failure) rather than random instability.

How to use it:

  1. Scan for unexpected codes (for example, large 404 counts mean your script might be requesting a removed resource).
  2. Prioritize fixing the highest volume code before lower-frequency issues; reducing one widespread root cause typically resolves many symptoms.
  3. Correlate the spike timestamp with your Logs or Requests tab to find exact request examples.
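
If you also export raw results locally, you can reproduce this distribution offline to cross-check the panel. The sketch below is a minimal Python example; the file name results.csv and the responseCode and success columns are assumptions based on a JMeter-style JTL/CSV export, not guaranteed BrowserStack output.

  import csv
  from collections import Counter

  # Count failed samples per response code from a JMeter-style JTL/CSV export.
  # Assumes columns named "responseCode" and "success"; adjust to your export.
  error_counts = Counter()
  with open("results.csv", newline="") as f:
      for row in csv.DictReader(f):
          if row.get("success", "").lower() != "true":  # failed sample
              error_counts[row.get("responseCode", "unknown")] += 1

  total_errors = sum(error_counts.values())
  print(f"Total errors: {total_errors}")
  for code, count in error_counts.most_common():
      print(f"{code}: {count}")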

Edge cases:

  • If you see only 4xx codes, validate test data (payloads, IDs) and authentication headers.
  • If you see mixed 5xx codes, investigate backend service stability (infrastructure saturation, downstream dependency failures).
  • If counts look inflated relative to virtual users, confirm your script is not retrying aggressively.

Highest error percentage

This bar chart ranks labels (user flows or thread groups) by the percentage of their requests that resulted in errors. A high percentage does not always mean a high raw count, so use this view to catch smaller flows that are completely broken.

Displayed metrics:

  • Label / Thread group mode: Toggle between logical flow labels and underlying thread groups to isolate whether the script grouping or the infrastructure segment causes errors.
  • Percentage bars: Show error rate (errors / total requests for that label). A 100% bar means the flow never succeeded.
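
The percentage is straightforward to reproduce offline if you want to verify a label: errors divided by total requests for that label. A minimal sketch, again assuming a JMeter-style CSV export with label and success columns:

  import csv
  from collections import defaultdict

  # Compute error rate per label: errors / total requests for that label.
  totals, errors = defaultdict(int), defaultdict(int)
  with open("results.csv", newline="") as f:
      for row in csv.DictReader(f):
          label = row.get("label", "unknown")
          totals[label] += 1
          if row.get("success", "").lower() != "true":
              errors[label] += 1

  for label in sorted(totals, key=lambda l: errors[l] / totals[l], reverse=True):
      rate = 100.0 * errors[label] / totals[label]
      print(f"{label}: {rate:.1f}% ({errors[label]}/{totals[label]})")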

How to act:

  1. Sort visually: Identify 100% or near-100% flows first—they tend to be configuration issues (wrong endpoint, missing auth) rather than performance bottlenecks.
  2. Compare relative percentages: A flow with 20% errors in a checkout process is more urgent than a utility call with 5% errors.
  3. Switch to thread group view if all labels look healthy but overall errors remain high; this may reveal an isolated misconfigured group.

Error breakdown

The breakdown table lists each label with its first occurrence timestamp plus total error count, making it easy to trace when a failure began.

Columns:

  • Label: The scripted flow name (for example, Add to Cart Flow).
  • Response body (when captured): Helps you determine whether the error is an HTML not-found page, a JSON error object, or an empty response.
  • Description: A summarized interpretation (for example, Not Found – Resource does not exist) to save time parsing raw payloads.
  • First occurrence: Timestamp (test-relative) when the first error for that label appeared; align this with deployment events or load increases.
  • Error count: Total number of failed requests for the label.
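
If you prefer to verify these values against a raw export, the sketch below derives the first occurrence and error count per label. It assumes a JMeter-style CSV with label, success, and timeStamp (epoch milliseconds) columns; adjust the column names to match your export.

  import csv
  from datetime import datetime, timezone

  # Find the first error timestamp and total error count per label.
  first_seen, counts = {}, {}
  with open("results.csv", newline="") as f:
      for row in csv.DictReader(f):
          if row.get("success", "").lower() == "true":
              continue
          label = row.get("label", "unknown")
          ts = int(row["timeStamp"]) / 1000.0
          counts[label] = counts.get(label, 0) + 1
          if label not in first_seen or ts < first_seen[label]:
              first_seen[label] = ts

  for label, ts in sorted(first_seen.items(), key=lambda kv: kv[1]):
      when = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
      print(f"{label}: first error at {when}, {counts[label]} total")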

Filters and controls

The Filters control (top-right) lets you narrow error data by time range and region.

Usage tips:

  • Adjust the time range to focus on the window when errors first spiked; compare before and after a deployment.
  • Select a specific region to see if failures are localized (CDN, routing, regional infrastructure). Switch between regions to confirm scope.
  • Clear filters before exporting so the shared link reflects full test context.

Troubleshooting guidance

Follow these steps to move from symptom to root cause:

  1. Identify the dominant error code in the distribution panel.
  2. Locate the flows with the highest error percentage to see whether the issue is localized or global.
  3. Open the breakdown row; inspect the first occurrence timestamp and cross-check your application logs.
  4. Drill into individual requests (via the Requests tab) to confirm the request path, headers, and payload.
  5. Re-run a small subset (using a reduced virtual user count) after applying a fix to validate the improvement.

Common scenarios:

  • Large 404 counts: Verify the URL paths in your script; ensure the environment base URL matches the deployment stage.
  • Sudden 500 spike after ramp-up: Backend capacity limits; scale the service or add caching.
  • High error percentage in only one region: Possible CDN edge misconfiguration or geo-based routing issue.
  • Persistent 401/403: Token expiry or a missing scope; refresh credentials and retest.
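
When 401/403 errors trace back to token expiry, refresh the credential in a setup step before retesting. The sketch below is a hypothetical example using only the Python standard library; the token endpoint URL, client credentials, and token.txt output file are placeholders for your own auth flow, not part of BrowserStack's API.

  import json
  import urllib.parse
  import urllib.request

  # Fetch a fresh bearer token before a retest (hypothetical OAuth2
  # client-credentials endpoint; replace the URL and credentials with your own).
  data = urllib.parse.urlencode({
      "grant_type": "client_credentials",
      "client_id": "YOUR_CLIENT_ID",
      "client_secret": "YOUR_CLIENT_SECRET",
  }).encode()
  req = urllib.request.Request("https://auth.example.com/oauth/token", data=data)
  with urllib.request.urlopen(req) as resp:
      token = json.load(resp)["access_token"]

  # Write the token where your test script reads its Authorization header from.
  with open("token.txt", "w") as f:
      f.write(token)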

Best practices

  • Keep scripts resilient: Add assertions to fail fast on unexpected redirect chains or blank responses.
  • Version your test data: Outdated IDs often produce cascading 404s.
  • Tag thread groups with environment or feature flags so you can filter quickly during incidents.
  • Export or snapshot the errors view immediately after a test for audit and regression tracking.
  • Combine error analysis with latency trends; performance degradation often precedes certain classes of 5xx errors.
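
To see whether latency degradation precedes an error spike, you can bucket a raw export by minute and compare error rate with average response time. A minimal sketch, assuming the same JMeter-style CSV with timeStamp (epoch ms), elapsed (ms), and success columns:

  import csv
  from collections import defaultdict

  # Per-minute error rate and average latency from a JMeter-style CSV export.
  buckets = defaultdict(lambda: {"total": 0, "errors": 0, "elapsed": 0})
  with open("results.csv", newline="") as f:
      for row in csv.DictReader(f):
          minute = int(row["timeStamp"]) // 60000
          b = buckets[minute]
          b["total"] += 1
          b["elapsed"] += int(row["elapsed"])
          if row.get("success", "").lower() != "true":
              b["errors"] += 1

  for minute in sorted(buckets):
      b = buckets[minute]
      rate = 100.0 * b["errors"] / b["total"]
      avg_ms = b["elapsed"] / b["total"]
      print(f"minute {minute}: {rate:.1f}% errors, avg {avg_ms:.0f} ms")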

See also:

  • Requests tab – Inspect individual request/response details.
  • Logs tab – Correlate application or script logs with error spikes.
  • Summary tab – Overall performance metrics to contextualize error impact.

Next steps

After prioritizing and fixing high-volume errors, re-run the test focusing on previously failing flows. Target an error rate approaching zero for critical transactional paths (checkout, authentication) before scaling user counts.

If you continue to see unexplained errors, contact BrowserStack Support with the test run link and timestamps for deeper investigation.
