
Test Case ID Tagging

Learn how to tag test cases in your automation script with existing IDs in Test Management.

Use Test Case ID tagging to associate test cases in your automation script with their corresponding unique IDs in Test Management. This ensures accurate mapping of test results, regardless of how test cases are organized or executed.

Why use Test Case ID tagging?

  • Prevent test case duplication
    By tagging test cases in your automation script with their corresponding unique IDs in Test Management, you prevent the creation of duplicate test cases.
  • Understand automation coverage
    When test cases are tagged, you gain better visibility into which test cases have automated counterparts. This helps you track the overall growth of your test automation efforts.

There are two methods for embedding test case IDs into your automated tests to suit your workflow:

  • Property-based ID Tagging
    This method is particularly useful when you want to link test results from JUnit/BDD JSON files with Test Management.

  • Title-based ID Tagging
    This method is particularly useful when your test case title contains the test case ID and you use either the SDK or Test Observability.

Prerequisites

  • An existing Test Management account.
  • A project with defined test cases in Test Management.

Property-based Test Case ID tagging

To ensure accurate results, assign a custom property named id within your test automation script and set its value to the corresponding test case ID (for example, <test-case-id>) from Test Management. When the generated test report is uploaded, Test Management uses the ID assigned to this property to match and update the results of the corresponding test case.

Use the following syntax within your test script:

Framework     Assign test case ID
TestNG        Reporter.log("[[PROPERTY|id=<test-case-id>]]\n", true);
PyTest        record_property("id", "<test-case-id>")
Playwright    console.log(`[[PROPERTY|id=${id}]]`);
  • Replace <test-case-id> or ${id} with the actual ID of the corresponding test case in Test Management.
  • Property-based ID tagging works with any framework: include a property named id and assign the corresponding test case ID as its value (a generic sketch follows the samples below).

The following sample code snippets show how to add a property-based test case ID to your test suites.

TestNG

package com.browserstack;

import org.testng.Assert;
import org.testng.Reporter;
import org.testng.annotations.Test;

public class BStackDemoTest {
  @Test
  public void testUpdateUserEmail() {
    Reporter.log("[[PROPERTY|id=TC-1]]\n", true);
    // [..]
  }

  @Test
  public void addToCart() {
    Reporter.log("[[PROPERTY|id=TC-2]]\n", true);
    // [..]
  }
} 
PyTest

import pytest

def test_div_zero_exception(record_property):
    """
    pytest.raises can assert that exceptions are raised (catching them)
    """
    with pytest.raises(ZeroDivisionError):
        record_property("id", "TC-52628")
        x = 1 / 0

def test_keyerror_details(record_property):
    """
    The raised exception can be referenced, and further inspected (or asserted)
    """
    record_property("id", "TC-52629")
    my_map = {"foo": "bar"}

    with pytest.raises(KeyError) as ke:
        baz = my_map["baz"]

    # Our KeyError should reference the missing key, "baz"
    assert "baz" in str(ke)

def test_approximate_matches(record_property):
    """
    pytest.approx can be used to assert "approximate" numerical equality
    (compare to "assertAlmostEqual" in unittest.TestCase)
    """
    record_property("id", "TC-52630")
    assert 0.1 + 0.2 == pytest.approx(0.3)
Playwright

// Import the necessary Playwright testing library components
const { test, expect } = require('@playwright/test');

test.describe('Playwright Demo Tests', () => {
  test('testUpdateUserEmail', async ({ page }) => {
    console.log(`[[PROPERTY|id=TC-1]]`); // Log the test case ID

    // Your test code here, e.g., navigating to a page, interacting with elements, etc.
    await page.goto('https://example.com');
    // Assertions to verify test case, e.g.:
    // await expect(page).toHaveURL('https://example.com/updated');
  });

  test('addToCart', async ({ page }) => {
    console.log(`[[PROPERTY|id=TC-2]]`); // Log the test case ID

    // Your test code for adding an item to the cart
    await page.goto('https://example.com');
    // Additional actions and assertions
  });
}); 
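As noted above, property-based ID tagging is not limited to these frameworks. Because the marker is plain text written to standard output, a similar approach can work elsewhere, provided your framework's JUnit reporter captures console output into <system-out> (or exposes an equivalent property mechanism). The helper below is a minimal, hedged JavaScript sketch; the helper name and the assumption about stdout capture are not part of the official samples.

// Minimal sketch: a reusable helper that prints the property marker.
// This only works if your framework's JUnit reporter captures stdout into <system-out>.
function tagTestCase(testCaseId) {
  console.log(`[[PROPERTY|id=${testCaseId}]]`);
}

// Example usage inside any test body:
// tagTestCase('TC-3');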

Execute test suite

Run your test suite using one of the following:

  1. A CI/CD pipeline, such as Azure DevOps or Jenkins.
  2. The command-line interface (CLI).
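If your workflow relies on JUnit-XML reports, the test runner must be configured to produce them before the results can be imported. As one hedged example, Playwright ships a built-in junit reporter that can be enabled in playwright.config.js; the output path below is an arbitrary choice, not a required value.

// playwright.config.js — minimal sketch; only the reporter entries matter here.
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  reporter: [
    ['list'],                                              // human-readable console output
    ['junit', { outputFile: 'results/junit-report.xml' }]  // JUnit XML for import (path is an example)
  ],
});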

View results

After successful test execution, Test Management automatically imports and associates the results, offering a centralized view of all test runs.

Reports

The following are sample test run reports.

TestNG

<?xml version="1.0" encoding="UTF-8"?>
<!-- Generated by org.testng.reporters.JUnitReportReporter -->
<testsuite hostname="Sh*****" failures="0" tests="2" name="com.browserstack.BStackDemoTest" time="0.006" errors="0" timestamp="2023-04-20T10:46:28 IST" skipped="0">
   <testcase classname="com.browserstack.BStackDemoTest" name="testUpdateUserEmail" time="0.000" />
   <system-out><![CDATA[[[PROPERTY|id=TC-1]]]]></system-out>
   <testcase classname="com.browserstack.BStackDemoTest" name="addToCart" time="0.006" />
   <system-out><![CDATA[[[PROPERTY|id=TC-2]]]]></system-out>
</testsuite>
<!-- com.browserstack.BStackDemoTest -->
PyTest

<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
   <testsuite name="pytest" errors="0" failures="0" skipped="0" tests="7" time="0.017" timestamp="2023-04-18T18:10:07.535667" hostname="Sh*****">
      <testcase classname="00_empty_test" name="test_empty" time="0.000" />
      <testcase classname="01_basic_test" name="test_example" time="0.000" />
      <testcase classname="02_special_assertions_test" name="test_div_zero_exception" time="0.000">
         <properties>
            <property name="id" value="TC-52628" />
         </properties>
      </testcase>
      <testcase classname="02_special_assertions_test" name="test_keyerror_details" time="0.000">
         <properties>
            <property name="id" value="TC-52629" />
         </properties>
      </testcase>
      <testcase classname="02_special_assertions_test" name="test_approximate_matches" time="0.000">
         <properties>
            <property name="id" value="TC-52630" />
         </properties>
      </testcase>
      <testcase classname="03_simple_fixture_test" name="test_with_local_fixture" time="0.000" />
      <testcase classname="03_simple_fixture_test" name="test_with_global_fixture" time="0.000" />
   </testsuite>
</testsuites>
Playwright

<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="Playwright Demo Tests" tests="2">
  <testcase name="testUpdateUserEmail" classname="Playwright Demo Tests" time="2.345">
    <system-out>
      <![CDATA[[[PROPERTY|id=TC-1]]]]>
    </system-out>
  </testcase>
  <testcase name="addToCart" classname="Playwright Demo Tests" time="1.567">
    <system-out>
      <![CDATA[[[PROPERTY|id=TC-2]]]]>
    </system-out>
  </testcase>
</testsuite>

Title-based Test Case ID tagging

Use title-based test case ID tagging to embed unique test case IDs from Test Management directly in your test case titles. After successful test execution, Test Management uses the title and test case ID combination to locate and update the correct test case, regardless of its file system location.

Key Benefits

  • Flexibility with Folder Structures
    Eliminates the issue of duplicate test cases arising from differences in folder organization between your test scripts and Test Management. Results are reliably updated to the correct test case regardless of its file system location.

  • Streamline Test Result Updates
    Automates the process of updating test results.

  • Simple Implementation
    Add the test case ID after the test title in your script.

How does it work?

Title-based test case ID tagging works through the following steps:

  1. Title and ID inclusion
     Ensure each test case title includes an ID within your automated test scripts.

  2. Test execution and result mapping
     When you execute your test scripts using the SDK, Test Observability, or JUnit-XML reports, Test Management reads the embedded title and test case ID to locate and update the correct test case.

Example:
Consider a scenario where your test case resides in a different folder within your Test Management system compared to its local file organization. With title-based test case ID tagging, the platform accurately identifies the test case using its ID, bypassing the folder structure mismatch.
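To illustrate the matching idea only (this is not Test Management's actual implementation), a title such as addToCart TC-17538 can be split into a display title and an embedded ID with a simple pattern. The helper name and regular expression below are assumptions used purely for illustration.

// Illustrative sketch only — shows how a title and an embedded test case ID can be separated.
function splitTitleAndId(title) {
  const match = title.match(/\b(TC-\d+)\b/);            // find an embedded ID like "TC-17538"
  return {
    id: match ? match[1] : null,                         // the test case ID, if present
    name: title.replace(/\s*\bTC-\d+\b\s*/, ' ').trim()  // the title without the ID
  };
}

console.log(splitTitleAndId('addToCart TC-17538'));
// { id: 'TC-17538', name: 'addToCart' }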

  • You can add the test case ID either before or after the title; placing it after the title is recommended.
  • Title-based ID tagging is compatible with all frameworks: include your test case ID in the test title to use this feature (a hedged example for an additional framework follows the samples below).

The following sample code snippets show how to add a title-based test case ID to your test suites.

Playwright

// Import the necessary Playwright testing library components
const { test, expect } = require('@playwright/test');

test.describe('Playwright Demo Tests', () => {
  test('testUpdateUserEmail TC-110044', async ({ page }) => {

    // Your test code here, e.g., navigating to a page, interacting with elements, etc.
    await page.goto('https://example.com');
    // Assertions to verify test case, e.g.:
    // await expect(page).toHaveURL('https://example.com/updated');
  });

  test('addToCart TC-17538', async ({ page }) => {

    // Your test code for adding an item to the cart
    await page.goto('https://example.com');
    // Additional actions and assertions
  });
}); 
Cypress

// Cypress provides describe() and it() globally, so no imports are required
describe('Cypress Demo Tests', () => {
  it('testUpdateUserEmail TC-110044', () => {
    // Your test code here
    cy.visit('https://example.com');
    // Assertions
    // cy.url().should('include', '/updated');
  });
  it('addToCart TC-17538', () => {
    // Your test code
    cy.visit('https://example.com');
    // Additional actions and assertions
  });
});
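Because any framework that reports the test title can be used, the same pattern applies beyond Playwright and Cypress. The snippet below is a hedged sketch for Jest, which is not covered by the official samples above; it simply shows the ID appended to the test title.

// Hedged sketch: title-based test case IDs in Jest (framework choice is an assumption).
describe('Jest Demo Tests', () => {
  test('testUpdateUserEmail TC-110044', () => {
    // Your test code here
    expect(1 + 1).toBe(2); // placeholder assertion
  });

  test('addToCart TC-17538', () => {
    // Your test code here
    expect([1, 2, 3]).toContain(2); // placeholder assertion
  });
});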

View results

After successful test execution, Test Management automatically imports and associates the results, offering a centralized view of all test runs.

Reports

The following are sample test run reports.

Playwright

<testsuites id="" name="" tests="2" failures="1" skipped="0" errors="0" time="9.853">
  <testsuite name="Playwright Demo Tests" timestamp="1688042504919" hostname="" tests="2" failures="1" skipped="0" time="9.286" errors="0">
    <testcase name="testUpdateUserEmail TC-110044" classname="Playwright Demo Tests" time="6.827">
    </testcase>
    <testcase name="addToCart TC-17538" classname="Playwright Demo Tests" time="2.459">
      <failure message="addToCart TC-17538" type="FAILURE">
        Error: expect(received).toEqual(expected) // deep equality
        Expected: "StackDemos"
        Received: "StackDemo"
        // Error details here
      </failure>
    </testcase>
  </testsuite>
</testsuites>
Cypress

<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="Mocha Tests" time="26.088" tests="5" failures="3">
  <testsuite name="Root Suite" timestamp="2023-06-29T14:58:33" tests="0" file="cypress/e2e/first-test.cy.js" time="0.000" failures="0">
  </testsuite>
  <testsuite name="Cypress Demo Tests" timestamp="2023-06-29T14:58:33" tests="2" time="23.182" failures="1">
    <testcase name="testUpdateUserEmail TC-110044" time="2.142" classname="Cypress Demo Tests">
    </testcase>
    <testcase name="addToCart TC-17538" time="0.000" classname="Cypress Demo Tests">
      <failure message="addToCart TC-17538" type="AssertionError">
        <![CDATA[AssertionError: Timed out retrying after 4000ms: Expected to find element: `#logo`, but never found it.
    at Context.eval (webpack:///./cypress/e2e/first-test.cy.js:6:22)]]>
      </failure>
    </testcase>
  </testsuite>
</testsuites>
