Automate tests
Learn how to convert your mobile test cases into automated test scripts using Test Companion.
Test Companion generates executable mobile automation scripts from your test cases, natural language descriptions, or by observing your app on a real device. The AI writes code in your selected framework and language, using actual device UI hierarchy and screenshot data to choose reliable locators, so the resulting scripts are ready to run rather than skeletal code.
How it works
When you request Test Companion to write automation code for your mobile app, the following process takes place:
- A real device session begins: Test Companion launches your app on a BrowserStack device (iOS or Android, depending on your app file).
- The AI explores the relevant screens: It captures the UI hierarchy (an XML tree of elements including accessibility IDs, resource IDs, class names, text labels, and position data) and takes screenshots for visual context.
- The AI writes the code: Using the captured data, it generates automation scripts with accurate locators, appropriate wait strategies, and assertions. The code adheres to the framework and language conventions defined in your System Instructions and Rules.
- The scripts are available in your IDE: The generated files are created in your workspace, following your project’s folder structure. You can review, edit, and execute them immediately.
Supported frameworks
Test Companion generates mobile automation code for any framework you choose. By selecting a specific framework, you ensure that the generated scripts are compatible with your project’s architecture and testing requirements.
To set your preference, define the framework in your System Instructions or include it in your prompt. You can also open your project in a workspace that already has the framework environment initialized.
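For example, a System Instructions entry that pins the framework might look like the snippet below. The wording and folder names are illustrative, not required values:

```
Framework: Appium with the Appium Python Client
Language: Python
Test runner: Pytest
Folder structure: page objects in pages/, tests in tests/mobile/
Naming: test files follow the test_<feature>.py convention
```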
Generate automation code
You can generate automation code from Test Companion in three ways:
- From existing test cases: If you already have test cases (either generated by Test Companion or created by your team), you can ask the AI to convert them into executable automation scripts.
- From a natural language description: Describe what you want to test in plain language, without formal test cases. The AI will interpret your description, explore the app to gather locator data, and generate the automation code.
- From the Automate button in the Test Companion interface: If test cases are displayed in Test Companion’s test case management tab, you can click the Automate button on individual test cases. The AI reads the test case details and generates corresponding scripts.
From existing test cases
If you already have test cases (either generated by Test Companion or created by your team), you can ask the AI to convert them into executable automation scripts.
Example prompt:
Write Appium automation scripts in Python for the following test cases:
1. Verify a successful login with valid credentials.
2. Verify the error message displayed when logging in with an incorrect password.
3. Verify that the "Forgot Password" link navigates to the password reset screen.
Use Pytest as the test runner and place the files in tests/mobile/.
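A generated file for the first test case might look roughly like the sketch below. Everything here is illustrative: the locator values are hypothetical accessibility IDs, and `FakeDriver` is a stub standing in for the real Appium driver so the sketch runs without a device:

```python
# Sketch of a generated login test using the Page Object pattern.
# FakeDriver/FakeElement stand in for a real Appium session; the
# accessibility IDs ("login_username", etc.) are placeholders.

class FakeDriver:
    """Minimal stand-in for an Appium driver session."""
    def __init__(self):
        self.screen = "login"
        self.fields = {}

    def find_element(self, accessibility_id):
        return FakeElement(self, accessibility_id)

class FakeElement:
    def __init__(self, driver, accessibility_id):
        self.driver = driver
        self.id = accessibility_id

    def send_keys(self, text):
        self.driver.fields[self.id] = text

    def click(self):
        # Simulate a successful login once both fields are filled in.
        filled = (self.driver.fields.get("login_username")
                  and self.driver.fields.get("login_password"))
        if self.id == "login_button" and filled:
            self.driver.screen = "home"

class LoginPage:
    """Page object with locator constants and action methods."""
    USERNAME = "login_username"      # hypothetical accessibility IDs
    PASSWORD = "login_password"
    LOGIN_BUTTON = "login_button"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.find_element(self.USERNAME).send_keys(username)
        self.driver.find_element(self.PASSWORD).send_keys(password)
        self.driver.find_element(self.LOGIN_BUTTON).click()

def test_successful_login_with_valid_credentials():
    driver = FakeDriver()
    LoginPage(driver).login("demo_user", "correct-horse")
    # Descriptive assertion message, as in the generated code.
    assert driver.screen == "home", "Expected home screen after valid login"

test_successful_login_with_valid_credentials()
```

In a real generated file, the stub would be replaced by an Appium driver session and explicit waits, but the page-object structure stays the same.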
From a natural language description
You can also describe the testing requirements in plain language, without formal test cases. The AI will interpret your description, explore the app to gather locator data, and generate the automation code.
Example prompt:
Write automation tests for the complete user registration flow in my Android app. Use Java with Appium and TestNG. The flow includes:
- Entering a name, email, and password.
- Accepting the terms and conditions.
- Tapping the Register button.
- Verifying that the home screen appears.
Include negative tests for invalid email formats and passwords that are too short.
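The negative cases in a prompt like this typically become data-driven tests. The sketch below shows one way to table the cases; the email pattern and eight-character password minimum are assumptions standing in for the app's real validation rules:

```python
# Illustrative data-driven negative cases for a registration flow.
# The validation rules here (email regex, 8-char minimum) are assumed,
# not taken from any real app.
import re

EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
MIN_PASSWORD_LENGTH = 8

def registration_error(email, password):
    """Return the expected validation error, or None if inputs are valid."""
    if not EMAIL_PATTERN.match(email):
        return "invalid_email"
    if len(password) < MIN_PASSWORD_LENGTH:
        return "password_too_short"
    return None

NEGATIVE_CASES = [
    ("not-an-email", "longenough1", "invalid_email"),
    ("user@nodot", "longenough1", "invalid_email"),
    ("user@example.com", "short", "password_too_short"),
]

for email, password, expected in NEGATIVE_CASES:
    assert registration_error(email, password) == expected
```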
Click the Automate button on a test case
If you have test cases displayed in Test Companion’s test case management tab, you can click the Automate button on individual test cases or select multiple test cases to automate them in bulk. The AI will read the test case details and generate corresponding scripts.
How the AI selects locators
One of the most challenging aspects of mobile automation is selecting locators that are both accurate and resilient to UI changes. Test Companion employs a tiered locator strategy:
- Accessibility ID: The preferred locator, stable across builds and supported on both iOS and Android.
- Resource ID (Android) or test identifier (iOS): Used when accessibility IDs are not available.
- Text Content: Used for buttons, labels, and links when ID-based locators are absent.
- XPath: Utilized as a last resort. The AI generates the most specific XPath possible and adds a comment explaining why an ID-based locator was not available.
The AI also incorporates visual context from screenshots to validate that the chosen locator actually points to the correct element. If a locator is ambiguous (for example, multiple elements sharing the same text), the AI adds additional conditions or switches to a more specific strategy.
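The tiered strategy above can be sketched as a simple priority function. The element here is a dict of attributes captured from the UI hierarchy; the exact keys Test Companion inspects are an assumption for illustration:

```python
# Sketch of a tiered locator choice, in priority order. The element is a
# dict of UI-hierarchy attributes; the key names are illustrative.

def choose_locator(element):
    """Return a (strategy, value) pair, preferring the most stable option."""
    if element.get("accessibility_id"):
        return ("accessibility id", element["accessibility_id"])
    if element.get("resource_id"):   # Android resource ID / iOS test identifier
        return ("id", element["resource_id"])
    if element.get("text"):          # buttons, labels, links
        return ("text", element["text"])
    # Last resort: fall back to the element's XPath.
    return ("xpath", element.get("xpath", "//*"))

button = {"text": "Log in", "xpath": "//android.widget.Button[1]"}
assert choose_locator(button) == ("text", "Log in")
```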
What the generated code includes
The AI does not produce bare skeleton code. A typical generated test file includes:
- Page Object Classes: If your System Instructions or Rules specify the Page Object pattern, the AI creates separate page object files for each screen, including locator constants and action methods.
- Test Methods: Each test case is transformed into a test method with proper setup, action steps, assertions, and teardown.
- Wait Strategies: Explicit waits for element visibility, clickability, or presence. The AI avoids using hard-coded sleep calls unless absolutely necessary (and will provide a comment explaining why).
- Assertions: Meaningful assertions with descriptive messages that facilitate diagnosing failures.
- Capability Configuration: The desired capabilities block for connecting to BrowserStack or a local Appium server, pre-configured with your device targets.
- Test Data: Constants or data providers for input values, separating test data from test logic.
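As an illustration, the capability configuration for a BrowserStack Android session might resemble the sketch below. The device name, OS version, and `bs://<app-id>` value are placeholders you would replace with your own targets and uploaded app ID:

```python
# Sketch of a W3C capability block for a BrowserStack Android session.
# All values are placeholders; credentials would normally come from
# environment variables rather than source code.
import os

def browserstack_capabilities():
    return {
        "platformName": "android",
        "appium:app": "bs://<app-id>",           # uploaded app ID (placeholder)
        "appium:automationName": "UiAutomator2",
        "bstack:options": {
            "userName": os.environ.get("BROWSERSTACK_USERNAME", "<user>"),
            "accessKey": os.environ.get("BROWSERSTACK_ACCESS_KEY", "<key>"),
            "deviceName": "Google Pixel 7",      # example device target
            "osVersion": "13.0",
        },
    }

caps = browserstack_capabilities()
assert caps["bstack:options"]["deviceName"] == "Google Pixel 7"
```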
Run the generated tests
After the AI generates your test scripts, you have several options for running them. See the Run Tests page for detailed instructions on each approach.
Best practices for better automation code
Configure your System Instructions. The more the AI knows about your project (framework, language, folder structure, naming conventions), the better the generated code will align with your codebase. Refer to Chat Settings for details.
Use Rules for coding standards. If you have specific patterns you want to enforce (for example, “always use the Page Object pattern,” “never use Thread.sleep(),” “locators must use data-testid attributes”), define them as Rules. The AI will apply them automatically.