Prompt guide
Write prompts that produce better results from Test Companion. This page covers prompt structure, context strategies, and tested examples for every capability.
The clearer your prompt, the better the output. Test Companion runs in auto mode by default, which means the words you choose and the context you attach decide which capability handles your task. A vague prompt routes unpredictably and produces shallow results. A specific prompt routes correctly and produces precise results.
Prompt structure
A strong prompt has three parts.
- Task: what you want Test Companion to do. Start with an action verb.
- Scope: the area, feature, or file the task applies to.
- Constraints: the specific requirements, frameworks, formats, or boundaries that must be respected.
Weak prompt:
Test my app.
Strong prompt:
Test the checkout flow on localhost:3000/checkout. Verify that adding, removing, and
updating cart quantities works correctly. Check edge cases like zero quantity and
negative values.
The weak prompt forces Test Companion to guess what “app” means, which pages to visit, and what to look for. The strong prompt gives a clear task, a defined scope, and specific constraints.
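For a sense of what the strong prompt steers toward, its cart-quantity edge cases could be pinned down by checks like the sketch below. The `CartItem` model and `updateQuantity` function are hypothetical stand-ins for illustration, not the app's real checkout code:

```typescript
// Hypothetical cart model; Test Companion would exercise the real
// implementation on localhost:3000/checkout instead.
type CartItem = { id: string; quantity: number };

// Update an item's quantity: remove the item at zero, reject negatives.
function updateQuantity(cart: CartItem[], id: string, quantity: number): CartItem[] {
  if (quantity < 0) throw new Error("Quantity cannot be negative");
  if (quantity === 0) return cart.filter((item) => item.id !== id);
  return cart.map((item) => (item.id === id ? { ...item, quantity } : item));
}

const cart: CartItem[] = [{ id: "sku-1", quantity: 2 }];
console.log(updateQuantity(cart, "sku-1", 5)); // quantity updated to 5
console.log(updateQuantity(cart, "sku-1", 0)); // item removed, cart is empty
```

Naming the zero and negative cases in the prompt is what makes Test Companion probe them instead of stopping at the happy path.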
Context strategies
Adding context alongside your prompt improves accuracy. You can combine multiple context types in a single message.
Open the context menu
Click the + icon or type @ in the chat to open the context menu. Both surface the same options.
| Context | Use it for |
|---|---|
| Add File | Attach a PRD, spec, screenshot, test plan, or any reference document. |
| Add Folder | Give Test Companion access to a directory of source or test files. |
| Git Commits | Reference specific commits to scope work to a code change. |
| Terminal | Pass recent terminal output, including stack traces and command results. |
| Problems | Pass the current IDE Problems panel contents (linter and compiler errors). |
| Paste URL to fetch contents | Point Test Companion at a live page or web resource. |
Paste text directly
Type or paste requirements, error messages, API responses, or any text directly into the chat. This is fastest when the information is already on your clipboard or comes from a system Test Companion cannot access (for example, an external CI/CD log).
Choose context by capability
Different capabilities benefit from different context. Use the guide below to pick the right inputs.
| If you are running… | Attach… |
|---|---|
| Generate Test Cases | A requirements document or PRD using Add File, or a live URL using Paste URL. |
| Dev Testing | The changed source files using Add Folder or @, plus relevant commits using Git Commits. |
| Write Automation Code | An existing test case file or framework reference using Add File. |
| Fix Flaky or Failing Tests | The failing test file using @, plus the error output using Terminal. |
| Fix Accessibility Issues | A live URL using Paste URL, plus the relevant component files using @. |
Examples by capability
These examples show how to structure prompts for each built-in capability. Use them as templates and adapt the scope, constraints, and context to match your specific testing needs.
Generate test cases
From a requirement document:
Generate test cases for the attached PRD. Focus on the payment processing flow.
Include positive, negative, and edge case scenarios.
From a live website:
Explore example.com/products and generate test cases. Focus on search, filtering,
sorting, and pagination. Check for empty state handling.
Targeted scope:
Generate test cases for the email validation logic described in @src/validators/email.ts.
Cover valid formats, invalid formats, Unicode characters, and maximum length.
Dev testing
Focused validation:
Test my latest changes on localhost:3000. I modified the user profile form to add
phone number validation. Check that valid and invalid phone numbers are handled correctly.
Git-aware validation:
Validate the changes in my last 2 git commits on localhost:8080. Focus on the areas I modified.
Edge case testing:
Test the price range filter on localhost:3000/products. Check what happens when min
equals max, when both are zero, and when max is less than min.
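Spelling out those boundary combinations matters because each one can hide a distinct bug. As a sketch of the behavior such a prompt pins down (the `filterByPrice` function and its empty-range choice are assumptions, not the app's real logic):

```typescript
// Hypothetical filter mirroring the prompt's edge cases; the real
// behavior lives in the app under test.
function filterByPrice(prices: number[], min: number, max: number): number[] {
  // An inverted range (max < min) yields no matches here; whatever the
  // app actually does, the test should assert it explicitly.
  if (max < min) return [];
  return prices.filter((price) => price >= min && price <= max);
}

const prices = [0, 10, 25, 99];
console.log(filterByPrice(prices, 10, 10)); // min equals max → [10]
console.log(filterByPrice(prices, 0, 0));   // both zero → [0]
console.log(filterByPrice(prices, 50, 20)); // max less than min → []
```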
Automate tests
From a prompt:
Write a Playwright test that logs in to localhost:3000/login with username "admin"
and password "admin123", navigates to the settings page, and verifies the account
email is displayed.
From existing tests:
Convert the selected test cases to Cypress tests. Use the Page Object Model pattern.
Store page objects in tests/pages/.
API testing:
Generate API tests for POST /api/v1/users at localhost:3000. Test successful creation
with valid data, validation errors for missing fields, and duplicate email handling.
Use the Jest framework.
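The three scenarios in that prompt map to three status-code assertions. This in-memory sketch stands in for the real endpoint so the shape of those assertions is visible; the handler, field names, and status codes (201, 400, 409) are assumptions about a typical API, not the project's actual contract:

```typescript
// In-memory stand-in for POST /api/v1/users; the generated Jest tests
// would hit the real endpoint instead.
type UserInput = { email?: string; name?: string };
type Result = { status: number; body: unknown };

const existingEmails = new Set<string>();

function createUser(input: UserInput): Result {
  if (!input.email || !input.name) {
    return { status: 400, body: { error: "email and name are required" } };
  }
  if (existingEmails.has(input.email)) {
    return { status: 409, body: { error: "email already exists" } };
  }
  existingEmails.add(input.email);
  return { status: 201, body: { email: input.email, name: input.name } };
}

console.log(createUser({ email: "ada@example.com", name: "Ada" }).status); // 201 created
console.log(createUser({ name: "NoEmail" }).status);                       // 400 missing field
console.log(createUser({ email: "ada@example.com", name: "Dup" }).status); // 409 duplicate email
```

Naming each scenario in the prompt (creation, validation errors, duplicates) is what turns one vague request into three concrete tests.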
Fix failed tests
With error details:
Fix this failure:
TimeoutError: locator.click: Timeout 30000ms exceeded.
Waiting for selector '[data-testid="submit-btn"]'
The test file is @tests/e2e/checkout.spec.ts and the component is @src/components/Checkout.tsx.
With build context:
My CI build failed with 3 test errors. All are related to the updated API response format.
The new response wraps data in a `results` array instead of returning it at the top level.
Update the test assertions.
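Stating the format change precisely makes the fix mechanical: assertions stop indexing the response directly and read through `results` instead. The response shapes below are assumptions based on the description above, not the project's actual payloads:

```typescript
// Old format: data returned at the top level.
const oldResponse = [{ id: 1, name: "Widget" }];
// New format: data wrapped in a `results` array.
const newResponse = { results: [{ id: 1, name: "Widget" }] };

// Before: assertions read the array directly.
console.log(oldResponse[0].name); // "Widget"

// After: assertions unwrap `results` first.
console.log(newResponse.results[0].name); // "Widget"
console.log(newResponse.results.length);  // 1
```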
Fix accessibility violations
Full scan:
Scan localhost:3000 for all accessibility violations. Fix everything that is Critical or Serious.
Targeted scan:
Check the signup form at localhost:3000/register for accessibility issues.
Focus on form labels, error messages, and keyboard navigation.
Post-fix verification:
Re-scan localhost:3000/checkout. I applied your previous fixes and want to confirm
all issues are resolved.
Invoke custom agents
The examples above use free-form prompts that auto mode routes to a built-in capability. To invoke a custom agent you have created, type / followed by the agent name in the chat. A picker shows the agents available in the current scope. Select one and press Enter.
A slash invocation can carry the same context as a free-form prompt. Attach files, folders, git commits, or terminal output with + or @ before sending.
Run an agent with no extra context:
/pre-merge-check
Run an agent with extra context:
/deploy-staging @scripts/deploy-staging.sh
Run an agent with an inline override:
/regression-suite Skip the visual diff step this time and run the rest as usual.
Custom agents follow the same prompt principles as free-form prompts. The agent’s instructions provide the task structure. The context you attach provides the scope. Inline text after the slash command provides the constraints for that run.
Common prompt patterns
The patterns below recur across capabilities. Use them as a starting point when you are stuck on phrasing.
| Pattern | Example |
|---|---|
| Do X on Y | Generate test cases for the login page on example.com |
| Fix this [error] | Fix this timeout error in @tests/checkout.spec.ts |
| Convert these to [format] | Convert the selected manual tests to Playwright scripts |
| Scan [target] for [issues] | Scan localhost:3000 for color contrast violations |
| Verify that [condition] | Verify that the cart total updates when I change item quantity on localhost:3000/cart |
Next steps
- Web testing: Review all built-in capabilities and how auto mode routes between them.
- Agents: Create reusable, slash-invoked playbooks for tasks you run more than once.
- Configuration and preferences: Adjust interaction modes, approval controls, and other extension settings.