Refine your test cases with iterative prompting
Use iterative prompting to refine AI-generated test cases in a conversational workflow, allowing you to select, modify, add, or reject test cases over multiple rounds for a tailored final output.
Generating perfect test cases from a single prompt or document is difficult. You often get results that are close, but you still have to edit them manually or start over with a brand-new prompt.
Iterative prompting solves this by letting you have a conversation with the AI. You can provide follow-up prompts to modify, reformat, or add to the test cases the AI just generated.
How to refine your test cases
The process is a simple loop:
- Generate
- Review
- Refine
After you generate your first set of test cases from a document, your test case list becomes interactive. You now have new options to guide the AI for the next prompt.
Step 1: Generate your first test cases
Follow this documentation to generate your test cases. Provide your PRD, image, or other requirements to the Test Case Generator agent.
Step 2: Review and categorize your results
This is the most important step. Before you write your next prompt, you must tell the AI what you think of its work. For each generated test case, you can:
- Select test cases (for modification): Click the checkbox next to the test cases you want to change. Your next prompt will only apply to these selected test cases.
- Add: Click the Add icon for test cases that you want to retain. The AI saves these, protects them from future changes, and avoids creating duplicates of them later.
- Reject: Click the Reject icon for test cases that are irrelevant or wrong. The AI learns from this and will not generate similar, irrelevant test cases again.
Tests that you do not select, accept, or reject will be carried over unchanged to the next iteration.
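The carry-over rules above can be sketched as a small state model. This is a conceptual illustration only; the function and parameter names are hypothetical and are not the tool's API:

```python
# Conceptual sketch of how review actions shape the next iteration.
# All names here are hypothetical illustrations, not the product's API.

def next_iteration(test_cases, selected, accepted, rejected, regenerate):
    """Apply the review rules to produce the next test case list.

    - rejected cases are dropped (and the AI avoids similar ones later)
    - selected cases are rewritten by the refinement prompt
    - accepted and untouched cases are carried over unchanged
    """
    result = []
    for case in test_cases:
        if case in rejected:
            continue                         # dropped from future output
        if case in selected:
            result.append(regenerate(case))  # only selected cases change
        else:
            result.append(case)              # accepted + untouched carry over
    return result
```

For example, selecting a login test while accepting a logout test and rejecting a payment test would rewrite only the login test and drop the payment test, leaving the rest untouched.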

Step 3: Write your refinement prompt
- You can only use one integration (like Jira, Confluence, or Azure) in a single generation journey. After you generate test cases from a source, you cannot add more from that same source or switch to another integration within that same cycle.
- You can attach up to 5 files to provide context to the AI.
Now, write a new prompt in the Refine and Add Test Cases field (see annotation 1). The AI will read this prompt along with the context of which test cases you have selected, accepted, and rejected.

You can add new files (like a Confluence page or Jira ticket) or just provide a simple text command.
Click the Generate icon again (see annotation 2). The AI processes your new request and updates the test case list. You can repeat this review-and-refine loop as many times as you need.
Examples of what you can do with iterative prompting
Here are common use cases for your follow-up prompts.
1. To modify specific tests
This is for when you like the idea of a test but want to change how it is written.
- Select the test cases you want to modify.
Example prompts:
- Change the tonality of the selected test cases to be more formal.
- Add edge cases for invalid inputs to the selected ‘Login’ test.
2. To add new test cases
This is for when the first generation was good but missed some functional areas.
- Accept the test cases you like. Do not select anything.
Example prompts:
- Now generate new test cases for the ‘Forgot Password’ flow.
- Add negative test cases for the payment page.
- Now add accessibility-focused test cases for the dashboard.
3. To add more context
This is for when you need to add new information to the AI’s memory.
- Accept the test cases you like.
- Attach a new file, Jira ticket, or Confluence link, then write a prompt.
Example prompts:
- Using the attached Jira ticket, add new test cases for the mobile platform.