Part 2 – Introduction to Types of Testing
By Kalpesh Doshi, Director of Product Management at BrowserStack - August 26, 2019
The first post in this 5-part series explained the basics of the Software Development Process; this second post focuses on the different types of testing methodologies.
Developers and QA perform different types of testing to ensure that the software they develop meets the requirements and expectations. Below are some types of testing performed during the SDLC.
Functional Testing
This category comprises system functionality-centric tests. In other words, these tests make sure the system works as expected.
Unit Testing
Unit testing is done mainly by developers themselves to test the code they have written. These tests are not restricted to any specific user scenarios and tend to be highly specific.
Example: A developer has scripted a password input text field with its validation rules (at least 8 characters long, must contain special characters). The developer writes unit tests for this one specific text field, e.g., a test that inputs only 7 characters, one with no special characters, and one with an empty field.
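A minimal sketch of such unit tests in Python; the `validate_password` function and its exact rules are hypothetical stand-ins for the developer's code:

```python
import re

def validate_password(password: str) -> bool:
    """Hypothetical validation rule: at least 8 characters and
    at least one special (non-alphanumeric) character."""
    return len(password) >= 8 and bool(re.search(r"[^a-zA-Z0-9]", password))

# One check per failure mode described above, plus a passing case.
assert not validate_password("Ab1!x7c")   # only 7 characters
assert not validate_password("Abcdefg1")  # 8 characters, no special character
assert not validate_password("")          # empty field
assert validate_password("Abcdef1!")      # satisfies both rules
```

Each assertion targets exactly one rule of the field, which is what makes these tests "highly specific".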
Integration Testing
This test is performed after assembling pieces of code written by different developers. Depending on the team structure, this set of tests can be executed by either testers or developers. Integration testing ensures that individual code units can work together cohesively as a whole.
Example: Similar to the previous example, the tester performs a test and ensures that the password field works well along with other fields on the same page without any errors. The page may contain elements that other developers have coded, which is why this test is crucial.
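A sketch of that idea: two field validators, imagined as written by different developers, are exercised together through a page-level form check. All names here are hypothetical:

```python
# Hypothetical field validators from two different developers.
def validate_username(username: str) -> bool:
    return username.isalnum() and 3 <= len(username) <= 20

def validate_password(password: str) -> bool:
    return len(password) >= 8 and any(not c.isalnum() for c in password)

def validate_signup_form(form: dict) -> list:
    """Integration point: the page-level check combining both fields."""
    errors = []
    if not validate_username(form.get("username", "")):
        errors.append("username")
    if not validate_password(form.get("password", "")):
        errors.append("password")
    return errors

# Integration tests: the fields behave correctly together on one page.
assert validate_signup_form({"username": "alice", "password": "s3cret!pw"}) == []
assert validate_signup_form({"username": "a", "password": "s3cret!pw"}) == ["username"]
assert validate_signup_form({"username": "alice", "password": "short"}) == ["password"]
```

The unit tests from the previous section cover each validator in isolation; these tests cover the seam where they meet.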
System Testing
Testers/QA professionals mainly perform this testing. It revolves around use cases and functional system requirements. Testers generally have no knowledge of the underlying code, which is why this is also known as Black Box Testing.
Example: A tester writes a test case to ensure that the password is accurately saved in the database after it is created. This verifies that the code written by the developer integrates correctly with the system's database.
End to End Testing
These tests are all from a user’s perspective and involve typical workflows that all types of users go through while using the product. Testers / QA professionals run these.
Example: A tester writes test cases to enter usernames and passwords in both valid and invalid combinations, and then verifies the system's database to check for logged-in session IDs.
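The login workflow above can be sketched end to end against a fake application; `FakeAuthApp`, its user table, and its session store are all hypothetical stand-ins for the real system and database:

```python
import uuid

class FakeAuthApp:
    """Hypothetical stand-in for the system under test: a login flow
    backed by an in-memory user table and session store."""
    def __init__(self):
        self.users = {"alice": "s3cret!pw"}
        self.sessions = {}  # session_id -> username

    def login(self, username: str, password: str):
        if self.users.get(username) == password:
            session_id = str(uuid.uuid4())
            self.sessions[session_id] = username
            return session_id
        return None

app = FakeAuthApp()

# Valid credentials: a session ID is created and recorded in the "database".
sid = app.login("alice", "s3cret!pw")
assert sid is not None and app.sessions[sid] == "alice"

# Invalid combinations: no session is created.
assert app.login("alice", "wrong") is None
assert app.login("mallory", "s3cret!pw") is None
assert len(app.sessions) == 1
```

A real end-to-end test would drive the actual UI and query the actual database, but the shape of the checks is the same.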
Exploratory Testing
Exploratory Testing refers to informal testing performed by the testing team. Its primary objective is to explore the application and discover the bugs existing in it. During exploratory testing, it is recommended to keep track of the testing flow and the activities performed. An exploratory testing technique does not require documentation or test cases.
Graphical User Interface (GUI) / Visual Regression testing
The objective of GUI testing is to validate the GUI against business expectations; the expected GUI is usually specified in the Detailed Design Document. GUI testing focuses on verifying the size of buttons and input fields, the alignment of text, and the layout and content of tables. It also validates the menus of the application: after selecting different menus and menu items, it ensures that the page layout does not shift and that alignment remains the same when hovering the mouse over a menu or sub-menu.
Monkey Testing
Monkey testing is performed by a tester entering random input values without any knowledge or understanding of the application, much as a monkey might if it were using the application. The objective of monkey testing is to verify whether an application crashes when given random input values. No special test cases are written for monkey testing.
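A tiny sketch of automated monkey testing: feed a code path random garbage and assert only that it never raises. The `validate_password` function is a hypothetical stand-in for the code under test:

```python
import random
import string

def validate_password(password: str) -> bool:
    """Hypothetical code under test."""
    return len(password) >= 8 and any(not c.isalnum() for c in password)

random.seed(42)  # make the "monkey" reproducible

# Feed 1,000 random strings of random lengths; the only expectation
# is that the code never raises an exception (i.e., never "crashes").
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 50)))
    try:
        validate_password(junk)
    except Exception as exc:
        raise AssertionError(f"crashed on input {junk!r}") from exc
```

Note there are no expectations about the return value, only about survival, which matches the goal of monkey testing.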
Non-Functional / Specialty Testing
This category comprises tests that are not functionality-focused. It covers the environment surrounding the product as well as factors indirectly related to the product.
Load Testing
Load testing verifies the maximum load a system can handle, whether measured in simultaneous logged-in users, simultaneous actions per second, or the maximum number of concurrent database queries. These tests usually require specialized software and are performed by QA professionals proficient in it.
Example: 5,000 simultaneous system log-ins
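Real load tests use dedicated tooling, but the idea can be sketched with a thread pool firing 5,000 log-ins at a hypothetical service and checking that none are lost:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class LoginService:
    """Hypothetical service under load; a lock guards the session counter."""
    def __init__(self):
        self._lock = threading.Lock()
        self.active_sessions = 0

    def login(self):
        with self._lock:
            self.active_sessions += 1

service = LoginService()

# Simulate 5,000 log-ins across 100 concurrent workers.
with ThreadPoolExecutor(max_workers=100) as pool:
    for _ in range(5000):
        pool.submit(service.login)

# All 5,000 simulated log-ins should have been handled without loss.
assert service.active_sessions == 5000
```

A real load test would measure latency and error rates under this concurrency rather than a simple counter.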
Security Testing
This set of tests validates the security of the product. It includes checks on HTTP endpoints and database connections to ensure that only authenticated users are making such connections. These tests usually require specialized software and are run by QA professionals proficient in it.
Example: tester checks to ensure the session has terminated after a logout
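That logout check can be sketched against a hypothetical in-memory session store (`SessionStore` and its methods are illustrative, not a real API):

```python
class SessionStore:
    """Hypothetical session handling for the system under test."""
    def __init__(self):
        self._sessions = set()

    def login(self, session_id: str):
        self._sessions.add(session_id)

    def logout(self, session_id: str):
        self._sessions.discard(session_id)

    def is_active(self, session_id: str) -> bool:
        return session_id in self._sessions

store = SessionStore()
store.login("sess-123")
assert store.is_active("sess-123")

store.logout("sess-123")
# Security check: a logged-out session must no longer be accepted.
assert not store.is_active("sess-123")
```

Against a real system, the same check would replay the old session token on a protected endpoint and expect a rejection.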
Performance Testing
This set of tests evaluates the performance of the product and ensures that it meets expected benchmark requirements. These tests usually require specialized software and are performed by QA professionals proficient in it.
Example 1: measuring the time it takes to log in to the product or to retrieve data from it.
Example 2: A tester sets a time budget for the loading of the login function and verifies that it fits within the requirement.
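A minimal sketch of such a timing check; the login function and the 0.5-second budget are both assumed for illustration:

```python
import time

def login():
    """Hypothetical login function; sleep stands in for real work."""
    time.sleep(0.05)

BUDGET_SECONDS = 0.5  # assumed benchmark taken from the requirements

start = time.perf_counter()
login()
elapsed = time.perf_counter() - start

# The performance requirement: login must complete within the budget.
assert elapsed <= BUDGET_SECONDS, f"login took {elapsed:.3f}s, over budget"
```

Real performance tests repeat the measurement many times and compare percentiles (e.g., p95) against the benchmark, not a single run.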
Compatibility Testing
This testing ensures that software applications perform well on different hardware, operating systems, and network environments. A prevalent type of compatibility test is Cross-Browser testing. These tests usually require specialized software and are run by QA professionals proficient in it.
Example: A tester starts a virtual machine with a different browser installed and runs a set of functional tests in it.
Test Suites
A Test Suite comprises all the types of test cases discussed above. The test strategy determines which cases are to be executed and when. Test suites are often extensive (5,000+ test cases), so additional constraints are needed to determine which test cases should be executed. These tests are always under the purview of testers/QA professionals.
- Smoke Test: A set of test cases that determines stability; the intention is to ensure there are no show-stopper bugs.
- Sanity Test: A very short/quick set of tests to ensure that the environment (any hardware, network, external systems) is ready before extensive testing.
- Regression Test: A more comprehensive set of test cases that proves a new release has not broken existing functionality. Interested in running faster Regression Testing? Read this 3-part guide for Faster Regression Testing.
- Acceptance Test: A set of test cases that provides a benchmark by which all stakeholders deem a release ready for production.
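One common way to implement this selection is to tag each test case with the suites it belongs to and let the test strategy pick by tag (pytest markers work the same way). The registry and case names below are hypothetical:

```python
# Hypothetical registry of test cases, each tagged with the suites it belongs to.
TEST_CASES = {
    "test_login":            {"smoke", "regression", "acceptance"},
    "test_logout":           {"smoke", "regression"},
    "test_password_reset":   {"regression"},
    "test_env_db_reachable": {"sanity"},
}

def select(suite: str) -> list:
    """The test strategy: pick only the cases tagged for the requested suite."""
    return sorted(name for name, tags in TEST_CASES.items() if suite in tags)

# A smoke run is small; a regression run is the comprehensive superset.
assert select("smoke") == ["test_login", "test_logout"]
assert select("sanity") == ["test_env_db_reachable"]
assert "test_password_reset" in select("regression")
```

This keeps a single 5,000-case suite manageable: the same cases are reused, and only the tag filter changes per run.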
Manual Testing vs Automated Testing
It is essential to map out which test cases will be tested manually and which will be done via Automation Testing. Read this interesting article on manual vs. automated testing to understand the difference between the two.
In case you missed it, read the first post on Understanding the Basics of the Software Development Process.
Stay tuned for the third part of this series.