Visual Testing: A Beginner’s Guide
Shreya Bose, Technical Content Writer at BrowserStack - July 29, 2020
What is Visual Testing?
Take a moment to imagine something.
A new insurance company website has been created – with the intention of improving user experience by letting customers file claims online. Since paper-based systems tend to be inefficient and costly, the online method also aims to attract new customers and lower expenses.
Now, all automated tests show that the site is working perfectly, but once it actually goes live, things don’t work out so well. Customers try to file a claim by entering all the necessary data, but can’t find the Submit button when using a mobile browser. On investigation, it turns out that the button is covered by a text box because of the mobile device’s reduced screen space. This went undetected in the testing phase and became a significant hurdle to the user experience.
Visual Testing, sometimes called visual UI testing, is the answer to such problems. It verifies that the software user interface (UI) appears correctly to all users. Essentially, visual tests check that each element on a web page appears in the right shape, size, and position. It also checks that these elements appear and function perfectly on a variety of devices and browsers. In other words, visual testing factors in how multiple environments, screen sizes, OSes, and other variables will affect software.
How Does Visual Testing Work?
Visual tests generate, analyze, and compare browser snapshots to detect if any pixels have changed. These pixel differences are called visual diffs (sometimes called perceptual diffs, pdiffs, CSS diffs, UI diffs).
For visual testing, QAs need –
- A test runner to write and run tests
- A browser automation framework to replicate user actions
Developers create code that replicates user actions (typing text into a field, for example). At appropriate points in the code, they place commands to capture a screenshot. The first time the test code is executed, an initial set of screenshots is recorded – to act as a baseline against which all further changes will be compared.
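The capture-the-baseline step can be sketched in a few lines of Python. Everything here is hypothetical: `FakeDriver` stands in for a real browser automation framework (such as Selenium’s WebDriver), and a “screenshot” is simplified to a 2D list of pixel values stored in an in-memory dictionary rather than on disk.

```python
# Hypothetical stand-in for a real browser driver; a "screenshot"
# here is just a 2D list of pixel values.
class FakeDriver:
    def __init__(self):
        self.page = [[0] * 4 for _ in range(4)]  # blank 4x4 "page"

    def type_into_field(self, text):
        # Simulate a user action that changes what is rendered.
        self.page[0][0] = len(text)

    def screenshot(self):
        # Return a copy of the current rendered state.
        return [row[:] for row in self.page]

baselines = {}  # a real tool would persist these images

def capture_baseline(name, screenshot):
    """Record the screenshot as the baseline the first time the test
    runs; later runs return the stored image for comparison instead."""
    if name not in baselines:
        baselines[name] = screenshot
    return baselines[name]

driver = FakeDriver()
driver.type_into_field("John Doe")   # replicate a user action
shot = driver.screenshot()           # capture at a chosen point
baseline = capture_baseline("claim_form", shot)
```

On the first run, the captured screenshot simply becomes the baseline; every later run retrieves the stored image so the new screenshot can be diffed against it.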
After the baseline is set, the QA runs the test code in the background. Each time the test reaches a capture command, a new screenshot is taken and compared to the baseline image corresponding to that particular section of the code. If differences are found between the images, the test is considered failed.
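The comparison step above can be sketched as a pixel-by-pixel diff with a failure threshold. This is a deliberate simplification – production tools typically use perceptual comparison algorithms rather than raw pixel equality – and the function names and tolerance parameter are illustrative, not from any specific library.

```python
def visual_diff(baseline, current):
    """Count the pixels that differ between two equally sized
    screenshots (each a 2D list of pixel values)."""
    diffs = 0
    for base_row, cur_row in zip(baseline, current):
        for base_px, cur_px in zip(base_row, cur_row):
            if base_px != cur_px:
                diffs += 1
    return diffs

def assert_matches_baseline(baseline, current, tolerance=0):
    """Fail the test if more pixels changed than the tolerance allows."""
    changed = visual_diff(baseline, current)
    if changed > tolerance:
        raise AssertionError(f"{changed} pixels differ from the baseline")

baseline = [[255, 255], [255, 255]]
current  = [[255, 255], [255, 0]]   # one pixel changed, e.g. a shifted button
print(visual_diff(baseline, current))  # -> 1
```

A tolerance of zero flags any change at all; real tools usually allow a small threshold so that anti-aliasing or font-rendering noise does not fail the test.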
Once the test code runs completely, a report is generated automatically. A QA reviews all the images that have been flagged as changed from their baseline. Some testing tools also generate reports that visually highlight the differences between baseline and final images detected during test execution.
If these image differences are caused by bugs, developers can fix them and rerun the test to check that the fixes actually worked. If the differences are caused by intentional changes to the UI, developers should review the new screenshots and update the baseline images against which future visual tests will run.
Visual Testing vs. Functional Testing
Most software tests are designed to verify that various aspects of software function are working as they are meant to. However, these tests don’t usually cover visual aspects of the software, which are extremely important from the users’ perspective.
In the example at the beginning of the article, functional tests would have shown that the website was working perfectly, including the Submit button. But in the real world, the users were facing a major issue and couldn’t execute one of the main functions the website was actually built for.
Visual testing fills the gaps functional testing leaves on the visual side of things. In tandem with functional tests, it ensures that the software not only works as intended but also looks as intended.
Visual Testing vs. Manual QA
Obviously, every website can be checked for visual discrepancies by manual testers. But given that modern websites often contain hundreds of web pages and thousands of web elements, the time, manpower, and effort required to cover the entire website would be astronomical.
As mentioned earlier, all visual test reports still have to be reviewed by human testers. Supplementing manual QA with automated visual tests makes the process more efficient and helps sustain the visual integrity of the software.
Visual Testing Tools
Finally, one must have a tool for managing the testing process. Percy by BrowserStack is one of the best-known tools for automating visual testing. It captures screenshots, compares them against the baseline images, and highlights visual changes. With increased visual coverage, teams can deploy code changes with confidence on every commit.
With Percy, testers can increase visual coverage across the entire UI and eliminate the risk of shipping visual bugs. They can avoid false positives and get quick, deterministic results with reliable regression testing. They can release software faster with DOM snapshotting and advanced parallelization capabilities designed to execute complex test suites at scale.
Percy’s SDKs are easy to install and make it straightforward to add snapshot commands to existing tests. Percy also integrates with CI/CD pipelines and source code management systems so that visual tests run on each code commit. After tests are initiated, Percy captures screenshots, identifies visual changes, and informs the testing team of all changes. Testers can then review the visual diffs to decide whether the changes are legitimate or require fixing.
By incorporating visual testing into the testing process, testers will be able to deliver visually perfect applications quickly and consistently. It also helps them create websites that maintain visual consistency across the entire UI by offering comprehensive visual reviews and accurate results. Looks do matter, especially when users have thousands of other websites to choose from. Make your site look good and work well, and you will keep your visitors happy.