Testing on Virtual Machines (VM): Move to Real Device Testing
Shreya Bose, Technical Content Writer at BrowserStack - February 1, 2021
What is Virtual Machine Testing?
Virtual Machines (VMs) – most often emulators and simulators – are software programs that mimic devices other than the physical machine they run on, so that users (developers or testers) can check how a specific piece of software behaves on a particular device.
Virtual Machine Testing lets QAs emulate different devices with unique OSs on a single physical device. With sufficient resources, it is possible (though time-consuming and tedious) to create a virtual lab with multiple virtual machines.
Is Virtual Machine Testing enough?
In a word, no.
Let’s start by exploring how an emulator works.
Emulators mimic a specific device’s hardware and software on a tester’s workstation. The Android Emulator (bundled with Android Studio) is commonly used for this purpose.
Now, both desktop and mobile devices work on an ISA – Instruction Set Architecture. An ISA comprises the set of machine-language instructions that a device’s processor can decipher. The ISA differs across processors and processor families.
The emulator replicates the processor of the target device. Subsequently, it translates the ISA of the target device into one understood by the processor of the tester’s physical device (the one the emulator is running on). This process is called binary translation.
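To make the idea concrete, here is a toy Python sketch of binary translation: it rewrites instructions from a made-up "guest" (ARM-like) instruction set into a made-up "host" (x86-like) one. Real translators (such as the one inside the Android Emulator) operate on binary machine code and are vastly more complex; every mnemonic and mapping here is invented purely for illustration.

```python
# Toy illustration of binary translation: rewrite "guest" instructions
# (an invented ARM-like set) into "host" instructions (an invented
# x86-like set). Real translators operate on raw machine code.

# Invented mapping from guest mnemonics to host mnemonics.
GUEST_TO_HOST = {
    "LDR": ["mov"],    # load register      -> host move
    "ADD": ["add"],    # add                -> host add
    "CMP": ["cmp"],    # compare            -> host compare
    "BNE": ["jne"],    # branch if not eq   -> host jump if not eq
    "MUL": ["imul"],   # multiply           -> host signed multiply
}

def translate(guest_program):
    """Translate a list of guest instructions into host instructions.

    Each instruction is a (mnemonic, operands) tuple. An unknown
    mnemonic raises an error, mimicking an unsupported-instruction trap.
    """
    host_program = []
    for mnemonic, operands in guest_program:
        try:
            host_ops = GUEST_TO_HOST[mnemonic]
        except KeyError:
            raise ValueError(f"unsupported guest instruction: {mnemonic}")
        for host_op in host_ops:
            host_program.append((host_op, operands))
    return host_program

# A tiny guest program and its host translation:
guest = [("LDR", "r0, [r1]"), ("ADD", "r0, r0, #1"), ("CMP", "r0, #10")]
print(translate(guest))
# -> [('mov', 'r0, [r1]'), ('add', 'r0, r0, #1'), ('cmp', 'r0, #10')]
```

The per-instruction lookup and rewrite loop is where the performance cost comes from: every guest instruction must be inspected and re-emitted before the host processor can execute anything.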
Ideally, binary translation should create VMs that offer near-native capabilities, including the target device’s physical sensors, battery behavior, location, etc. The reality, however, tends to be different.
Emulators entail considerable performance overhead under binary translation. If the target device uses the same ISA as the tester’s workstation, no binary translation is required. But the ISAs tend not to match since commercially sold mobile devices run on ARM (Advanced RISC Machines) architecture. Computers (the tester’s workstation here) usually run on Intel x86. Their ISAs are fundamentally different.
It is possible to facilitate binary translation via hardware acceleration, but the latter is relatively complicated to implement, even for experienced devs and QAs. The effort expended is also not worth the result, especially when it is far easier and faster to test the app on real devices.
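The ISA-mismatch check described above can be sketched in a few lines of Python using the standard-library `platform` module. The grouping of machine strings into "ARM" and "x86" families below is an illustrative assumption, not an exhaustive list:

```python
import platform

# Common machine strings as reported by platform.machine(); this
# grouping is an illustrative assumption, not an exhaustive list.
ARM_ARCHES = {"arm64", "aarch64", "armv7l"}
X86_ARCHES = {"x86_64", "amd64", "i386", "i686"}

def needs_binary_translation(target_arch, host_arch=None):
    """Return True if emulating `target_arch` on this host would
    require binary translation (i.e. the ISA families differ)."""
    host_arch = (host_arch or platform.machine()).lower()
    target_arch = target_arch.lower()

    def family(arch):
        if arch in ARM_ARCHES:
            return "arm"
        if arch in X86_ARCHES:
            return "x86"
        return arch  # unknown family: compare raw strings

    return family(host_arch) != family(target_arch)

# An ARM phone image on a typical x86_64 workstation needs translation...
print(needs_binary_translation("arm64", host_arch="x86_64"))   # True
# ...but not on a host from the same ISA family.
print(needs_binary_translation("arm64", host_arch="aarch64"))  # False
```

This is exactly the situation the paragraph describes: the typical ARM-target-on-x86-host combination lands in the slow, translated path.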
Now, on to simulators.
The purpose of a simulator is to allow testers/devs to run software not meant for their workstation’s OS. For example, think of an iPhone or iPad simulator running in Xcode.
An iOS simulator runs over the physical device’s OS, mimics iOS, and runs an app within its ecosystem. Testers interact with the app and the simulator via a screen window that resembles an iPhone/iPad.
While iOS simulators are faster than Android emulators because the former do not need binary translation to work, they still have serious limitations. Simulators cannot replicate battery behavior or interruptions in software function caused by incoming calls, messages, etc. Additionally, the iOS simulator doesn’t run on any platform other than macOS, because it depends on the massive Cocoa framework library (Apple’s API) to handle basics like the GUI and runtime.
If the tester isn’t using macOS, porting Cocoa to their platform is fairly effort-intensive. Once again, it is pointless to expend so much time and effort when one can simply access real iOS and macOS devices on the cloud and get accurate test results every time.
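For context on what working with the iOS simulator looks like in practice, Apple ships a command-line tool, `simctl` (invoked via `xcrun`, bundled with Xcode), for booting simulators and installing apps. The sketch below composes those invocations as Python argument lists rather than running them; the device name, app bundle path, and bundle identifier are placeholders:

```python
# Sketch: composing `xcrun simctl` invocations (Apple's simulator CLI,
# bundled with Xcode) as argument lists for subprocess. The device
# name, .app path, and bundle ID below are placeholders.

def simctl(*args):
    """Build an `xcrun simctl` argument list suitable for subprocess.run."""
    return ["xcrun", "simctl", *args]

boot_cmd = simctl("boot", "iPhone 14")                   # start a simulator
install_cmd = simctl("install", "booted", "MyApp.app")   # install a build
launch_cmd = simctl("launch", "booted", "com.example.myapp")

print(boot_cmd)
# -> ['xcrun', 'simctl', 'boot', 'iPhone 14']

# To actually run one of these (macOS with Xcode only):
#   import subprocess
#   subprocess.run(boot_cmd, check=True)
```

Note that even with this tooling, everything runs inside the simulated environment on macOS – none of it exercises real device hardware.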
Virtual Machines also have a set of more general shortcomings:
- The VM will always be less powerful than the physical device it runs on. It will be slower, have worse graphics, less RAM, and much less storage.
- The VM will not be on the same network as the workstation. It can be set up to share files over LAN by installing special software, but the setup tends to be unstable.
- VMs access hardware indirectly because they run software on top of the host OS. They have to request hardware access from the host, which slows down usability.
- If multiple VMs are running on a single host (as they will in parallel testing), their performance will be set back if the device isn’t sufficiently powerful. Every VM runs on the host machine’s resources; thus, the VM’s performance depends on the nature of the host machine.
- Inadequacies and defects of the host machine can infect the VMs running on it.
Naturally, with the weaknesses detailed above, VM testing is bound to provide inconsistent, inconclusive, and unreliable results. VM testing is simply not sufficient for rolling out market-ready apps in a digital market where the slightest error or inconvenience in user experience can lead to app uninstallation.
The Alternative: Real Device Testing
The only way to accurately monitor software performance is to run tests on real devices – mobile and desktop, depending on the software under test. No test run on a VM, compatibility and performance tests included, can offer conclusive results.
Take the following example. A test run on an iOS simulator returns a positive result for a feature that depends on real hardware – say, voice recognition. Taking the result as conclusive, a developer continues to build out the entire feature. If the app does not undergo real device testing before being pushed to prod, bugs the simulator could never surface will escape QA teams and disrupt user functionality and experience.
One can certainly set up an on-premise device lab. But, given the diversity of mobile and desktop devices used across the world, it will take serious investment – both financial and human effort – to set up a lab that can offer sufficient device coverage for global or even regional software.
An easier and more cost-effective solution is to use a real device cloud like the one provided by BrowserStack.
BrowserStack provides cloud-based access to a vast repository of real devices. These devices range across multiple manufacturers, models and versions. The device centers are frequently updated with the latest devices, so testers can monitor software on devices customers are most likely to use.
Below are a few unique features of the BrowserStack cloud:
- The interface is handy for testing responsive design. Users can test layouts and designs on 2000+ device-browser combinations. With a single click, test how a website appears on multiple screen sizes and resolutions. Generate screenshots on every device, thus recording software performance on multiple endpoints.
- When testing code on internal and private servers, use the Local Testing feature. The BrowserStack cloud provides support for firewalls, proxies, and Active Directory. It establishes a secure connection between a developer’s machine and BrowserStack servers. Once Local Testing is initiated, all URLs work out of the box, including those with HTTPS, multiple domains, and those behind a proxy or firewall.
- Automated Selenium testing is easy on BrowserStack’s cloud Selenium grid of 2000+ browsers and real devices. The grid facilitates parallel testing, speeding up builds and resulting in faster releases. With pre-built integrations across 20+ programming languages and frameworks, Automate fits easily into existing CI/CD workflows by providing plugins for all major CI/CD platforms.
- Visual testing is also easy to execute on the BrowserStack cloud via Percy by BrowserStack. It captures screenshots, compares them against the baseline images, and highlights visual changes. With increased visual coverage, teams can deploy code changes with confidence with every commit. With Percy, testers can increase visual coverage across the entire UI and eliminate the risk of shipping visual bugs.
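As a taste of what pointing an automated test at a cloud device grid involves, here is a hedged Python sketch of a Selenium session aimed at BrowserStack’s hub. The `bstack:options` capability and hub URL follow BrowserStack’s documented W3C-capabilities format, but the credentials, session name, and device name are placeholders – consult BrowserStack’s own capability builder for exact values:

```python
# Sketch of targeting BrowserStack's cloud Selenium grid.
# Credentials, session name, and device name are placeholders.

def browserstack_capabilities(user, key, device=None, browser="Chrome"):
    """Build the W3C capabilities dict BrowserStack's hub expects."""
    caps = {
        "browserName": browser,
        "bstack:options": {
            "userName": user,            # placeholder credential
            "accessKey": key,            # placeholder credential
            "sessionName": "smoke test", # placeholder session label
        },
    }
    if device:  # request a real mobile device instead of a desktop browser
        caps["bstack:options"]["deviceName"] = device
    return caps

caps = browserstack_capabilities("USER", "KEY", device="iPhone 14")
print(caps["bstack:options"]["deviceName"])  # -> iPhone 14

# With the `selenium` package installed, the remote session is opened
# roughly like so (requires a BrowserStack account):
#   from selenium import webdriver
#   options = webdriver.ChromeOptions()
#   for name, value in caps.items():
#       options.set_capability(name, value)
#   driver = webdriver.Remote(
#       command_executor="https://hub-cloud.browserstack.com/wd/hub",
#       options=options)
```

The key point is that the test script itself is ordinary Selenium code; only the remote endpoint and the capabilities dict change when moving from a local browser to real devices in the cloud.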
Using an online virtual machine for testing in 2021 is not adequate to ensure optimal software quality and maintain a competitive advantage in a dog-eat-dog online market. Real device testing is the only way to accomplish this. With the BrowserStack real device cloud, testers can do this with minimal hassle and roll out top-shelf software faster than ever before.