Manual testing may take a few days. Automated testing can take a few hours. But to ship quality releases multiple times a day, you need reliable feedback in minutes.

That’s why the world’s best teams rely on continuous testing and parallelization. It enables them to run days' worth of tests in less than an hour, speeding up their releases by 10x or more. All it takes is one test suite structured for parallel execution and infrastructure stable enough to handle the job.

In this webinar, David Burns, core contributor to Selenium and co-editor of the W3C WebDriver specification, explains how to get started with parallelization for test automation.

Along the way, David fielded questions about driver instantiation, parallel execution, frameworks, and best practices. Here's a roundup of his answers:

What are the parameters we need to consider in the initial stages of parallel testing?

The main consideration to keep in mind is that each test must be able to run on its own. Beyond that, it is a good idea to have every test start from a known place, perform one small task, and return to the original state, and to follow that pattern all the way through the suite.
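A minimal JUnit 5 and Selenium sketch of that shape might look like this (assuming Selenium and a ChromeDriver binary are available; the URL and element IDs are hypothetical):

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertTrue;

class LoginTest {

    private WebDriver driver;

    @BeforeEach
    void startFromKnownPlace() {
        // Each test gets its own browser and begins at the same known URL,
        // so no test depends on what another test did before it.
        driver = new ChromeDriver();
        driver.get("https://example.com/login"); // hypothetical URL
    }

    @Test
    void performsOneSmallTask() {
        // One small task: log in and assert the outcome, nothing more.
        driver.findElement(By.id("username")).sendKeys("demo");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("submit")).click();
        assertTrue(driver.findElement(By.id("welcome")).isDisplayed());
    }

    @AfterEach
    void returnToOriginalState() {
        // Quitting the browser returns the environment to its original state.
        driver.quit();
    }
}
```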

Should parallel execution be considered from the beginning of test development or later, once we have some tests in place?

Parallel execution should be considered at the beginning, in the middle, and at the end of every test development cycle. It is an ongoing practice that needs regular upkeep. I like the idea of, wherever possible, running tests in a sort of 'chaos' mode, where you run them in random order.
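In JUnit 5, for instance, one way to approximate that 'chaos' mode is to shuffle the method order on every run (a minimal sketch; the test names are placeholders):

```java
import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

// Run this class's test methods in a random order on every run,
// so hidden dependencies between tests surface early.
@TestMethodOrder(MethodOrderer.Random.class)
class ChaosOrderTest {

    @Test
    void addsItemToCart() { /* ... */ }

    @Test
    void removesItemFromCart() { /* ... */ }

    @Test
    void checksOut() { /* ... */ }
}
```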

What's your recommendation for driver instantiation and quitting when running in parallel?

This varies case by case. In the past, I have instantiated a browser before each test, and I have also done it once at the beginning of each class. You need to balance the cost of starting up a browser against the time it takes to clean one, so that state doesn't bleed through into the tests that follow.
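Here is a sketch of the per-class approach in JUnit 5 and Selenium, with a per-test reset so state can't leak between tests (the URL is hypothetical):

```java
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class SharedBrowserTest {

    private static WebDriver driver;

    @BeforeAll
    static void startBrowserOnce() {
        // One browser for the whole class: cheaper startup,
        // but every test must be left with a clean slate.
        driver = new ChromeDriver();
    }

    @BeforeEach
    void resetState() {
        // Return to a known page and clear cookies so state
        // doesn't bleed through into the tests that follow.
        driver.manage().deleteAllCookies();
        driver.get("https://example.com"); // hypothetical URL
    }

    @Test
    void firstTest() { /* ... */ }

    @AfterAll
    static void quitBrowser() {
        driver.quit();
    }
}
```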

When testing in parallel against a web application, how do you ensure the different test instances do not compete against one another? How can I set up my test infrastructure to avoid this sort of collision?

Wherever possible, try not to use real databases; use in-memory databases for UI testing instead. The main thing is to limit interactions with the disk and with other services. For example, with Django, run your tests against an in-memory SQLite database instead of MySQL, and spin up multiple instances of your web server for the tests to use.
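David's example is Django-specific, but the same idea translates to other stacks. Here is a sketch in Java using an H2 in-memory database (assumes the H2 driver is on the classpath; the worker.id property is a hypothetical way to give each parallel worker its own isolated database):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryDbExample {
    public static void main(String[] args) throws Exception {
        // Each parallel worker gets its own private in-memory database,
        // so tests never compete for the same rows on disk.
        String workerId = System.getProperty("worker.id", "1"); // hypothetical property
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:mem:testdb_" + workerId)) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE users (id INT, name VARCHAR(50))");
                st.execute("INSERT INTO users VALUES (1, 'demo')");
                try (ResultSet rs = st.executeQuery("SELECT name FROM users")) {
                    rs.next();
                    System.out.println(rs.getString("name")); // prints "demo"
                }
            }
        }
    }
}
```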

Is parallel execution dependent on the automation framework? Which framework works best with Java?

In most cases, yes. You need a test runner that can spread the load once it works out what needs running. For Java, I recommend JUnit, but I know others recommend TestNG.
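In JUnit 5, for example, parallel execution is switched on with a platform property and can then be opted into per class (a minimal sketch):

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;

// Requires junit.jupiter.execution.parallel.enabled=true in
// junit-platform.properties; this class's tests then run concurrently.
@Execution(ExecutionMode.CONCURRENT)
class ParallelSuiteTest {

    @Test
    void testOne() { /* ... */ }

    @Test
    void testTwo() { /* ... */ }
}
```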

What is the best way to run parallel tests with a mocked API?

Running tests in parallel against a mocked API should work exactly the same way as running any other tests in parallel. If it doesn't, it is worth checking whether you created a new instance of the mock inside a test instead of reusing an existing reference.
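As an illustration with WireMock in Java, the mock server is started once and the same reference is reused across tests (a sketch, assuming the wiremock dependency is available):

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

class MockedApiTest {

    private static WireMockServer mockApi;

    @BeforeAll
    static void startMockOnce() {
        // Start the mock once and reuse the same reference in every test,
        // rather than instantiating a fresh mock inside each test method.
        mockApi = new WireMockServer(options().dynamicPort());
        mockApi.start();
        mockApi.stubFor(get(urlEqualTo("/api/users"))
                .willReturn(aResponse().withStatus(200).withBody("[]")));
    }

    @Test
    void listsUsers() {
        // Point the code under test at http://localhost:<mockApi.port()>.
    }

    @AfterAll
    static void stopMock() {
        mockApi.stop();
    }
}
```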

What's the best way to handle reporting with test parallelization?

Good test runners tend to have good reporting when tests run in parallel, because they simply export results in the xUnit XML format, which virtually all reporting tools can read and render.
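For reference, the xUnit-style XML that such runners emit looks roughly like this (the suite, test names, and timings are illustrative):

```xml
<testsuite name="LoginTest" tests="2" failures="1" time="2.7">
  <testcase classname="LoginTest" name="performsOneSmallTask" time="1.4"/>
  <testcase classname="LoginTest" name="rejectsBadPassword" time="1.3">
    <failure message="expected error banner was not shown"/>
  </testcase>
</testsuite>
```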