Mobile app development and testing are tricky. The highly dynamic mobile app market, home to a diverse range of devices, operating systems, and screen sizes, makes mobile app testing even harder. How do you determine whether your testing efforts are enough? Is your strategy really the ideal one?

In this webinar, Simon Berner, a Test Automation Engineer at House of Test and a Git instructor, talks about the things that dev and QA teams need to be mindful of while testing mobile apps, as well as the tools and ideas that can make your strategy more effective.

Along the way, Simon was asked questions about the best ways to approach mobile app testing. Here's a roundup of his answers to questions that we couldn’t cover during the webinar:

When writing tests for iOS and Android versions of the same app that have similar functionality, do people use one test script for both or have one test script for each?

If you have to write tests for an app that runs on both iOS and Android, I strongly recommend building your automation framework in a way that supports running the same test script on both platforms. That makes it much easier to create, maintain, and extend your tests.
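One common way to do this is to hide the platform differences behind a driver factory and locate elements through shared accessibility IDs, so the test script itself stays platform-agnostic. Below is a minimal sketch using Appium's Java client; the server URL, app paths, and capability values are placeholders, not something from the webinar.

```java
import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.AppiumDriver;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.ios.IOSDriver;

// Minimal sketch of a platform-agnostic driver factory: the same test script
// asks for a driver and does not care which platform it is running against.
public class DriverFactory {

    public static AppiumDriver create(String platform) throws Exception {
        URL server = new URL("http://127.0.0.1:4723/wd/hub"); // local Appium server (assumed)
        DesiredCapabilities caps = new DesiredCapabilities();

        if ("android".equalsIgnoreCase(platform)) {
            caps.setCapability("platformName", "Android");
            caps.setCapability("automationName", "UiAutomator2");
            caps.setCapability("app", "/path/to/app.apk"); // placeholder path
            return new AndroidDriver(server, caps);
        } else {
            caps.setCapability("platformName", "iOS");
            caps.setCapability("automationName", "XCUITest");
            caps.setCapability("app", "/path/to/app.ipa"); // placeholder path
            return new IOSDriver(server, caps);
        }
    }
}
```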

How do you test when creating a global website (international UX/localization)?

In many cases, localization can be tested by checking the translated texts, which are usually kept in corresponding translation files or in a separate system/service. The real challenge, though, is testing what the UI looks like once it is translated into different languages.
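For the translation files themselves, a simple automated check can at least catch missing keys before anyone looks at the UI. Here is a minimal sketch, assuming the translations live in Java .properties files with hypothetical file names:

```java
import java.io.FileInputStream;
import java.util.Properties;

// Minimal sketch: compare the keys of a base-locale file against a translated
// file and report any keys that have no translation yet. File names are made up.
public class TranslationKeyCheck {

    public static void main(String[] args) throws Exception {
        Properties base = new Properties();
        Properties german = new Properties();
        try (FileInputStream en = new FileInputStream("strings_en.properties");
             FileInputStream de = new FileInputStream("strings_de.properties")) {
            base.load(en);
            german.load(de);
        }
        for (Object key : base.keySet()) {
            if (!german.containsKey(key)) {
                System.out.println("Missing German translation for key: " + key);
            }
        }
    }
}
```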

Which tools would you recommend for both black-box and white-box testing?

This depends on what, why and how you want to test. For mobile app testing, you want to cover your frontend (the app itself) and also the backend services with some tests. For black-box testing of an app, I highly recommend making use of a debugging proxy like Charles Proxy. Further, for UI automation, you can use Appium, XCUITest and Espresso.

For black-box testing of backend services, I can recommend Postman for API testing. For white-box testing (automation), you could, for example, use XCUITest on iOS, and Espresso with JUnit 5 on Android.
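As a white-box example on Android, an Espresso test runs inside the app's process and can reference its activities and view IDs directly. A minimal sketch, assuming a hypothetical MainActivity with a login button and a dashboard title (it uses the standard AndroidJUnit4 runner; running Espresso on JUnit 5 needs an additional extension):

```java
import androidx.test.ext.junit.rules.ActivityScenarioRule;
import androidx.test.ext.junit.runners.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

// Hypothetical white-box UI test: the activity and view IDs are placeholders.
@RunWith(AndroidJUnit4.class)
public class LoginScreenTest {

    @Rule
    public ActivityScenarioRule<MainActivity> activityRule =
            new ActivityScenarioRule<>(MainActivity.class);

    @Test
    public void tappingLoginShowsDashboard() {
        onView(withId(R.id.login_button)).perform(click());
        onView(withId(R.id.dashboard_title)).check(matches(isDisplayed()));
    }
}
```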

Can you recommend some good resources for writing good test cases?

If your goal is to document your exploratory testing endeavors, then asking the following questions might help you to progress:
- Why do you need to write/document tests?
- Who wants to have them, and why in a written form?
- Who will gain value from it?
- Who is going to maintain them?
- Where do you want to put them to make them accessible for others?
- Can session-based testing help? (see Session-Based Test Management)

If your goal is to write some automated tests, this post is still relevant and has some good hints - 10 Rules for Writing Automated Tests - DevOps.com

When you set out to begin a project or a feature, how much time do you budget for testing? If you expect a feature will take 10 hours to develop, what percent of that time do you set aside for testing?

Estimating how long it takes to test “something” (in one go) in the context of software development is hard. Instead, try to slice your testing efforts into smaller pieces by defining a first time-boxed test session. Based on what you are able to cover in that session and how many issues you find, you can better predict how long it will take to finish your testing efforts. Even then, keep in mind that you are never really done with testing; the question is how much risk the team is willing to accept (and/or what budget limitations exist) before it stops testing.

Can you share more information about the ‘Productive Preview App’? How is it different from an internal test app or beta testing?

In our case, the Productive Preview App is a full-fledged version/copy of our Productive Main App, and both are available on the app stores. The difference is that we ship shiny new features to the preview app first and let end-users use/test them in production. Once we have collected enough feedback from the end-users, we decide whether the features need rework, and later whether we will release them in our Productive Main App at all. With that, we follow something similar to a canary release strategy.

How many on-premise devices do you have, and more precisely, how many devices of each type (e.g. Google Pixel 3, OS ver. 10) do you have? How is the allocation of these devices managed?

This site may answer your questions - Device tiers for Automate and App Automate.

How do you run mobile app tests on a CI setup?

My current stack for running mobile app tests through CI consists of these tools: Jenkins, Maven, Java, Appium, and BrowserStack.
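To give a sense of how the pieces fit together: the CI job builds the app with Maven, uploads it to BrowserStack, and the Appium tests are pointed at BrowserStack's remote hub instead of a local Appium server. A minimal sketch, with credentials coming from environment variables set by the CI job and placeholder device values (capability names follow BrowserStack's documented examples; check the current docs for your client version):

```java
import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.android.AndroidDriver;

// Sketch of creating a remote Appium driver against BrowserStack from CI.
// The "bs://..." app URL is produced when the CI job uploads the build.
public class BrowserStackDriverFactory {

    public static AndroidDriver create() throws Exception {
        String user = System.getenv("BROWSERSTACK_USERNAME");
        String key  = System.getenv("BROWSERSTACK_ACCESS_KEY");

        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("device", "Google Pixel 3");   // placeholder device
        caps.setCapability("os_version", "10.0");          // placeholder OS version
        caps.setCapability("app", System.getenv("BROWSERSTACK_APP_URL")); // bs://<app-id>

        URL hub = new URL("https://" + user + ":" + key
                + "@hub-cloud.browserstack.com/wd/hub");
        return new AndroidDriver(hub, caps);
    }
}
```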

What kind of automated tests should we run nightly, and what success rate should we aim for?

That depends on the context of your project and on how much risk you are willing to take by not re-checking things (regression). A good start is to create some sort of smoke test set that covers the most critical scenarios of your app. The success rate, ideally, should be “all green” for all of them. 😉
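One lightweight way to carve out such a smoke set, if you are on JUnit 5, is to tag the critical scenarios and have the nightly job run only that tag (for example via Maven Surefire's groups setting). A minimal, hypothetical sketch:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical example: only tests tagged "smoke" are run by the nightly job,
// e.g. with `mvn test -Dgroups=smoke`.
class CheckoutSmokeTest {

    @Tag("smoke")
    @Test
    void userCanCompleteCheckout() {
        boolean orderConfirmed = completeCheckoutHappyPath(); // placeholder for the real flow
        assertTrue(orderConfirmed, "Checkout should end on the confirmation screen");
    }

    // Stand-in for driving the real checkout flow through the UI.
    private boolean completeCheckoutHappyPath() {
        return true;
    }
}
```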

Could you recommend some best practices for load testing and performance testing?

Have a look here - Performance Testing vs Load Testing vs Stress Testing

Are there any specific QA communities that you are a part of, Simon, and could you recommend some?

I can highly recommend supporting the Ministry of Testing community by attending their conferences and courses and buying a Pro membership. The folks there, and the community itself, are very inclusive and supportive!