Coming up with a device testing strategy

On my current project I’m responsible both for manually testing the app on iOS and Android and for maintaining two separate automation suites for iOS and Android (when I joined the team I had to overhaul each suite, as the app had been rebuilt).

In this blog post, I will share my device testing strategy for testing new features (manual testing) and what guides it:

What must be supported?

Our client has communicated their expectations around which OS versions and devices must be supported. In our project, “support” means the app is expected to render well and work on the stated OS versions and devices, and these are the OS versions and devices we check.

If we do not support something, that does not mean we prevent users from installing the app on their device; it just means we don’t guarantee it will work.

How does this affect my device testing strategy?

Given that my time is very limited (I also have two automation suites to maintain), I focus only on what we do support and make no attempt to check how things render on older, unsupported devices. We have also made sure to communicate this to the client.

I don’t check every little thing on every single device we support, as I don’t have the time. But I do make some assumptions.

Assumptions:

Coming up with a priority order

Based on discussions with my team and the device requirements we have from our client, we have also communicated a priority order for device testing. This means that if we run out of time for testing, we can still ensure the higher priority devices (i.e. the devices carrying higher risk) are covered.

If possible, I think it’s also good to get access to your customers’ device usage data, so you can target your testing at the devices they actually use most; this information is very valuable input.
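To make this concrete, here is a minimal sketch in Kotlin of how a priority-ordered device matrix could be represented. The device names, OS versions and priorities below are purely illustrative and are not our actual client requirements:

```kotlin
// A minimal sketch of a priority-ordered device matrix.
// The devices, OS versions and priorities are illustrative only.
enum class Priority { HIGH, MEDIUM, LOW }

data class TestDevice(
    val name: String,       // e.g. "Pixel 6"
    val osVersion: String,  // e.g. "Android 13"
    val priority: Priority, // driven by client requirements, risk and usage data
)

val deviceMatrix = listOf(
    TestDevice("Pixel 6", "Android 13", Priority.HIGH),
    TestDevice("iPhone 13", "iOS 16", Priority.HIGH),
    TestDevice("Samsung Galaxy A52", "Android 12", Priority.MEDIUM),
    TestDevice("iPhone SE (2nd generation)", "iOS 15", Priority.LOW),
)

// If testing time runs out, work through the devices in priority order
// and stop when the time box is used up.
fun coverageOrder(): List<TestDevice> =
    deviceMatrix.sortedBy { it.priority }
```

The point is the ordering rather than the code: whatever form the matrix takes (a Confluence table works just as well), the higher risk devices come first, so they are never the ones dropped when time runs out.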

Getting input from the developers

Our team is full of experienced developers who know their domain (be it Android or iOS) very well. I like to get testing ideas from them, along with a better understanding of where the risks can lie for various devices. Their input doesn’t dictate how I perform my testing, but it does let me make informed decisions when it comes to device testing.

Communicating my device testing strategy

I document this on Confluence, and in my testing notes for each feature I record which devices I performed my tests on and which tests were performed.

Next steps

If time and budget allowed, I would probably look into setting up the test automation suites to run on a service that executes the tests across a range of devices. At the moment only the Espresso tests run in CI/CD, and I am in the process of adding the XCTests (they are currently run locally).
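As a rough sketch of what the Android side could look like, Gradle Managed Devices (an Android Gradle Plugin feature) can run the Espresso suite against a defined set of emulated devices from CI. The device definition below is an illustrative example, not our actual configuration, and a real-device cloud service would still be needed for physical hardware:

```kotlin
// build.gradle.kts (app module) - illustrative Gradle Managed Devices setup
android {
    testOptions {
        managedDevices {
            devices {
                // An emulated Pixel 2 on API 30; the name and API level are
                // examples, not our real device requirements.
                maybeCreate<com.android.build.api.dsl.ManagedVirtualDevice>("pixel2api30").apply {
                    device = "Pixel 2"
                    apiLevel = 30
                    systemImageSource = "aosp"
                }
            }
        }
    }
}

// The Espresso tests can then be run from CI with:
// ./gradlew pixel2api30DebugAndroidTest
```

The XCTests would need an equivalent step, for example an xcodebuild test invocation against one or more simulator destinations, or a device farm service for real devices.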
