“It was glorious”: I remote usability tested and here’s everything I learnt
Wednesday, August 22, 2018
This article is for designers, developers, QA, PMs and software folk. If you’ve thought about running user tests on your site or app, this article will explain the ins and outs, and pros and cons, of remote usability testing.
During my time at Evernote, I was fortunate to be encouraged to do online usability testing. We used usertesting.com, and after creating hundreds of tests, I learnt how to write practical, useful tests.
Sidenote: I tend to use (and have heard) the expression ‘user testing’. But it seems ‘usability testing’ is the more accepted phrase. User testing seems reserved for the testing of an idea with a set of people; usability testing tests the implementation.
What is remote usability testing?
Remote (or online) usability testing involves someone, somewhere in the world, loading your site or app on to their device. Their job is to follow your set of instructions as they use your site or app. Your job is to work out where and why things fall apart. You’ll see their mouse, or their hand, in the video. You’ll also hear them talk out loud. They might also submit written answers and thoughts at the end.
Pros:
- It is fast.
- It is repeatable.
- It does not need hardware.

Cons:
- It’s not free. At the time of writing, usertesting.com costs $49 per test for small numbers of tests.
- It can miss subtle and nuanced feedback in early stages of product development.
- You can’t test your own hardware.
Why, and when, to do usability testing
1. You think you’ve got something good
Things are looking good. The design matches the spec. You’re feeling fine for the ‘big release’. At this point, you have a bunch of hidden assumptions because of your experience, or familiarity, with your product. But when you put it out in the real world, you might discover:
- Many people don’t read an icon, or other visual design, in the same way;
- The navigation, or app purpose, might be misunderstood entirely — ‘oh, I get it, it’s just like Tinder!’; or
- The way you use your device or phone differs from many people — ‘oh, I always force quit apps’.
Story one: Scannable camera capture
Scannable is an app that we developed at Evernote. It uses the iPhone camera to take a picture of a physical paper document and turns it into a neat PDF.
It simply requires you to point your iPhone’s camera so that the entire document is visible.
The percentage of testers who struggled with that core action was astounding. They were technically competent. They understood they needed to move the camera (iPhone) so that the piece of paper was in shot. But they didn’t know how to move their hand to accomplish that. There was also a small percentage who vaguely waved their phone at the piece of paper a few inches away, hoping to capture the entire page.
We built some magic into that app — but not that much magic.
As a result, we added extra UI elements to guide users to ‘get the shot’ and to encourage them to hold still when they did. We also added some padding to the capture area for a more forgiving experience.
Story two: the Skitch ‘record’ button
Skitch is an image annotation app — essentially, you scribble on pictures. When we designed the app, we wanted to leave as much room for the user’s photo as possible. So, we designed the colour picker tool as a single circle swatch of colour. Tapping this swatch would reveal the other colours.
This seemed very intuitive to us — a core assumption. However, usability testing revealed many users interpreted the swatch as a record button. After all, it was a red circle. So, we added a new first-time experience that expanded all the colours.
Incorrect assumptions are everywhere.
2. When it’s not clear which direction is right
Sometimes you don’t know if a particular design will work, or you want to A/B test. Or there might be strong opinions from all sides of the product team. Now that I’m the designer, developer and PM at teampurr.com, I can have arguments with myself. One way to move on from these disagreements is to run some tests.
3. When you want to test in an environment you don’t have
Online usability testing allows you to specify variables like these.
- Do you want to see your site or app run on low-spec Android devices, or iPad Pros, or maybe even PCs with German keyboards?
- What’s the experience like for users in Europe when the servers are in the US?
- Besides the usual demographics, think about testing with the criteria ‘you must be a current user of [app]’.
Test other companies’ stuff!
A valuable, counter-intuitive approach is to test your competitors’ stuff, even before you build your own version. Scannable was not the first document scanning app; we tested many of the approaches that existing scanning apps took. Then, with that knowledge, we could choose and build on the real-world winning UI elements.
Don’t forget to test prototypes
These can be as simple as paper prototypes. Draw on some paper, take photographs, and use one of the many options to make them clickable.
Or it could be as complex as a fully-functioning ‘snippet’ of an app built with real code.
Caveats: your tester may get stuck in some yet-to-be-built corner. There are guided options when running online tests that can help.
Benefits of online usability testing
- Organising an in-person usability test is work: finding people, paying them in some way, and discovering that many won’t turn up on the day. Online testing saves you from all this.
- Repeatable testing: each tester has the same script.
- You don’t have to watch the results in real-time. Watch at two-times speed! And if you’re not a pro, or don’t have time, you can opt to have an expert write up the results so you never have to watch the video.
- Access to hardware you don’t own. Being indie means you don’t have unlimited resources to buy a drawer of devices.
- A practically unlimited number of user testers.
- Quick turnaround.
This last one is important. Imagine it’s 4pm and you’ve just got the new ‘join a team’ feature in. You’ve got lots of questions. Does the feature have a good UX? Does it work on multi-monitor setups? With online user testing, you can get the majority of your testing done overnight, when many casual testers work. Then, the very next day, you can move ahead with all of your questions answered.
How to write tests for online usability testing
So, you’ve decided to test online. Now to write the ‘script’.
A usability script is a list of one- to two-sentence tasks. It is a listicle of simple and clear instructions for the tester to follow sequentially. In many ways, it’s like computer programming. And like programming, things will go wrong! You’ll want strategies to stop your tester getting stuck in a loop, or wasting too much time on a single process.
Start with top-level tasks.
Let’s say I want to test out Tabby, an app I’m actually working on. It has an account, team and other communication features. Say I want to test three things:
- Signing in;
- Joining a team; and
- Sharing a trophy emoji with your team.
I’ve found I get the most from asking users to try to accomplish something high-level like this. It’s closest to the real world. But then, if they get stuck, you’ll want to walk them through smaller steps. It’s typical for an online user test to take 10–20 minutes. Getting stuck at an initial stage can mean most of that time is spent flailing around; walking testers through smaller steps avoids that.
The numbered steps below are an example script. My own notes are underneath each step.
1. Hi, thanks for taking the test. Please make sure you have a Mac running at least macOS 10.12 ‘Sierra’.
Tip: some testers ignore test requirements. Stating them again here stops those testers from continuing and wasting your time.
2. Download Tabby from http://teampurr.com.
3. Launch the Tabby app by right-clicking on it and selecting ‘Open’.
Tip: often you’ll be testing something that is difficult to install or access — guide your testers through.
4. Once the app is running, describe what you think the app does in 1 or 2 sentences.
Note how I limit the answer duration to avoid five-minute-long answers!
5. Sign in using the following:
Username: [email protected]
This is your test account.
6. Imagine you’re a remote worker, and you want to get a sense of how busy your teammates are.
Tip: keep these instructions simple, and one to two sentences at most. Testers are often slow readers.
7. Imagine your coworker has given you their team-code: FELINE.
8. Try to sign in to the team using the code FELINE. Move to the next step when you’re signed in to the team, or if you get stuck.
Note: start with a high-level to-do. The tester will only see one instruction on the screen at a time, so I repeat the code FELINE. Tell the tester to stay on this instruction, but give them an out if they’re stuck.
9. If you’re stuck at this point, look for the icon with overlapping faces. Click this.
10. Now type in your code FELINE and click ‘Join’.
11. Great! Now that you’re signed in, spend 20 seconds describing what you see and what you think each part means.
Note: I like being encouraging.
12. Your next task is to send a trophy emoji to only your teammates, but nobody else. Try doing that now before you move to the next step. If you get stuck, move on.
13. If you’re stuck, describe in a few sentences how you think it should work.
Note: try to understand their mental model.
14. If you’re stuck, look at the list of teams you’re on. Try clicking each one and describe what you think is happening.
15. Enable only your own team and then click on the trophy emoji.
16. [Insert more tests here.]
17. Great! If you’ve still got time left try the following …
Tip: add extra tests here to get your full 20 minutes’ worth from your fastest testers.
18. Thank you so much for your help. It means so much!
A note on sign-up and sign-in considerations
Online testers are usually more than happy to walk through any sign-up process. But once you’re confident in your sign-up flow, it’s common to want to skip as much of it as possible.
Creating a dummy account with existing data saves valuable time. It also lets you pre-populate the account with typical data or contacts. When doing this, keep in mind that multiple testers will ‘pick up’ your test simultaneously. If you have a single test account, you might get weird results.
One workaround is to simply trigger single tests manually. The other is to create multiple test accounts with dummy data. Make a list in a Google Sheet with the login details. Instruct your tester to go to the Google Sheet, select the next available account, and mark it as ‘used’. In my experience, paid remote testers will happily switch contexts.
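If your backend is scriptable, the multi-account setup above can be automated. Here is a minimal sketch: it only generates local throwaway credentials and a CSV you could import into that Google Sheet. The `create_test_accounts` helper and the `example.com` domain are illustrative assumptions, not part of any real testing service.

```python
import csv
import secrets

def create_test_accounts(n, domain="example.com"):
    """Generate n dummy test accounts with throwaway credentials."""
    accounts = []
    for i in range(1, n + 1):
        accounts.append({
            "username": f"tester{i}@{domain}",
            "password": secrets.token_urlsafe(12),  # unique per account
            "status": "available",  # testers flip this to 'used'
        })
    return accounts

def write_account_sheet(accounts, path="test_accounts.csv"):
    """Write the accounts to a CSV you can import into a Google Sheet."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["username", "password", "status"])
        writer.writeheader()
        writer.writerows(accounts)

accounts = create_test_accounts(5)
write_account_sheet(accounts)
```

You’d still need to create the matching accounts in your own system and pre-populate them with dummy data; this only handles the bookkeeping side.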
Checklist for your own tests
- Have I run the test with a single person first? You’ll often find ‘bugs’ like broken links and instructions that lead down a dead end.
- Did I ask for general impressions?
- Does the test start at a high-level first with fallback steps for every major task?
- Are there ‘bonus’ tasks at the end?
How many tests? Five.
You only need to run five tests. After five tests, you’ll have enough to work on for the next sprint.
If you take only one thing away from this article, let it be this: usability testing needs surprisingly few tests to be practically useful, and more tests don’t tend to reveal anything new.
Analysing the results
You’ll cry, cringe, yell and shout at your screen. Eventually, acceptance. Here’s what I try to take away from each test.
- Did the user complete the test? Did they pass or fail?
- Was the user able to complete the test without ‘helper’ steps?
- What was the average time to complete the test? Track this over many product versions.
- For each test, you’ll likely see heaps of small issues. For example, a weird flicker as an asset loads when it should have been pre-loaded, or odd layouts on less-popular device screens. Turn those into tickets.
- Share 15-second clips with other stakeholders/developers/QA. Any longer and I have found they tune out.
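Tracking pass/fail and average completion time across versions, as suggested above, is easy to script. A minimal sketch, with an illustrative data layout of my own invention (version, seconds taken, completed unaided):

```python
from statistics import mean

# Each entry: (app version, seconds the tester took, completed unaided?)
results = [
    ("1.0", 184, False),
    ("1.0", 203, True),
    ("1.1", 121, True),
    ("1.1", 98, True),
]

def summarise(results):
    """Average completion time and unaided pass rate per version."""
    summary = {}
    for version in sorted({v for v, _, _ in results}):
        runs = [(t, ok) for v, t, ok in results if v == version]
        summary[version] = {
            "avg_seconds": mean(t for t, _ in runs),
            "pass_rate": sum(ok for _, ok in runs) / len(runs),
        }
    return summary

summary = summarise(results)
print(summary)
```

Even a tiny table like this makes regressions between releases obvious at a glance.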
In summary
- Remote testing is fast and efficient.
- Write a robust script that gets testers back on track if they get stuck.
- Run one test first. If all goes well, run the extra four. Five is enough.
I love online user testing. It offers fast feedback and is very controlled. It’s great for remote work. I hope this article encourages you to try it.