Scenario-Driven User Testing for Early Adopters
What is it?
Scenario-driven user testing places a user (or potential user) in a concrete context of use and examines whether they can accomplish a task with what we have designed. It is particularly useful in research and development contexts, where the target user may not exist yet. In fact, the market for the product may not exist yet either. Scenario-driven user testing bends the rules of UX just a little: we recruit participants who are close enough to the target user and ask them to use a little imagination.
How does it work?
We invent a scenario with a context of use. For example: “You are purchasing stationery for an entire corporate marketing department for the 4 months spanning the financial new year. Using our app, place this order with a budget of $300.” In data software we often use something like: “You are in charge of getting the data that will form the evidence for a new initiative. Find how many people currently have [condition] and model how they might be affected.” The task might involve stitching different pieces of data together, or it might reveal that the database itself has unreasonably poor usability, imposing onerous data tasks and unfeasibly high expectations on our potential user. Either way, it gives us an indication of the areas we need to address.
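To make the ingredients concrete, here is a minimal sketch of how a scenario might be written down before a session. The structure and every field name are my own illustration, not an established template:

```python
from dataclasses import dataclass, field

@dataclass
class TestScenario:
    """One scenario-driven user test: a context of use plus a concrete task."""
    persona: str       # who the participant is asked to imagine being
    context: str       # the situation that frames the task
    task: str          # the single action we want them to attempt
    constraints: dict = field(default_factory=dict)       # e.g. budget, time limit
    success_criteria: list = field(default_factory=list)  # what "done" looks like

stationery = TestScenario(
    persona="Office manager in a corporate marketing department",
    context="Stocking up on stationery for the 4 months spanning the financial new year",
    task="Place the full order using our app",
    constraints={"budget_usd": 300},
    success_criteria=["order submitted", "total at or under budget"],
)
```

Writing the scenario down like this forces you to separate the context (which the participant imagines) from the task (which they must actually perform).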
What if I want everyone to use my product?
The most common reason new businesses fail is that no one wants what they are offering. How much time, money and energy would you like to sink into something that nobody thinks is for them, and that nobody will therefore pay for?
Designing for everyone is often said to be designing for no one. But even if your eventual market is in fact everyone in the world, of every age in every country, you still need to define who your first users will be, according to your go-to-market strategy.
Different people in different contexts behave differently. If you have not designed with a specific target market/user segment in mind (hence: user-centred design), your user tests won’t tell you whether you’re any closer to making your product desirable to a segment that is willing to pay. Worse, you’re not focused on making something valuable to anyone in particular, and you may be pulled in more directions than you can cover well. BBQ restaurants do not design with raw vegetarians in mind, so their options for that possible customer are fairly scarce. Likewise, your EspressoFindr app shouldn’t be out chasing customer segments that prefer herbal tea, unless that is the core target user you’re after and you are willing to stake your whole product on it.
With an approach that prioritises a target market/user segment, we give our early adopters what they need to become our champions. We absolutely need early adopters: they are the people who will invest in our future product with their time, money, enthusiasm and PATIENCE while we iron out the wrinkles. If they like what we’ve got to start with, it’s likely they will enjoy how it builds, and they may even request we build something specifically for them as a custom job. Start small and grow.
It is also important to consider the needs of members of our target group with cognitive, motor, auditory or visual impairments, which is what accessibility covers. Quite often there are advantages to choosing users with accessibility needs as an early market; this is what designing for everybody should mean more often.
OK, take me back to that scenario-driven thing.
Gladly. So we need to have a scenario that’s actually applicable to what we’re up to. Here are some widely accepted principles, updated for our purposes, from the Nielsen Norman Group, a leading user experience research and consulting firm:
1. Make the task realistic
This means it cannot demand an unreasonable amount of time, nor can it be too difficult. We don’t want our users to rage-quit the exercise. If we’re making them repeat menial tasks with large margins for error, we need to think about what we’re doing and stop breaking the 5th usability heuristic (error prevention).
Often with new products we have unreasonable expectations of users, especially if the concept is very new or novel to them, or if the person we think might use it doesn’t exist yet as its own profession (often referred to as “seeding new industries”).
2. Make the task actionable
Ask users to perform an action. Quite simple. For example, say we’re a grocery store of some kind with a novel data approach targeting parents; our scenario is: “You are a parent of 2 young children with a busy job. Buy enough groceries for 2 weeks.” Leave them alone, see if they can figure it out, and record all observed mistakes.
It’s not recommended to ask a potential user “how might you do this?” unless they begin describing another process, or something similar from their own role that they do or know is done. In the case above: do they trust the supermarket when it shows no toilet paper, or do they go to a different website, maybe even a Facebook group, to validate the claims about the shortage? This is vital when working with data, as there is still a variety of processes that are not always considered baseline, e.g. what technology people use to analyse the data and how they deal with strange outcomes that could be the result of badly handled null values or poorly maintained metadata from 40 years ago. These can be topics of discussion on forums/blogs/resources on the internet, and therefore form part of the customer journey.
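To make the null-handling point concrete, here is a minimal, hypothetical illustration (the data and column name are invented for this sketch) of how a badly handled null quietly changes the number a participant would report:

```python
import pandas as pd

# Hypothetical records: None means "status unknown", not "does not have the condition".
df = pd.DataFrame({"has_condition": [True, False, None, True, None]})

# Naive: filling missing values with False silently turns "unknown" into "no",
# so the participant reports 2 out of 5.
naive_count = df["has_condition"].fillna(False).sum()

# Honest: report knowns and unknowns separately - 2 of 3 known, 2 unknown.
known_count = df["has_condition"].dropna().sum()
unknown_count = df["has_condition"].isna().sum()

print(naive_count, known_count, unknown_count)  # 2 2 2
```

A participant who hits this kind of quiet distortion mid-task will often go hunting for answers outside your product, which is exactly the part of the journey you want to observe.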
3. Avoid Giving Clues and Describing the Steps
Do not give clues as to how the user interface works; that’s what we’re here to test. If they can’t do it or don’t understand the technology we’re proposing, that’s our fault, and we need to fix it and make it clearer. Sometimes I will demonstrate how the interface is meant to work to gather extra feedback, but only if they really cannot do the task. If our technology isn’t reliable enough for them to make a judgement, that is also valuable feedback. At no point should someone step in and say “if you just did this…”. If the task is something like “Make an insurance rate scoring model for frequent overseas travellers using the following interface” and the answer is “I don’t know what this algorithm means”, re-evaluate whether this is the right user (or whether one exists at all), then ask how they would usually find out what the algorithms do. Sometimes the answer is that a colleague, perhaps a recently graduated PhD student, educates the whole team about machine learning out of the goodness of their heart. Stranger things have happened!
In summary: if the scenario is
- Too complex
- Too distant from what the user knows
- Too time-consuming
- Too low in demonstrated value (i.e. why should I pay for it?)
You should reconsider the value proposition and whether your system can or should be a viable product.
When is Scenario-Driven User Testing Useful?
I have used scenario-driven testing when I need to validate both that our first users can use our first product to a good enough standard and that the concept makes sense to them. Chiefly: it is valuable enough for them to want it, and they actually see the point and value behind it, as opposed to “Yeah, I like it” or “It’s nice, colours are good”. That is not the feedback we are looking for (also a great reason not to make a prototype too pretty: no distractions). “Liking” something doesn’t mean anyone will become a future user or customer.
You will need to have:
- Decided what to test
- Decided who to test with
- Decided what is in scope of this test and what is out
Only test what you can actually test. The goal here is to get something usable, demonstrable, and just robust enough. For example, whether our prototype explains our system well can be in scope. However, this isn’t typically academic research; it should be highly specific to what we need to achieve to move forward and get customers. Our prototype isn’t going to settle a research question like explainable AI, for example.
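It can help to write those decisions down explicitly before any sessions run. A minimal sketch, assuming a simple in-house format (the TestPlan structure and its fields are my own invention, not a standard artefact):

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """Pin down what we're testing, with whom, and where the boundaries sit."""
    scenarios: list       # scenario descriptions to run
    participants: list    # screened early-adopter candidates
    in_scope: list = field(default_factory=list)
    out_of_scope: list = field(default_factory=list)

plan = TestPlan(
    scenarios=["Order 4 months of stationery within a $300 budget"],
    participants=["P1: office manager", "P2: marketing coordinator"],
    in_scope=["Is the system easily explained by the prototype?",
              "Can the task be completed unaided?"],
    out_of_scope=["Explainable AI", "Visual polish"],
)
```

Anything that lands in out_of_scope is a question for a later round, not a failure of this one.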
What do you ask them?
This is a pretty good time to establish the background of who the person is. Early adopters are not a homogeneous bunch, particularly in the data technology sector. In my experience they may be a collection of former academics or super-serious autodidacts.
Here’s an example set for a professional:
- Tell us about your job
- Who do you interact with regularly in your job? Why?
- What are typical tasks in your job?
- How are your tasks evaluated? Who is involved?
- What is the biggest challenge for you in your job?
This can also give you ways to make the scenario more relatable, should it be unfamiliar.
Common Problems
“I’m not sure my product is suitable for Scenario-Driven User Testing?”
If your product solves a problem, then it is most likely suitable for scenario-driven user testing. The only borderline exception I know of from my professional practice is user-testing the development of what was essentially a new programming language, with a highly contextual use case in mind.
“My user tester is not my target user – what do I do?!”
This has a high probability of happening, and there are a few possible reasons:
- You didn’t define your target user well enough. Stop and screen the others you might have scheduled.
- You designed too broadly (see above about designing for everyone) and it’s not relevant to your target user or target scenario. If this happens more than once, regroup and rethink the approach.
- Your target user might not exist. To salvage the interview, ask them hypothetically, in their best judgement, how they might use whatever you’ve created, and check whether there is baseline usability to test against.
- You don’t have enough data on who the target market is to determine whether this person is an outlier or typical. See what you can salvage and determine whether this is a one-off. Some participants don’t read the selection criteria; it happens.
“My user tester sent me sketches of what their ideal UI/chart is like – should I use them?”
If you want to evaluate the ideas they’re proposing with your user group, then sure, thanks for the free labour. I have had data scientists in particular start sketching diagrams because they thought I’d misrepresented something, and I have later used those sketches. I don’t take it personally; I’m just thankful someone left whiteboard markers in the conference room we used, to facilitate discussion. They’re outside your development team, so their perspective isn’t clouded at all. Use their ideas if you think they’re valuable; don’t if there are problems.
“My user interface bombed. What do I do now?”
You’re not a stand-up comic; bombing is only positive here, because now we know what our users don’t want and can focus on what they do want. Generally you should follow Jakob Nielsen’s advice: conserve your resources, and don’t test 20 people on one interface at once, so you can iterate quickly and propose new ideas. Break the users you have for interview into groups of 3 or 5, test, then iterate and refocus. Sometimes this requires more than one interface design to compare and contrast. This was the plan Belinda Yee and I used when we tested several versions of the Protari user interface, combining a modified version of A/B testing with interviews to assess user preferences for different visualisation displays and how data interpretations might best be checked. Kill your darling outliers and move on.
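For what it’s worth, here is a minimal sketch of the cohort-splitting idea (the function and variant names are illustrative, not from any published method):

```python
import random

def assign_cohorts(participants, variants, cohort_size=5, seed=42):
    """Split participants into small cohorts and rotate interface variants
    across them, so each iteration tests one design on a handful of users
    instead of burning all 20 participants on a single interface."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    cohorts = [shuffled[i:i + cohort_size]
               for i in range(0, len(shuffled), cohort_size)]
    # Round-robin assignment: cohort 0 -> variant A, cohort 1 -> variant B, ...
    return {f"cohort_{i}": {"variant": variants[i % len(variants)],
                            "users": cohort}
            for i, cohort in enumerate(cohorts)}

schedule = assign_cohorts([f"P{n}" for n in range(1, 21)],
                          variants=["UI-A", "UI-B"], cohort_size=5)
```

Between cohorts is where the iteration happens: fold what cohort 0 taught you into the design before cohort 1 ever sees it.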
Good luck!