
Reading all the Signs

I remember feeling like my first semiotics class was eye-opening. I had never considered that there could be an order to language or that there was a science to understanding this order. Now, all this is a bit of an aside, but I bring it up because there is a parallel with usability testing. There is an order to how people act, and a tandem act in which evaluators observe to make sense of what people do.

Video helps this latter act considerably. Without it, the evaluator has to scribble notes and will inevitably miss things. With the video, the tester has the audio, including all of the verbal responses, the movement of the mouse, and the facial expressions. All of these are helpful in assessing usability.

The key is for the tester to create a framework where users feel comfortable testing the site and sharing their ideas. Once that framework is in place, one will find very useful information; without it, the user won’t feel comfortable sharing. A script assures the tester that they are saying the same thing each time, and it also helps the tester feel ready to put the user at ease.

Once that is done, one has the long task of making sense of the data. Often, wading through all the information is almost as much fun as generating the data. Interpretation of evaluation data is the process of bringing order to disorder by noticing patterns. Once the patterns are clear, a good tester develops a scheme to make sure that these patterns are obvious to anyone who reads the deliverable.
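How one brings that order about will vary, but here is a minimal sketch, assuming the session videos have already been broken into short, tagged observations (the tags and data below are invented for illustration). Simply counting the tags is one way to let patterns surface before you start writing the deliverable.

    from collections import Counter

    # Hypothetical observations pulled from session videos; each note is tagged
    # with the part of the site and the kind of trouble observed.
    observations = [
        {"participant": "P1", "tags": ["navigation", "back-button-confusion"]},
        {"participant": "P2", "tags": ["navigation", "label-wording"]},
        {"participant": "P3", "tags": ["back-button-confusion"]},
        {"participant": "P4", "tags": ["navigation", "back-button-confusion"]},
    ]

    # Tally how often each tag appears across all sessions.
    tag_counts = Counter(tag for obs in observations for tag in obs["tags"])

    # The most frequent tags are candidate patterns to highlight in the report.
    for tag, count in tag_counts.most_common():
        print(f"{tag}: seen in {count} observation(s)")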

Listening and Hearing

Talking is my occupation. Teaching is, in a manner of speaking, about talking and talking and talking. Or, I should say that teaching is about attempting to communicate an idea in multiple ways. Some of those ways are about your voice, others are about hearing the voices of others, and sometimes it’s about reiterating their voice.

This week, I have found my voice increasingly muted by laryngitis, and it has made me think a little about the role of voice in my work, both in teaching and in evaluation. It almost seems as if you might not need a voice at all in order to allow your participants to share theirs. But evaluation and testing aren’t just about listening; they are about sharing, framing, and positioning as well. To honor the time spent by participants, one must create a situation that sets up the participant’s experience.

It isn’t just about the words that one says, but also about the tone of voice, the pacing of what is said, and even the emotion inherent in the phrases used. The evaluator or user experience tester is not unlike a hostess, setting up everything to put their guest at ease. In a situation that is carefully organized, the participant is then able to share their ideas.

Quantitative and Qualitative Data

Testing and scholarly research are somewhat similar. In both, you have a problem and you want to understand why that problem is occurring. Both use quantitative and qualitative data. But in research, you want to be conclusive, exhaustive, and categorical.

In testing, you just want to make the problem better. So, in testing, you don’t choose every way of understanding the problem, only a few methods. The key is to choose methods that actually help you assess the problem accurately. Success rate, for example, can help you assess whether people are accomplishing a particular task. But if your goal is for users to explore much of your site, then you want to measure how many pages they are viewing.
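As a rough sketch of the difference, assuming you have session records from a test (the field names and numbers below are invented), success rate and breadth of use come from different slices of the same data:

    # Hypothetical session records from a usability test.
    sessions = [
        {"participant": "P1", "completed_task": True, "pages_viewed": 7},
        {"participant": "P2", "completed_task": False, "pages_viewed": 2},
        {"participant": "P3", "completed_task": True, "pages_viewed": 12},
        {"participant": "P4", "completed_task": True, "pages_viewed": 3},
    ]

    # Success rate: did people accomplish the particular task?
    success_rate = sum(s["completed_task"] for s in sessions) / len(sessions)

    # Breadth of use: how many pages are they viewing, on average?
    average_pages = sum(s["pages_viewed"] for s in sessions) / len(sessions)

    print(f"Success rate: {success_rate:.0%}")
    print(f"Average pages viewed: {average_pages:.1f}")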

There is a useful diagram on the Nielsen Norman Group website that illustrates how particular testing tools relate to behavioral or attitudinal data. The article also illustrates which questions are best answered by quantitative data, for example, how much or how often.

Quantitative data should likely be paired with qualitative data. After all, if you know that most of the people using your app stop at a certain point, you still don’t know why. It might be because it is so terribly boring, or because it is so terribly interesting. Or it could be that the text becomes illegible. Or…well, it could be anything. So pairing the quantitative data, often found in analytics, with qualitative data gives you the information you need to understand the problem.
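One way to picture that pairing, as a sketch with made-up numbers and notes: let the analytics tell you where most people stop, then pull the qualitative session notes attached to that same screen to suggest why.

    # Hypothetical analytics: how many users reached each screen of the app.
    reach_counts = {"home": 100, "map": 62, "object-detail": 18, "audio-tour": 5}

    # Hypothetical qualitative notes from test sessions, keyed by screen.
    session_notes = {
        "object-detail": ["text too small to read", "didn't notice the scroll cue"],
        "audio-tour": ["unsure what the headphone icon meant"],
    }

    # Quantitative signal: the biggest drop between consecutive screens.
    screens = list(reach_counts)
    drops = {
        screens[i + 1]: reach_counts[screens[i]] - reach_counts[screens[i + 1]]
        for i in range(len(screens) - 1)
    }
    worst_screen = max(drops, key=drops.get)

    # Qualitative pairing: the notes that hint at *why* people stop there.
    print(f"Largest drop-off at: {worst_screen} ({drops[worst_screen]} users lost)")
    for note in session_notes.get(worst_screen, ["no session notes for this screen"]):
        print(f" - {note}")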

To go back to my original statement, testing helps you know enough to fix a particular app or website. You can make the situation better for the user. Quantitative and qualitative data are the tools that you use to make these improvement decisions. But in terms of scholarship, you would likely need many, many more points of feedback to make a categorical assessment. So, while you might be able to use a small study to fix a particular mobile app, this doesn’t necessarily help you make broad generalizations about all mobile apps.

Tasks, Tasks, Tasks

You might have a problem and a desire to solve that problem, but where do you go next? Imagine being in a situation where your museum app is opened regularly but then no other features are accessed, as assessed through analytics. You know that you need to figure out why this is happening. What is your first step?

User testing, such as task analysis, can help you understand where challenges are occurring. To use your money wisely, you should test with demographics that mimic those who are already using your app. Right now, you are hoping to figure out why the people who are using your app are having problems. Of course, the challenges with the app might also be turning off those who are not even logging in. But leave that challenge aside for now.

So, start with the types of people who are using your app. Think of the ways that you can categorize them. What age are they? What gender? Education level? Salary level? Are they familiar with technology? Are they museum visitors? Members? After making this snapshot of users, you will need to create a screener that helps you build a testing sample that mimics your audience. You might even create a faceted matrix to help you get the right mix of participants.
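A faceted matrix can be as simple as a set of quotas per facet. The facets and quotas below are invented for illustration; the idea is only that each screener response is checked against the mix you still need.

    # Hypothetical recruiting quotas: participants still needed per facet value.
    quotas = {
        "age": {"18-34": 3, "35-54": 3, "55+": 2},
        "tech_comfort": {"low": 3, "high": 5},
        "relationship": {"member": 2, "visitor": 6},
    }

    def accept(recruit):
        """Accept a screener response only if it fills a remaining quota in every facet."""
        if any(quotas[facet].get(recruit[facet], 0) == 0 for facet in quotas):
            return False
        for facet in quotas:
            quotas[facet][recruit[facet]] -= 1
        return True

    # Example screener responses (also invented).
    print(accept({"age": "35-54", "tech_comfort": "low", "relationship": "visitor"}))  # True
    print(accept({"age": "55+", "tech_comfort": "high", "relationship": "member"}))    # True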

After that, you begin thinking about the scenario and tasks you would want to assess during task analysis. You will need to think of something that is not so prescriptive as to miss challenges, but also not so broad as to hide trends. Try to think of actual scenarios in your institution. Once you have created your scenario, say, you are a new visitor to the museum looking for ivory sculptures and you have downloaded the app onto your phone, then you need to create a list of tasks. You want to develop tasks based on what you have already seen. Your tasks should help you explore the ways that users employ all of the facets you are investigating.
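Writing the scenario and tasks down in a structured form makes it easier to run the session identically each time. This sketch reuses the ivory-sculpture scenario above; the specific tasks are invented examples.

    # The scenario frames the session; the tasks are what you actually observe.
    scenario = (
        "You are a new visitor to the museum looking for ivory sculptures, "
        "and you have downloaded the app onto your phone."
    )

    # Hypothetical tasks: concrete enough to reveal problems, broad enough to show trends.
    tasks = [
        "Find out which gallery holds the ivory sculptures.",
        "Locate that gallery on the museum map.",
        "Open the page for one ivory sculpture and start its audio description.",
    ]

    print(scenario)
    for number, task in enumerate(tasks, start=1):
        print(f"Task {number}: {task}")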

Finally, you will want to make sure to run this task analysis exactly the same way with each participant. In the end, hopefully, you will be able to see trends across the users’ problems. You might find that everyone is having trouble with the login screen, or you might find that people in a particular demographic have a hard time seeing the exit buttons.
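To make those trends visible, one simple approach, again with invented data, is to cross-tabulate which participants had trouble on which task alongside a demographic facet from the screener:

    from collections import defaultdict

    # Hypothetical results: tasks each participant struggled with, plus one demographic facet.
    results = [
        {"participant": "P1", "age": "55+", "failed_tasks": ["login", "exit-button"]},
        {"participant": "P2", "age": "18-34", "failed_tasks": ["login"]},
        {"participant": "P3", "age": "55+", "failed_tasks": ["exit-button"]},
        {"participant": "P4", "age": "35-54", "failed_tasks": ["login"]},
    ]

    # Count failures per task, and per task within each age group.
    per_task = defaultdict(int)
    per_task_by_age = defaultdict(lambda: defaultdict(int))
    for r in results:
        for task in r["failed_tasks"]:
            per_task[task] += 1
            per_task_by_age[task][r["age"]] += 1

    for task, count in sorted(per_task.items(), key=lambda kv: -kv[1]):
        print(f"{task}: {count} of {len(results)} participants had trouble "
              f"({dict(per_task_by_age[task])})")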

In the end, task analysis is quite useful, because you are creating a systematized way of observing how a number of people use the same digital product.  It allows you to see where there are challenges in order to make improvements.