Category Archives: usability

Usability for Users; Consumerability for Consumers?

Usability is one of those words with a faint whiff of jargon about it. In pitching the power of eye tracking, card sorts, and participatory design, you are wisest to avoid all those terms; they alienate your clients. As John Rhodes discusses in Selling Usability, focusing on the customers, rather than on the testing, will help people understand the end goal of testing.

To get to that goal, you will need to design a test, run it, gather the results, and analyze them. After all of that, you will still need to make sense of the data. With eye tracking, for example, you will need to help make sense of heat maps.
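
As a rough illustration of where a heat map comes from, here is a minimal sketch in Python that bins fixation points into a grid and renders them as a density overlay. The CSV file, its column names, and the screen resolution are assumptions for the example, not any particular eye tracker's export format.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export: one row per fixation, with screen coordinates in pixels.
# The file name and column names ("x_px", "y_px") are placeholders.
fixations = pd.read_csv("fixations.csv")

SCREEN_W, SCREEN_H = 1280, 800  # assumed stimulus resolution

# Count fixations per grid cell; dense cells are the "hot" regions.
heat, _, _ = np.histogram2d(
    fixations["x_px"], fixations["y_px"],
    bins=[64, 40], range=[[0, SCREEN_W], [0, SCREEN_H]],
)

# Transpose so rows follow vertical screen position, then draw the density map.
plt.imshow(heat.T, extent=[0, SCREEN_W, SCREEN_H, 0], cmap="hot")
plt.colorbar(label="fixation count")
plt.title("Fixation density (hypothetical session)")
plt.show()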

Visualizations, when interpreted well and correlated with think-aloud feedback, can translate data into meaning. A final report puts everything together, making that meaning visible. In the end, usability could be said to be the study of users and interfaces. But you could also think of it as understanding customers or consumers, and then finding a way to help your clients see what you have come to understand.

Eyetracking

I am a starer. It doesn’t help that my eyes are on the large side. Yesterday, sitting in the airport, I was struck by how many people assumed I was looking at them, when I was just staring out into space. So I have a natural bias to question eye-tracking studies. But there is a real difference between the ways your face (and your eyes) behave in open-ended learning situations and in information-seeking moments.

On websites and mobile devices, you are using these tools for a certain end. You are seeking something specific. Much of your interaction with the interface could be summarized by the phrase, “how do I get to the next place, page, part, link, etc.” In other words, your gaze is often the moment before you take a navigational step.

Eye-tracking studies have real promise for understanding usage in an unmediated way. Even the smoothest researcher puts their participant on the spot; with eye tracking, the participant acts in a somewhat more natural way. Tools like the Tobii do require participants to sit very still, which is hardly real-world. But at least they are not being artificially prompted by a person.

Eye-tracking studies are not just about where people look, but also about understanding this in relation to time. What did they look at first? What are the patterns in what they looked at? What didn’t they look at? In other words, one is assessing behavior. This can then be correlated with attitudinal data, from their think-alouds, for example. But at the core, eye tracking is about behavior.
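
To make that concrete, one simple behavioral measure is time-to-first-fixation per area of interest (AOI): when, if ever, did the gaze first land on the logo, the navigation, the call to action? Here is a minimal sketch in Python; the gaze samples and AOI rectangles are invented for illustration and are not the output of any particular eye tracker.

# Minimal sketch: time-to-first-fixation per area of interest (AOI).
# The gaze samples and AOI rectangles below are made up for illustration.

gaze_samples = [  # (timestamp in ms, x, y)
    (0, 640, 100), (120, 655, 110), (480, 200, 600), (900, 1100, 80),
]

aois = {  # name -> (left, top, right, bottom) in pixels
    "logo": (0, 0, 300, 150),
    "nav": (300, 0, 1280, 150),
    "cta_button": (1000, 40, 1200, 120),
}

def first_fixation_times(samples, aois):
    """Return the first timestamp at which each AOI was looked at, or None."""
    first_seen = {name: None for name in aois}
    for t, x, y in samples:
        for name, (left, top, right, bottom) in aois.items():
            if first_seen[name] is None and left <= x <= right and top <= y <= bottom:
                first_seen[name] = t
    return first_seen

print(first_fixation_times(gaze_samples, aois))
# -> {'logo': None, 'nav': 0, 'cta_button': 900}: the navigation drew the first
#    look, the call to action came later, and the logo was never fixated.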

More Mobile Testing

I have continued to ruminate on mobile testing. Given the pervasiveness of mobile, getting mobile right is imperative. But at the same time, the testing options have major limitations. After all, no one actually hugs a laptop while searching for the ideal episode of Gilmore Girls on Netflix on their Surface. And they probably don’t use a sled when flipping through Pandora on their iPhone. Most testing scenarios just don’t mimic the real world; in fact, they are very different from it.

It makes me more sure that there has to be someone out there who can create the ideal mobile testing software. The big challenge is the fact that there are many different types of mobile. There is iOS, on both phones and tablets. There is Android, and then there are Windows tablets. Given the diversity, one might need to create a number of different mobile testing systems. (Apple has a vested interest in locking down its system; it controls access through its developer program.)

Mobile Testing

Mobile is ubiquitous. We use phones to check the weather, read the paper, and take pictures. There are now more phones than adults on Earth. Despite the complete diffusion of mobile, there are still challenges to creating ideal mobile experiences.

Testing remotely has some powerful pluses. Being a fly on the wall helps you understand the unmediated, natural course of your users’ actions. Services like Loop11 make remote testing on a computer easy. But there isn’t a perfect solution on mobile. Many resourceful testers have figured out workarounds to capture similar feedback.

It does make me feel that a resourceful entrepreneur needs to figure out a way to do remote testing of mobile apps the way one uses Loop11 for the desktop. After all, remote testing is a way to understand how people might really use something.

Unmoderated Remote Testing

Remote testing is incredibly useful for websites. After all, the World Wide Web is just that: global. Remote testing means that one can get feedback unencumbered by participants’ locations. Rather than intercepting people physically, one can recruit people as they go about their business on the site you are testing, for example. Recruitment is no longer bound to location. And with sites like Loop11, it is super easy to recruit users: just one link, and you are ready. Without the need for a synchronous appointment, you can rack up numerous user tests.

There are drawbacks to remote testing. The most important is that one loses much in the way of emotion, expressions, and verbal feedback from users. This can make it challenging to understand why users click the buttons they click.

However, remote user testing can offer high-volume feedback and identify trends. In other words, while you might not be able to say why someone did something, you can say quite clearly what people did and which trends stand out.
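
As a sketch of what “trends, not reasons” can look like in practice, the Python below aggregates task completion and time-on-task across many unmoderated sessions. The CSV and its column names are assumptions for the example, not the export format of Loop11 or any other service.

import pandas as pd

# Hypothetical export of unmoderated sessions: one row per participant per task.
# The file name and columns ("task", "completed", "seconds") are placeholders.
results = pd.read_csv("remote_sessions.csv")

summary = results.groupby("task").agg(
    participants=("completed", "size"),
    completion_rate=("completed", "mean"),   # "completed" assumed to be 0/1
    median_seconds=("seconds", "median"),
)

# Tasks with low completion rates or unusually long times show where the trend
# is, even if the "why" has to wait for a moderated follow-up.
print(summary.sort_values("completion_rate"))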

Moderated User Testing

Moderated user testing is a useful way for researchers to work with users who are not in the same location. There are certain challenges, such as getting incentives to participants, but at the same time there are enormous benefits, such as being able to reach users globally.

For the tester, videotaping the session is essential.  Moderated user testing can capture facial expressions and user quotes, but it is often challenging to read and assess all of that in real time.

One drawback is that appointments need to be made to run the test; this isn’t an asynchronous experience, in other words. Scheduling with a remote participant can be challenging, and on certain projects you might find that you have a lot of no-shows. So it can be time-consuming.

But once everything is captured, even with a small subset of users, one can gain quite a lot of feedback, particularly attitudinal feedback. Moderated user testing is also useful in that it allows attitudinal feedback to be correlated with behavioral feedback.

Testing from Afar

Testing can be incredibly useful, even essential, to rolling out a new product. But it can be cost-prohibitive. Small firms might not have the resources to find the right users, employ testers, set up a room with specialized one-way glass, and so on. Of course, people do testing in this way for important reasons: if you have a specialized setup, you need the participants on-site.

But often remote research has significant advantages. If you are developing a website with global reach, testing remotely allows you to build a diverse pool of participants. You save money on space and setup. Newer tools allow researchers to connect remotely and see not only every keystroke but also the participants’ facial expressions.

Remote testing isn’t without its challenges. If your connection to a remote participant fails, you are out of luck. You might not be able to observe facial expressions clearly through the interface. You do have to find a way to send incentives to remote participants. And you might get pushback from your stakeholders, as remote testing isn’t universally accepted.

Even with these possible downsides, the significant benefits make remote research an important tool for user experience researchers.