Reading All the Signs

I remember feeling like my first semiotics class was eye-opening. I had never considered that there could be an order to language, or that there was a science to understanding that order. Now, all this is a bit of an aside, but I bring it up because there is a parallel with usability testing. There is both an order to how people act and a tandem act in which evaluators observe in order to make sense of what people do.

Video helps this latter act considerably. Without it, the evaluator will need to scribble notes and inevitably miss things. With the video, the tester has the audio, including all the verbal responses, the gestures of the mouse, and the facial expressions. All of these tools are helpful in assessing usability.

The key is for the tester to create a framework where users feel comfortable testing the site and sharing their ideas. Once that framework is in place, one will find very useful information. But, without it, the user won’t feel comfortable sharing. A script helps the tester be assured that they are saying the same thing each time. But this script also helps the tester feel ready to put their user at ease.

Once that is done, one has the long task of making sense of the data. Often, wading through all the information is almost as much fun as generating the data. Interpretation of evaluation data is the process of bringing order to disorder by noticing patterns. Once the patterns are clear, a good tester then develops a scheme to make sure that these patterns are obvious to anyone who reads the deliverable.

Listening and Hearing

Talking is my occupation. Teaching is, in a manner of speaking, about talking and talking and talking. Or, I should say that teaching is about attempting to communicate an idea in multiple ways. Some of those ways are about your voice, others are about hearing the voices of others, and sometimes it’s about reiterating their voices.

This week, I have found my voice increasingly muted by laryngitis, and it has made me think a little about the role of voice in my work, both in teaching and in evaluation. It almost seems as if you might not need a voice at all in order to allow your participants to share theirs. But, really, evaluation and testing aren’t just about listening; they are about sharing, framing, and positioning as well. To honor the time spent by participants, one must create a situation that sets up the participant’s experience.

It isn’t just about the words that one says, but also about the tone of voice, the pacing of what is said, and even the emotion inherent in the phrases. The evaluator or user experience tester is not unlike a hostess, setting up everything to put their guest at ease. In a situation that is carefully organized, the participant is then able to share their ideas.

Quantitative and Qualitative Data

Testing and scholarly research are sort of similar. You have a problem, and you want to understand why that problem is occurring, for example.  Both use quantitative and qualitative data. But, in research, you want to be conclusive, exhaustive, and categorical.

In testing, you just want to make the problem better. So, in that way, in testing, you don’t choose all the ways of understanding the problem, but a few methods. The key is to make sure to choose methods that actually help you assess the problem accurately. Success rate, for example, can help you assess whether people are accomplishing a particular task. But if your goal is for users to employ much of your site, then you want to measure how many pages they are viewing.
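To make those two measures concrete, here is a minimal sketch in Python, assuming hypothetical session records from a small test; the field names and numbers are invented for illustration.

```python
# Hypothetical session records from a small usability test (invented values).
sessions = [
    {"participant": "P1", "completed": True,  "pages": 7},
    {"participant": "P2", "completed": False, "pages": 2},
    {"participant": "P3", "completed": True,  "pages": 5},
]

# Success rate: the share of participants who accomplished the task.
success_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Breadth of use: the average number of pages viewed per session.
avg_pages = sum(s["pages"] for s in sessions) / len(sessions)

print(f"Success rate: {success_rate:.0%}")
print(f"Average pages viewed: {avg_pages:.1f}")
```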

There is a useful diagram on the Nielsen Norman Group website that illustrates the ways that particular testing tools relate to behavioral or attitudinal data. The article also shows which questions are best answered by quantitative data, such as how much or how often.

Quantitative data should likely be paired with qualitative data. After all, if you know that most of the people going to your app stop at a certain point, you don’t know why. It might be because it is so terribly boring, or because it is so terribly interesting. Or, it could be that the text becomes illegible. Or…well, it could be anything. So, pairing the quantitative data, often found in analytics, with qualitative data gives you the information you need to understand the problem.
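As a rough sketch of the quantitative half of that pairing, the snippet below finds the step where the largest share of sessions stops; the step names and counts are invented analytics data, and the "why" still has to come from qualitative observation at that same step.

```python
# Invented analytics counts: how many sessions reached each step of the app.
sessions_reaching_step = {
    "open app": 1000,
    "view map": 620,
    "open object page": 180,
    "play audio tour": 150,
}

steps = list(sessions_reaching_step.items())
drops = [(steps[i][0], steps[i][1] - steps[i + 1][1]) for i in range(len(steps) - 1)]

# The step after which the most sessions disappear -- the "where", not the "why".
worst_step, lost = max(drops, key=lambda d: d[1])
print(f"Largest drop-off comes after '{worst_step}': {lost} sessions lost")
```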

To go back to my original statement, testing helps you know enough to fix a particular app or website. You can make the situation better for the user. Quantitative and qualitative data are the tools that you use to make these improvement decisions. But, in terms of scholarship, you would likely need many, many more points of feedback to make a categorical assessment. So, while you might be able to use a small study to fix a particular mobile app, this doesn’t necessarily help you make broad generalizations about all mobile apps.

Tasks, Tasks, Tasks

You might have a problem and a desire to solve that problem, but where do you go next? Imagine being in a situation where your museum app is opened regularly but then no other features are accessed, as shown by your analytics. You know that you need to figure out why this is happening. What is your first step?

User testing, such as task analysis, can help you understand where the challenges are occurring. To use your money wisely, you should test with participants whose demographics mimic those of the people already using your app. Right now, you are hoping to figure out why the people who are using your app are having problems. Of course, the challenges with the app might also be turning off those who are not even logging in. But, leave that challenge aside right now.

So, start with the types of people who are using your app. Think of the ways that you can categorize them. What age are they? What gender? Education level? Salary level? Are they familiar with technology? Are they museum visitors? Members? After making this snapshot of users, you will need to create a screener that helps you build a testing sample that mimics your audience. You might even create a faceted matrix to help you get the right mix of participants.
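Here is a small, hypothetical sketch of what such a faceted matrix might look like in Python; the facets and quotas are invented for illustration, and a real screener survey would supply each candidate’s answers.

```python
# Invented quotas for a faceted recruiting matrix: membership x tech comfort.
quotas = {
    ("member", "frequent tech user"): 2,
    ("member", "infrequent tech user"): 1,
    ("non-member", "frequent tech user"): 2,
    ("non-member", "infrequent tech user"): 1,
}
recruited = {cell: 0 for cell in quotas}

def try_recruit(membership: str, tech_comfort: str) -> bool:
    """Accept a screened candidate only if their cell in the matrix still has room."""
    cell = (membership, tech_comfort)
    if cell in quotas and recruited[cell] < quotas[cell]:
        recruited[cell] += 1
        return True
    return False

print(try_recruit("member", "frequent tech user"))  # True: slot available
print(try_recruit("member", "frequent tech user"))  # True: second slot
print(try_recruit("member", "frequent tech user"))  # False: quota of 2 already met
```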

After that, you begin thinking about the scenario and tasks you would want to assess during task analysis. You will need to try to think of something that is not so prescriptive as to miss challenges, nor so broad as to hide trends. Try to think of actual scenarios in your institution. Once you have created your scenario (say, you are a new visitor to the museum looking for ivory sculptures and you have downloaded the app onto your phone), you need to create a list of tasks. You want to develop tasks based on items that you have already seen. Your tasks should help you explore the ways that users employ all of the facets you are investigating.

Finally, you will want to make sure to run this task analysis in exactly the same way with each participant. In the end, hopefully, you will be able to see trends across the users’ problems. You might find that everyone is having trouble with the login screen. Or you might find that people in a particular demographic have a hard time seeing the exit buttons.
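A small sketch of that last step, with invented observations, shows how tallying issues across participants surfaces both kinds of trends: problems shared by everyone and problems clustered in one demographic.

```python
from collections import Counter

# Invented observations: (participant, demographic, issue seen during the task).
observations = [
    ("P1", "under 30", "login screen confusing"),
    ("P2", "over 60",  "login screen confusing"),
    ("P3", "over 60",  "exit button hard to see"),
    ("P4", "over 60",  "exit button hard to see"),
]

issues_overall = Counter(issue for _, _, issue in observations)
issues_by_group = Counter((issue, group) for _, group, issue in observations)

print(issues_overall.most_common())   # issues affecting everyone rise to the top
print(issues_by_group.most_common())  # issues concentrated in one demographic stand out
```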

In the end, task analysis is quite useful, because you are creating a systematized way of observing how a number of people use the same digital product.  It allows you to see where there are challenges in order to make improvements.

Formative vs. Summative

Do you have that sweet or salty conversation with people? For your information, I am salty. If you actually know me, this is not a surprise. In reading about formative testing versus summative testing, I have been trying to really understand when each is best. Is this more a personal preference on the part of the tester, or is it due to the requirements of the client?

Formative testing invites users to talk through choices.  It is useful for its low-tech implementation, and effective for gaining quick insight.  But, there is the challenge of having an intercessor there.  In the end, it is cost-effective, particularly when you don’t have a working prototype.  But, summative testing is useful in seeing if what you have actually works.  Additionally, if done remotely, there isn’t a moderator to intercede.

In terms of personal preference, I really like formative testing, for its mix of qualitative and quantitative data. But I also believe that it isn’t really a personal preference thing. This is not so much about whether you are inherently sweet or salty, but rather about where in the meal you, as a consultant, have shown up. If you are invited at the beginning, you get to choose, and you might choose the one that you prefer. But, often, you show up after the meal has been ordered, and it is already being cooked. As such, you can taste the soup on the stove and offer suggestions for improvement, but you don’t get to say which ingredients shouldn’t go in the pot. Or, more simply, you are often going to be invited to look at an interactive or website that is already made, and so summative evaluation is the best choice for the client.

User Testing vs. Research

When I think of the term ivory tower, I have a very clear mental image. A glistening white tower, rectilinear in its aspect, is poised atop a rocky outcropping, on a lonely island. The beach, an access point to the tower, has a pier on it. Museums are like that beach. They are in the same vicinity as the ivory tower. They have the same zip code, if you will. But they look drastically different, and their level of access is incredibly different.

Museums sit at an interstitial point between academia and so many other things: leisure spaces, K-12 classrooms, studio classes, edutainment. In terms of understanding visitors and the types of digital interpretation that they produce, museums can learn some things from both disciplines. First, museum studies and information science both offer fruitful research that can inform practice. But, second, research and user testing are not the same thing. Research is in-depth and large-scale. Research is often predicated on big numbers in order to be able to demonstrate statistical significance. In museums, run by people with graduate degrees earned through rigorous research and rousing defenses, there is an important role for this type of visitor research.

But user testing is a different sort of animal. It is something that can be done in one day. It can employ as few as three people to demonstrate a trend. In other words, you are not writing a full report to show to the board of trustees. Testing helps you keep the digital project going and makes sure you are on the right track. User testing is a check and a balance rather than chapter and verse on your project.
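There is a commonly cited back-of-the-envelope formula, from Nielsen and Landauer, behind why such small numbers can work: the chance of seeing a given problem at least once with n participants is roughly 1 - (1 - L)^n, where L is the chance that any single participant hits the problem (often estimated at about 0.31). The sketch below treats that estimate as an assumption, purely for illustration.

```python
# Assumed per-participant chance of encountering a given problem (an estimate, not a law).
L = 0.31

for n in (3, 5, 10):
    p_seen = 1 - (1 - L) ** n
    print(f"{n} participants: ~{p_seen:.0%} chance of surfacing the problem at least once")
# With 3 participants the chance is already roughly two in three.
```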

JOINT STATEMENT FROM MUSEUM BLOGGERS & COLLEAGUES ON FERGUSON

The recent series of events, from Ferguson to Cleveland and New York, have created a watershed moment. Things must change. New laws and policies may help, but any movement toward greater cultural and racial understanding and communication must be supported by our country’s cultural and educational infrastructure. Museums are a part of this educational and cultural network. What should be our role(s)?

Schools and other arts organizations are rising to the challenge. University law schools are hosting seminars on Ferguson. Colleges are addressing greater cultural and racial understanding in various courses. National education organizations and individual teachers are developing relevant curriculum resources, including the #FergusonSyllabus project initiated by Dr. Marcia Chatelain. Artists and arts organizations are contributing their spaces and their creative energies. And pop culture icons, from basketball players to rock stars, are making highly visible commentary with their clothes and voices.

Where do museums fit in? Some might say that only museums with specific African American collections have a role, or perhaps only museums situated in the communities where these events have occurred. As mediators of culture, all museums should commit to identifying how to connect to relevant contemporary issues irrespective of collection, focus, or mission.

We are a community of museum bloggers who write from a variety of perspectives and museum disciplines.  Yet our posts contain similar phrases such as  “21st century museums,” “changing museum paradigms,” “inclusiveness,” “co-curation,” “participatory” and “the museum as forum.”  We believe that strong connections should exist between museums and their communities. Forging those connections means listening and responding to those we serve and those we wish to serve.

There is hardly a community in the U.S. that is untouched by the reverberations emanating from Ferguson and its aftermath. Therefore we believe that museums everywhere should get involved. What should be our role — as institutions that claim to conduct their activities for the public benefit — in the face of ongoing struggles for greater social justice both at the local and national level?

We urge museums to consider these questions by first looking within. Are staff members talking about Ferguson and the deeper issues it raises? How do they relate to the mission and audience of your museum?  Do you have volunteers? What are they thinking and saying? How can the museum help volunteers and partners address their own questions about race, violence, and community?

We urge museums to look to their communities. Are there civic organizations in your area that are hosting conversations? Could you offer your auditorium as a meeting place? Could your director or other senior staff join local initiatives on this topic? If your museum has not until now been involved in community discussions, you may be met at first with suspicion as to your intentions. But now is a great time to start being involved.

Join with your community in addressing these issues. Museums may offer a unique range of resources and support to civic groups that are hoping to organize workshops or public conversations. Museums may want to use this moment not only to “respond” but also to “invest” in conversations and partnerships that call out inequity and racism and commit to positive change.

We invite you to join us in amplifying this statement. As of now, only the Association of African American Museums has issued a formal statement about the larger issues related to Ferguson, Cleveland, and Staten Island. We believe that the silence of other museum organizations sends a message that these issues are the concern only of African Americans and African American museums. We know that this is not the case. This is a concern of all Americans. We are seeing in a variety of media – blogs, public statements, and conversations on Twitter and Facebook – that colleagues of all racial and ethnic backgrounds are concerned and are seeking guidance and dialogue in understanding the role of museums regarding these troubling events. We hope that organizations such as the American Alliance of Museums; the Association of Science-Technology Centers; the Association of Children’s Museums; the American Association for State and Local History; and others will join us in acknowledging the connections between our institutions and the social justice issues highlighted by Ferguson and related events.

You can join us by…

  • Posting and sharing this statement on your organization’s website or social media
  • Contributing to and following the Twitter hashtag #museumsrespondtoFerguson, which is growing daily
  • Checking out ArtMuseumTeaching, which has a regularly updated resource, Teaching #Ferguson: Connecting with Resources
  • Sharing additional resources in the comments
  • Asking your professional organization to respond
  • Checking out the programs at The Missouri History Museum. It has held programs related to Ferguson since August and is planning more for 2015.
  • Looking at the website of the International Coalition of Sites of Conscience, which is developing information on how to conduct community conversations on race.

MCN 2014: Performative Participation and Diversity

This is the last of my wrap-up posts on MCN. I also storified my notes, so they don’t completely disappear into the ether of my Twitter feed.

Race, culture, and socio-economic class also loomed large for me at #MCN2014. Certainly, the wonderful Ignite helped move me towards that conversation. But, given my own professional labors in community engagement, outreach, and action, I was particularly receptive to these conversations. In the days before the Ferguson grand jury decision was announced, perhaps race was foremost in our consciousness. But, for me, the issue has been ever present. Museums receive funds from organizations that are eager to “impact the diversity of audiences.” Diversity is just one such coded term. (Community is another common one.) This phrase is a sort of catchphrase for something very specific. The actual meaning of diversity could be said to be a mix. An alien newly arrived on Earth might rationally state that diversity could include a mix of ages, genders, socio-economic classes, and races. But diversity in the museum context is more often a coded term for something specific. In many regions, this means African-American; in some, Latino. Generally, museums are attempting to bring in the poorest denizens of their region.

The challenge is that the impetus for such initiatives is altruistic. Certainly, there are major implied barriers in museums. Breaking these barriers is incredibly challenging. They are invisible to most average visitors and staff members. They are felt by those who do not feel welcome because of their background, education level, race, or ethnicity. While invisible, they are very real. Diversity initiatives, in their best forms, are about finding useful ways to create chinks in these barriers. Museums have certainly been guilty of paternalistically planning the best programs for an intended group. But, now, museums have started to do much better. Ideally, these initiatives are done in a shared manner, working with those in the target group.

Yet, we still find ourselves carefully employing words like diversity and community, knowing full well that we intend much more discrete meanings. As a field, we do need to have more honest terminology about race and ethnicity, power and authority. Now, given the state of race in America, museums are not alone in our inability to discuss race and class honestly. But, rather than trying to be just as good as the rest of the messed-up conversations, museums have an opportunity to do better. We are not schools. We are not politicians or government officials, mostly. We are in a liminal space. We have dinosaurs and sculptures and butterflies and beautiful paintings. We have the best of human innovation and the most magnificent aspects of the natural world in our halls. We house the universals of existence. In other words, we are universal, and so in the unique position to move the conversations about race and power forward. We can push past banal, tentative discussions about diversity and community and into a phase where we can honestly deal with race.

MCN Recap 2014: Open Authority/Shared Access

Open authority, shared authority, open access, shared access: this was another theme that seeped through many of the conversations at #MCN2014. People all over are now finding, and even demanding, transparency from organizations and even governments. If ISIS has annual reports, then shouldn’t museums? But in what ways can museums open up access while at the same time maintaining their core competency, collections interpreted in reputable ways?

Yet, what is the term for allowing others into our community of practice? In my mind, open access is the most reasonable term. Sharing is hard; my daughters, and their dolls, can attest to that. Sharing has the baggage of loss associated with it, mostly loss of power. The fear for many museums is that shared production of content would result in a devaluation of the core brand. Yet, many of our collections could actually profit from citizen interpreters. Think of how you might remember an amazing story about the coin fountain in your childhood museum, like when your friend waited until the guard’s back was turned to stand upon it. Or, more seriously, if objects of your faith are housed within a museum collection, your perspective might truly transform the way that the institution understands that collection.

Open access is a term that implies transparency, which in its own way might feel frighteningly honest. But openness doesn’t mean losing ground or power. From the point of view of an institution, open access might be the least frightening of these terms. It is about bringing your arcane knowledge into the open, but it doesn’t necessarily mean that you lose all your power. You are offering something but not everything.

MCN 2014 Recap: Nomenclature/Content

There were a number of fruitful conversations about nomenclature this year at MCN. I thought I would write down some thoughts so that I can remember them in the future. As a practitioner, I understand the desire to just get the work done rather than focus on the details of naming the work you are doing. But conferences offer a useful juncture to break with active practice and instead focus on fruitful reflection on said practice.

Content

One major topic was what the ideas being generated should be called. Content is a term that is field-agnostic, used by marketers and museums alike. But, similarly, the backlash to the term is equally shared across fields. Angst and straight-up hatred of the term interpretation was also expressed in conference hallways. There were the story folk, who felt that, in essence, like Homer, we are but spinners of tales. Idea-men also shared their thoughts about the correct term. For all of this discussion, I wonder if the issue remains the uneasy relationship between information and delivery. We are at a point where the revolutions in delivery methods are hot and heavy in our minds. Most museum professionals remember a time when you still needed a computer to access the internet. The aged amongst us remember the zzhhhding of the dial-up. (I wish a word had been coined for that sound.)

In many ways, we are still enthralled with the idea of mobile, and so it looms as large as the information being conveyed. I wonder if, in the first years of the printing press, people sat amazed that their books required not a single scribe’s labor. To me, the see-sawing screams that content is king, or that design is the point, are really manifestations of our residual excitement about the media. While we might cite the printed page, we mostly refer to the actual story.

In terms of content and interpretation, though, another underlying issue is the democratization of information and text creation. More and more museum professionals are responsible for offering visitors inroads into the collection, either through text or imagery. Videographers, digital media professionals, social media professionals, and educators all share the work of making collections accessible. This onus is one that these professionals welcome, but one that certain traditionalists might not quite want to part with. Therefore, finding the right word for our interpretative labors is a political act, one in which we are advocating for our value.
