
Formative vs. Summative

Do you have that sweet or salty conversation with people?  For your information, I am salty.  If you actually know me, this is not a surprise.  In reading about formative testing versus summative testing, I have been trying to understand when each is best.  Is this more a matter of personal preference on the part of the tester, or is it driven by the requirements of the client?

Formative testing invites users to talk through their choices.  It is useful for its low-tech implementation and effective for gaining quick insight.  But there is the challenge of having a moderator there to intercede.  In the end, it is cost-effective, particularly when you don’t have a working prototype.  Summative testing, on the other hand, is useful for seeing if what you have built actually works.  Additionally, if done remotely, there isn’t a moderator to intercede.

In terms of personal preference, I really like formative testing for its mix of qualitative and quantitative data.  But I also believe that it isn’t really a matter of personal preference.  It is not so much whether you are inherently sweet or salty, but rather where in the meal you, as a consultant, have shown up.  If you are invited at the beginning, you get to choose, and you might choose the one that you prefer.  But often you show up after the meal has been ordered and it is already being cooked.  You can taste the soup on the stove and offer suggestions for improvement, but you don’t get to say which ingredients shouldn’t go in the pot.  Or, more simply, you are often going to be invited to look at an interactive or a website that is already made, and so summative evaluation is the best choice for the client.

User Testing vs. Research

When I think of the term ivory tower, I have a very clear mental image.  A glistening white tower, rectilinear in its aspect, is poised atop a rocky outcropping on a lonely island.  The beach, an access point to the tower, has a pier on it.  Museums are like that beach.  They are in the same vicinity as the ivory tower.  They have the same zip code, if you will.  But they look drastically different, and their level of access is incredibly different.

Museums sit at an interstitial point between academia and so many other things: leisure spaces, K-12 classrooms, studio classes, edutainment.  In terms of understanding visitors and the types of digital interpretation that museums produce, two points are worth making.  First, museum studies and information science both offer fruitful research that can inform practice.  But, second, research and user testing are not the same thing.  Research is in-depth and large-scale, often predicated on big numbers in order to demonstrate statistical significance.  In museums, run by people with graduate degrees earned through rigorous research and rousing defenses, there is an important role for this type of visitor research.

But user testing is a different sort of animal.  It is something that can be done in a day, and it can employ as few as three people to demonstrate a trend.  In other words, you are not writing a full report to show to the board of trustees.  Testing helps you keep the digital project going and make sure you are on the right track.  User testing is a check and a balance rather than chapter and verse on your project.