Written on March 5, 2009 by Michael Lascarides
Our home-grown usability survey launched a couple of weeks ago and it’s spent most of the intervening time quietly gathering a ton of feedback from our users. Since there’s been a lot of interest around the app, I’d like to share some of the findings so far. First, some top-line stats:
Usability questions posted: 78
Total unique respondents: 2,410
Total questions answered: 26,067
Average questions answered per person: 10.8
Average responses per question: 334
Average responses per day: 1,184
Obscenities submitted: 2 (Incredibly, the two nasty words showed up within the first 10 minutes of launch. Thank you, New York! Even more incredibly, we haven’t had another since.)
The response has been terrific, with roughly a 2% click-through rate from the main site, and 80% of those visitors answering at least one question. In fact, feedback has been so plentiful that we have twice had to pull questions early because we had enough responses and needed to write new ones.
We’ve added a number of generic demographic questions to the mix (how old are you, where do you live, etc.) and the hope is that in future versions, we will be able to segment responses to one question based on the answers to another question. For example, we can test familiarity with certain terms in one question and segment out those responses by age (for any respondents who answered both questions).
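The segmentation we have in mind is essentially a cross-tabulation over respondents who answered both questions. A minimal sketch of the idea (the data layout, function names, and sample answers here are illustrative assumptions, not our survey’s actual schema):

```python
# Hypothetical sketch of cross-question segmentation. Each response is a
# (respondent_id, question_id, answer) tuple; the question ids and answer
# values below are made up for illustration.
from collections import defaultdict

responses = [
    (1, "age", "18-24"), (1, "term_familiar", "yes"),
    (2, "age", "18-24"), (2, "term_familiar", "no"),
    (3, "age", "45-54"), (3, "term_familiar", "yes"),
    (4, "age", "45-54"),  # never answered the follow-up question
]

def segment(responses, segment_q, target_q):
    """Group answers to target_q by each respondent's answer to segment_q.

    Only respondents who answered *both* questions are counted.
    """
    # Collect each person's answers into one dict.
    by_person = defaultdict(dict)
    for person, question, answer in responses:
        by_person[person][question] = answer

    # Cross-tabulate: segment answer -> target answer -> count.
    buckets = defaultdict(lambda: defaultdict(int))
    for answers in by_person.values():
        if segment_q in answers and target_q in answers:
            buckets[answers[segment_q]][answers[target_q]] += 1
    return {seg: dict(counts) for seg, counts in buckets.items()}

print(segment(responses, "age", "term_familiar"))
# -> {'18-24': {'yes': 1, 'no': 1}, '45-54': {'yes': 1}}
```

Note that respondent 4 drops out of the cross-tab entirely, which is exactly the “answered both questions” caveat above.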
Behind the scenes, there are definitely some improvements that need to be made. It’s becoming clear that a frequent pattern of use is to test the same question (“Where would you click to…?”) over screen-shots of several variant designs. Right now, one must enter the same question repeatedly to get these comparisons. Then once the screenshots are uploaded, it would be helpful to view the results for all variant designs for one question on a single page.
One improvement that has been made is a statistical grid overlay that includes absolute click-counts and percentage of total clicks on each segment of a 40×40 pixel grid. This became a necessity for legibility once the number of responses grew into the hundreds, and the clouds of individual click-marks condensed into unintelligible blurs. (We also added a button to hide all click marks and stats grids to just see what’s underneath!)
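The grid overlay boils down to binning raw click coordinates into 40×40-pixel cells and reporting each cell’s absolute count and its share of all clicks. A rough sketch of that aggregation (variable names and the sample clicks are assumptions for illustration):

```python
# Hedged sketch of the stats-grid aggregation: bin (x, y) click
# coordinates into 40x40-pixel cells, then compute per-cell counts and
# percentage of total clicks. Not the app's actual code.
from collections import Counter

CELL = 40  # grid cell size in pixels

def grid_stats(clicks):
    """clicks: iterable of (x, y) pixel coordinates on a screenshot.

    Returns {(col, row): (count, percent_of_total_clicks)}.
    """
    counts = Counter((x // CELL, y // CELL) for x, y in clicks)
    total = sum(counts.values())
    return {cell: (n, 100.0 * n / total) for cell, n in counts.items()}

clicks = [(5, 12), (38, 20), (41, 3), (200, 199)]
print(grid_stats(clicks))
# -> {(0, 0): (2, 50.0), (1, 0): (1, 25.0), (5, 4): (1, 25.0)}
```

Rendering per-cell totals instead of thousands of individual marks is what keeps the overlay legible once responses reach the hundreds.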
So enough about the app itself: what have we learned? Well, one big lesson is that our global navigation designs with fewer choices vastly outperform those with more. Our first draft of the new global navigation had eight elements, but user testing quickly found some gaps and ambiguities. For example, we tend to think of the NYPL Digital Gallery as a kind of special project, so we were testing navigation items named “NYPL Projects”, “Online Resources” or “Online Collections” as a destination page for DG and similar initiatives. But testing showed that users don’t think it’s that special; they think it’s just another way to find things. When presented with one of those choices, a significant number of users still clicked “Books and Materials” as the place to find the image collections. Removing the “special projects” link (whatever its name) led to a user consensus on “Books and Materials” as the go-to destination.
We’ve also been able to ask “little questions”, something that’s not practical in a more formal survey. For example, we’re playing with the idea of a “Community” section as a destination for social networking initiatives (blogs, our iTunes page, our Facebook group, user-generated content and more), but we’re struggling with what to name it. We liked “Community” as a label, but tests showed no one was clicking on it, so we quickly added a handful of follow-up questions to get to the bottom of what “community” means to our users, like whether people use the word “neighborhood” or “community” to talk about where they live (it’s 9-to-1 in favor of “neighborhood” in New York, by the way). The question “What would you MOST expect to see on a page labeled ‘Community’?”, coupled with a few set choices followed by an open text field, garnered the most free-form responses of any question so far. We haven’t cracked the Community code yet, but we’re getting closer. The ability to quickly get follow-up questions (even seemingly trivial ones) in front of users is proving to be incredibly valuable.