Applying the Kano Model to Improve UX
In the 1980s, Noriaki Kano--a professor and consultant in the field of quality management--observed that customer satisfaction is determined both by how well a service performs and by whether it meets customers' expectations of what it should be. We can suss out these expectations and plan features that satisfy them (this know-your-users paradigm is central to user-experience design). However, features play off one another, and one that's poorly implemented negates the benefits of the others.
It's a nuance with real impact on the success of new products or services libraries introduce. Whether it's a new resource or a makerspace around the corner, a carousel on the homepage or a discovery layer, a gallery on the second floor or new group study rooms on the third, good user experiences correlate not to one standout feature, but to the blending of many. All ships rise or sink with the tide.
The Kano model is a tool for visualizing the relationship among features in a meaningful way. We use input about features through a simple user study to model what Jan Moorman calls an "emotional response 'profile'" (uxmag.com/articles/ leveraging-the-kano-model-for-optimal-results). When applied to an existing service to align its performance--such as usage for an electronic resource--with the impact that feature satisfaction has on its success, the model can be used as a targeted road map for pushing the meter into the green. But as part of upfront generative research, the model is a powerful predictive tool that shapes the creation of the service.
In this article, I will introduce the Kano model as a tool you can use to improve your results. Let's start with an empty graph (see Figure 1).
[FIGURE 1 OMITTED]
Satisfaction and Investment
It's easiest to imagine this as a 2D grid in which customer satisfaction is measured from bottom to top. At rock bottom is frustration, raw and roiling. The middle ground represents some level of indifference, which swells into satisfaction, then transcends into delight.
We measure our investment (time, effort, or resources) from left to right. Our costs increase as we move further into the service's lifecycle: as time goes on, improving its quality requires more. Each newly implemented feature takes a cumulative toll, and just as much as the Kano model visualizes our patrons' response, we use it to determine the order in which features are added.
The impact of a feature's performance is measured along the diagonal (see Figure 2). As you surf the web, you'll see this called either "performance" or "performance payoff." However, I think "performance impact" better describes this metric. Consider this: When you add a feature to something, you could have implemented it either poorly or well. In making new signage in your library, your signs could have been professionally printed or written in crayon. Sometimes, this matters.
[FIGURE 2 OMITTED]
At the point where our feature appears on the graph, the performance diagonal describes how much its implementation matters. The feature that costs a lot and has the potential for generating a ton of excitement can hurt the library more, if we flub our approach.
Emotional Response Types
In a 900-person study, Kano substantiated that customers responded to features in five different ways. These, as our model takes shape, can be largely represented as curves across the quadrants we see emerge from behind the axes.
There is a set of basic expectations for each service, and these don't form in a vacuum. Instead, they are influenced by things our patrons use throughout their day. The bar in markets in which there is a lot of competition--such as video streaming (e.g., Netflix and Amazon Prime)--is much higher. Must-have features are those that determine the "completeness" of a service.
My favorite analogy is about word processing by way of Jared M. Spool (articles.uie.com/kano_model). Bear with me.
Let's pretend Springshare (LibGuides, etc.) boldly enters the word processing game with a hot, new library-specific writing tool: LibWord. Library staffers are thusly empowered to bold text and change fonts. Paired with the Oculus Rift, LibWord becomes the best-in-show writing tool, as it allows folks to traverse the written word in virtual reality. Your poetry never looked so cool.
But what if the company forgot to add a "save" button? The ability for writers to save their progress is a must-have feature. Its inclusion doesn't garner Springshare any kudos. People don't really care that they can save their work--until they can't. The failure to include (or include well) a must-have feature is immediately dissatisfying and negates the positive impact of other features.
Consider the library's reference service, Ask a Librarian. While it has largely moved onto the web, it's predicated on decades of reference services provided over the phone. We might find that not including support in this way creates a lot of frustration, such as when a patron calls from the busy road--"Hey Siri, call the library"--but can't get through.
On our model, the curve of these basic expectations rarely rises above neutral satisfaction, so embellishing on must-have features has low ROI (see Figure 3). There aren't many ways to jazz up a phone call or improve on the save button.
[FIGURE 3 OMITTED]
On the flip side, attractive features are what set services apart and generate buzz and satisfaction. They're your service's "oh, neat" moments. And since no one really expects them to be there--and maybe patrons can't even imagine one until you introduce it--there is no fallout for not including an attractive feature.
Attractive features that also happen to require minimal investment are excitement generators. It's what they do. They elevate a service from one that merely works to one that's delightful; they're opportunities to hook patrons and inspire word-of-mouth endorsements (see Figure 4 on page 30).
The part there about excitement generators being--at least, at first--low-investment is really important.
[FIGURE 4 OMITTED]
The Kano model isn't a snapshot; it's a mirror. It reflects the current condition. There was a time when offering online chat as a library service was an attractive feature. It was as new as it was useful. Not only did other libraries implement the same feature, but it became commonplace throughout all markets and became expected. Now, reference services that fail to include some form of chat arguably come off as incomplete.
Over time, attractive features become basic expectations (see Figure 5).
[FIGURE 5 OMITTED]
Excitement generators are important because the opportunity they present is long-term. As their impact on satisfaction diminishes, we must put more work into them to maintain their attractiveness. And once they migrate into basic expectations, libraries just don't remove them--they support them forever. High-cost attractive features are already near the end of their lives. They are stars preparing to go nova.
A one-dimensional (1D) feature tracks along the performance diagonal: similar to basic expectations, not including it (or including it poorly) generates dissatisfaction. Unlike basic expectations, the better it is implemented, the more satisfaction it generates (see Figure 6).
I think chat is probably here, if I am honest. There are still opportunities to boost chat into attraction in terms of its aesthetic and ease of use.
[FIGURE 6 OMITTED]
Patrons don't care whether indifferent or unimportant features are there. They never will (see Figure 7 on page 31).
Undesirable features drag down the positive impact of attractive and 1D features. Including them is to the detriment of customer satisfaction overall (see Figure 8 on page 31). These features are rarely so far out of left field that they cause a steep drop-off on the graph; in my experience, they sit on a shallow downward slope.
This may be a chat service's proactive feature: on any page of the library website, a little notification slides out from the bottom of the screen after a few seconds to ask, "Can we help you?" Undesirable features can often be gleaned through user research; if a library conducts usability tests, it might see that a feature thought to be helpful actually distracted users from their tasks and, what's more, annoyed them.
[FIGURE 7 OMITTED]
[FIGURE 8 OMITTED]
How to Measure Responses
Kano created a reliable and straightforward method for measuring users' reactions to the presence of a feature. There is some footwork involved for library staffers who are tasked with administering the survey, but the ease of the methodology makes this model convenient.
1. Introduce your service to your user one feature at a time.
2. Ask two questions:
* How would you feel if this feature were present?
* If it weren't, how would you feel?
The response to each feature is sussed out through this positive/negative question pair, which we use not to rate features on a scale, but to visualize a sense of expectation. How we introduce the feature in our question matters. While it's necessary to demonstrate or otherwise explain it in as much detail as needed, cramming too much into the question can be confusing. We will ask these questions--for every feature we're thinking about implementing--to about 20 to 30 users.
They can respond in one of five ways:
1. I like it.
2. I expect it.
3. I am neutral.
4. I can tolerate it.
5. I dislike it.
We then use these reactions to identify where our feature fits within the response categories previously mentioned and use Table 1 to see where question-and-answer pairs intersect.
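That intersection lookup can be sketched as a small table-driven function. This is an illustrative sketch, not code from the article: the names `RESPONSES`, `TABLE`, and `classify` are my own, and `None` stands in for the contradictory answer pairs Table 1 leaves blank.

```python
# Kano evaluation-table lookup: classify a feature from one respondent's
# answers to the positive ("feature present") and negative ("feature absent")
# questions. Category names follow the article's Table 1.

RESPONSES = ["like", "expect", "neutral", "tolerate", "dislike"]

# Rows: positive-question answer; columns: negative-question answer
# (in RESPONSES order). None marks contradictory answer pairs.
TABLE = {
    "like":     [None,        "attractive",  "attractive",  "attractive",  "one-dimensional"],
    "expect":   ["undesired", "unimportant", "unimportant", "unimportant", "must-have"],
    "neutral":  ["undesired", "unimportant", "unimportant", "unimportant", "must-have"],
    "tolerate": ["undesired", "unimportant", "unimportant", "unimportant", "must-have"],
    "dislike":  ["undesired", "undesired",   "undesired",   "undesired",   None],
}

def classify(positive: str, negative: str):
    """Return the Kano category for one positive/negative answer pair."""
    return TABLE[positive][RESPONSES.index(negative)]
```

For instance, `classify("tolerate", "neutral")` returns `"unimportant"`, matching the proactive-chat example that follows.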
Let's pretend we asked users about the proactive chat feature of our Ask a Librarian service. Remember, this is the feature that determines that our patron has been on a page of the website for a certain amount of time; a notification slides in from offscreen to ask whether he or she needs help. Here's how the majority of our respondents answered:
1. How would you feel if this feature were present? "I can tolerate it."
2. If it weren't, how would you feel? "I am neutral."
When I shade this in, the table suggests that a feature that is tolerated when present and neutral when absent is an unimportant feature.
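Across the 20 to 30 respondents, it's the modal answer pair that gets looked up in the evaluation table. A minimal tally might look like this; the responses shown are invented for illustration:

```python
from collections import Counter

# Aggregate one feature's survey responses: each respondent contributes a
# (positive-question answer, negative-question answer) pair, and the most
# common pair is what we look up in the evaluation table.

responses = [
    ("tolerate", "neutral"),  # "I can tolerate it" / "I am neutral"
    ("tolerate", "neutral"),
    ("neutral",  "neutral"),
    ("like",     "neutral"),
    ("tolerate", "neutral"),
]

modal_pair, count = Counter(responses).most_common(1)[0]
# modal_pair == ("tolerate", "neutral"); count == 3
```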
In 2013, Kevin Smith made a neat addition to the evaluation table (blog.teamtreehouse.com/the-kano-model); in his write-up about the Kano model for Treehouse, he color-coded it (see Table 2 on page 32). The interpretation's the same, but in my experience, this is a useful trick for making a somewhat stoic text-heavy table more visually impactful.
We mute indifferent responses and highlight the rest with a sort of traffic-light spectrum from green to red: do and don't do. Basic expectations--must-have features--need to be met, so they are the most vibrant green. I colored the 1D features orange, because their implementation requires care; poor design or flawed functionality might tip these into the red.
At this point, features don't need to be mapped on the 2D grid; however, it can be a useful visual (see Figure 9).
[FIGURE 9 OMITTED]
In this example--which doesn't represent real data--I took some of the features of a remote/virtual reference service (Ask a Librarian) and placed them in relation to their impact on satisfaction, as well as their estimated cost or difficulty to implement. The fact that the service is offered over the phone, via email, or through a contact form doesn't garner much esteem--but these are basic expectations with relatively low investment.
Libraries tend to offer these services through text too, which carries a literal cost to both the user and the organization, but it has almost no impact on customer satisfaction. It lands on our table as an indifferent feature overall. And while there may be fans, the average response suggests that the library is wasting time, effort, and money to support it.
Proactive chat is a 1D feature that, if implemented well, could both drive engagement and allow for an element of delight. Imagine a patron thinking, "How did it know that I needed help?! Wow! Just in time." However, there is a technical challenge to pulling this off without a hitch, and in most cases--and maybe you can relate--proactive chat tends to be annoying or distracting.
Your results are a clear indicator of either the priority in which you build the service or where you start in fixing an existing one:
1. Implement all must-have features.
2. Avoid or reduce all undesirable features.
3. Add excitement generators (low-investment attractive features).
4. Implement 1D features with care.
5. Create attractive features that make sense and that you can afford.
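Assuming each feature has already been classified, that priority order can be expressed as a simple sort. The feature list, category labels, and `PRIORITY` weights here are my own illustration, not data from the article:

```python
# Order classified features by the article's five-step priority:
# must-haves first, then flagging undesirables (so they can be avoided or
# reduced), then low-investment attractive features ("excitement
# generators"), then one-dimensional features, then remaining attractive
# features. Unimportant features sink to the bottom.

PRIORITY = {
    "must-have": 0,
    "undesired": 1,            # surface early so they can be removed/avoided
    "attractive-low-cost": 2,  # excitement generators
    "one-dimensional": 3,
    "attractive": 4,
    "unimportant": 5,          # skip: patrons don't care either way
}

features = [
    ("proactive chat", "one-dimensional"),
    ("save button", "must-have"),
    ("VR mode", "attractive"),
    ("text/SMS support", "unimportant"),
]

roadmap = sorted(features, key=lambda f: PRIORITY[f[1]])
# roadmap: save button, proactive chat, VR mode, text/SMS support
```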
In the event there are a lot of "ties," or there are too many must-have features or opportunities to implement within the allotted time and budget, the questionnaire can be paired with additional methods--such as a Likert scale ranging from "not at all important" through "important"--to weigh one otherwise equal feature against another.
It can also be useful to value a certain type of user ("persona") first. A basic prioritization--say, early adopters who have definitely been asking for this service over non-adopters--makes sense. This is a marketing strategy: By appealing to early adopters during the first version of the service, it will create the word of mouth needed to hook later adopters, which will sustain usage over time. It's beyond the scope of this write-up, but it's something for the utility belt.
The Kano model offers library administration a smart decision-making framework for allocating already limited budget and talent for the biggest impact. Especially in a climate in which libraries are experimenting with ways to increase usage and appeal to their patrons, it's a useful tool for avoiding duds--which are prevalent--and overpriced vendor services--which are also prevalent. Notably, the model requires little training and no real budget, and it can be easily coupled with other methods for improving the user experience. In the end, it's the user experience (UX) that matters. As we say, good UX is good business.
Michael Schofield (firstname.lastname@example.org) is a librarian and developer who evangelizes the practical application of user experience design to push libraries forward. Schofield has published several articles and speaks often in this field. In 2014, he co-founded LibUX, which is a resource for just this sort of thing. In 2015, Schofield won an ACRL Instruction Section Innovation award for his work on Library Learn, which is Nova Southeastern University Libraries' instructional video platform.
Table 1: The Kano evaluation table. Rows are answers to the positive question ("If a feature were present, how would you feel?"); columns are answers to the negative question ("If a feature weren't present, how would you feel?"). Cells marked "--" are contradictory answer pairs.

                  Like        Expect       Neutral      Tolerate     Dislike
    Like          --          Attractive   Attractive   Attractive   One-dimensional
    Expect        Undesired   Unimportant  Unimportant  Unimportant  Must-have
    Neutral       Undesired   Unimportant  Unimportant  Unimportant  Must-have
    Tolerate      Undesired   Unimportant  Unimportant  Unimportant  Must-have
    Dislike       Undesired   Undesired    Undesired    Undesired    --

Table 2 is the same evaluation table with Kevin Smith's color coding applied; the cell values are identical to Table 1.
Title annotation: user experience
Publication: Computers in Libraries
Date: Oct. 1, 2016