I have been looking to develop some metrics for a UX project (which was also a question I posted previously), and have been reading about the Kano model and how it maps to different user behaviour/expectations. I was wondering whether it is practical to dissect the functions/features of a product based on user perception of the presence and absence of features (ref here) and tally up to see whether more features generated excited experiences rather than merely expected or dissatisfying ones.
There have been proposals of frameworks for evaluating user experience (e.g. Google's HEART framework), but these models do not provide actual metrics or a mapping to the components or categories that make up the whole user experience we want to capture. I have found that the Kano model is one starting point for mapping product functions and features to particular aspects of the user experience, and would like to look for other ways to start filling in the rest of the gaps.
The question is: can the features of a software product or service be classified into one of the three categories and weighted to provide a value/metric for how users feel about a product? And is it practical to track and monitor this over the life-cycle of the product to provide one aspect of the user experience?
Answer
Short version:
Yes, you can use Kano Analysis to arrive at a metric for how users feel about a product. Actually, two metrics, but possibly not exactly what you want.
Yes, it is practical to re-test and monitor this metric over the life-cycle of the product. Actually, you should.
The long version... get a cuppa, settle into a comfy chair...
Derive a metric from Kano Analysis?
The question is: can the features of a software product or service be classified into one of the three categories and weighted to provide a value/metric for how users feel about a product?
You could; however, it would be a coarse metric, and it would more properly describe how users feel about the potential of the product.
And it really should be two metrics.
The Kano model is a variation on Herzberg's Two Factor theory of satisfaction, which states that there are certain factors that cause satisfaction while a separate set of factors cause dissatisfaction. That is, satisfaction and dissatisfaction act independently of each other.
To quote Jesse James Garrett: "Reduction of misery does not equate to increase of joy".
So, the two metrics you want to extract are the overall impact on satisfaction and the overall impact on dissatisfaction.
If you were to try to distill this down to just one metric, you should give at least a 3x weighting to all the features that appear to be baseline (aka basic, must-be, threshold, etc.). See this TechCrunch article for references to research justifying the 3-5x weighting to give to negative things.
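To make that concrete, here's a minimal sketch of computing the two metrics plus a single weighted roll-up. The feature names, proportions, and the exact combination formula are invented for illustration; only the idea of a ~3x weighting on baseline features comes from the research mentioned above.

```python
# Hypothetical per-feature tallies from a Kano survey:
# (satisfaction potential, dissatisfaction potential, is it a baseline feature?)
features = {
    "search":     (0.30, 0.85, True),
    "dark_mode":  (0.60, 0.10, False),
    "export_csv": (0.45, 0.40, False),
}

# Metric 1: overall potential impact on satisfaction.
satisfaction = sum(s for s, _, _ in features.values()) / len(features)

# Metric 2: overall potential impact on dissatisfaction.
dissatisfaction = sum(d for _, d, _ in features.values()) / len(features)

# If you must distill to one number, weight the downside of baseline
# features ~3x, reflecting the research that negatives loom larger.
BASELINE_WEIGHT = 3.0
single = sum(
    s - (BASELINE_WEIGHT if baseline else 1.0) * d
    for s, d, baseline in features.values()
) / len(features)

print(f"satisfaction potential:    {satisfaction:.2f}")
print(f"dissatisfaction potential: {dissatisfaction:.2f}")
print(f"single weighted metric:    {single:.2f}")
```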
Also, not all baseline features have the same degree of impact, and different features matter to different numbers of users. Kano analysis can easily help with the latter, but the former is only implied in the method.
You could ask an extra question on each feature to evaluate relative impact, or incorporate Choice-Based Conjoint analysis. It's not as important as you might think, as nett importance would be determined by multiplying the relative importance by the proportion of users who feel that way. If you have 10% who think feature X is really, really, really nifty... that's probably not as important as 80% who think feature Y is kinda, yeah, mostly nifty.
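Here's that nett-importance arithmetic as a toy calculation (the 1-5 "niftiness" scale is an invented stand-in for whatever relative-impact question you'd actually ask):

```python
# nett importance = relative importance x proportion of users who feel that way
def nett_importance(relative_importance, proportion_of_users):
    return relative_importance * proportion_of_users

feature_x = nett_importance(5, 0.10)  # 10% think it's really, really, really nifty
feature_y = nett_importance(3, 0.80)  # 80% think it's kinda, yeah, mostly nifty

print(feature_x)  # 0.5
print(feature_y)  # 2.4 -- the "kinda nifty" feature wins on nett importance
```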
It's important to note, though, that the metrics from Kano Analysis only reflect the potential for satisfaction and dissatisfaction. The model is built on the idea that the user's response to a feature varies according to the degree of execution of the feature, with the three main Kano classifications describing the shape of the response curve.
So, the classifications and thus the metrics you get from Kano analysis won't tell you how good your product actually is (you need to use other survey methods to get that). However, it will tell you if there's more you can do, and also help you focus your research efforts appropriately.
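For context, the answer doesn't spell out the survey mechanics, but the commonly published Kano method asks a functional question per feature ("How would you feel if the feature were present?") and a dysfunctional one ("...if it were absent?"), then maps each answer pair through an evaluation table. A sketch of that standard mapping:

```python
# The standard Kano evaluation table, as published in the Kano literature.
# Answer scale for both questions:
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]

# Rows: answer to the functional question ("feature present").
# Columns (in ANSWERS order): answer to the dysfunctional question ("feature absent").
# A=Attractive (Delighter), O=One-dimensional (Performer), M=Must-be (Baseline),
# I=Indifferent, R=Reverse, Q=Questionable (contradictory answers).
TABLE = {
    "like":      ["Q", "A", "A", "A", "O"],
    "must-be":   ["R", "I", "I", "I", "M"],
    "neutral":   ["R", "I", "I", "I", "M"],
    "live-with": ["R", "I", "I", "I", "M"],
    "dislike":   ["R", "R", "R", "R", "Q"],
}

def classify(functional: str, dysfunctional: str) -> str:
    """Classify one respondent's answer pair for one feature."""
    return TABLE[functional][ANSWERS.index(dysfunctional)]

print(classify("like", "dislike"))     # "O": wants it, hates its absence
print(classify("neutral", "dislike"))  # "M": takes it for granted, absence hurts
```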
Track and monitor over time?
And is it practical to track and monitor this over the life-cycle of the product to provide one aspect of the user experience?
Yes. You would gather the quantitative data for Kano modelling by surveying a random sample of your users, and you could do so periodically and thus monitor it that way (that's the beauty of statistical sampling). Just don't keep surveying the exact same sample each time.
If the length of the survey worries you, you can also survey different random samples of users, each with a short subset of features. The total number of survey participants goes up, of course, but the effort requested of each individual stays low. This works with Kano Analysis since each feature is evaluated independently of the others.
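A quick sketch of that split-sampling idea (feature names, counts, and group sizes are all invented for illustration):

```python
import random

features = [f"feature_{i}" for i in range(1, 42)]   # a long feature list
all_users = [f"user_{i}" for i in range(1, 5001)]   # your user base

# Break the feature list into short surveys of 10 features each.
FEATURES_PER_SURVEY = 10
random.shuffle(features)
subsets = [features[i:i + FEATURES_PER_SURVEY]
           for i in range(0, len(features), FEATURES_PER_SURVEY)]

# Draw a fresh random sample of users for each subset -- more participants
# overall, but each one answers a short survey.
for subset in subsets:
    sample = random.sample(all_users, 45)
    print(f"{len(sample)} users asked about {len(subset)} features: {subset[:3]}...")
```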
You could also refine the set of features you survey over time. If for example you find that 99% of respondents consider Feature X to be of no importance then simply stop asking about it. Also, if your team has come up with some genius idea for a new feature you can throw it in the mix and see it mapped against the other features. If it's a stinker, best to find out early and drop it before expending any further resources on it.
Importantly, you should monitor the Kano classifications of your features over time. Part of the Kano model discusses the fact that feature classifications drift over time, starting out as Delighters, then becoming Performance features, before settling down as Baseline features.
You want to know when a Delighter becomes a Performer because, according to the model, that feature will not only contribute to satisfaction but insufficient implementation will now contribute to dissatisfaction. Now it's important to improve the feature, and not simply rely on mere inclusion to drive overall satisfaction. (Remember when we were all excited about our phones having a camera built in, despite how crappy they were? Cameras in phones are no longer a delighter; they need to be of quality.)
Similarly, when a feature fades from being a Performer and becomes a Baseline you want to reposition the feature - it's no longer something that should be trumpeted and it shouldn't sing and dance... it should simply do what's required and then get out of the damn way. No fancy gift-wrapping.
You also want to look into whether particular large changes in the market landscape are impacting the importance of various features - a given feature might suddenly accelerate its drift from one classification to another, or even simply just jump right on over to the next classification. Look to geo-political shenanigans for example situations.
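If you wanted to automate the drift-watching, one minimal approach is to tally the classification shares per feature for each survey wave and watch the dominant class change. All numbers below are invented, just to illustrate the camera-phone drift described above:

```python
# Per-wave tallies: share of respondents in each Kano class per feature.
# A=Attractive, O=One-dimensional, M=Must-be, I=Indifferent.
waves = {
    "2022-Q1": {"built_in_camera": {"A": 0.55, "O": 0.25, "M": 0.05, "I": 0.15}},
    "2023-Q1": {"built_in_camera": {"A": 0.20, "O": 0.50, "M": 0.15, "I": 0.15}},
    "2024-Q1": {"built_in_camera": {"A": 0.05, "O": 0.30, "M": 0.55, "I": 0.10}},
}

for wave, tallies in waves.items():
    for feature, shares in tallies.items():
        dominant = max(shares, key=shares.get)
        print(f"{wave}: {feature} -> {dominant} {shares}")
# Output shows the Delighter (A) -> Performer (O) -> Baseline (M) drift.
```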
Let me wrap up by describing a couple of examples of how I've used Kano Analysis.
Case study 1:
We were building a very complex online application, and there was the usual overly-long list of feature requirements. We needed a way to get a handle on which features were absolute must-haves, and which we could delay for later.
So, we worked up a list of 41 features and then used a panel service to recruit 45 participants for a Kano survey, offering a sizable incentive as compensation.
We crunched the numbers, and this was the result:
The horizontal axis is the proportion of participants that would negatively react to an insufficient implementation of each feature, while the vertical axis is the proportion of participants that would be positively affected by the feature.
The grey dots are features that most respondents didn't consider important for either satisfaction or dissatisfaction. That list of features got dumped into the backlog real quick. Some were borderline, and we kept an eye on them for later.
The blue dots are the Baseline features. For these, we simplified the design to focus on just the core functionality of each. What would it take to ensure the users would not hate us?
The green dots are the Performer features. Those we knew we'd need to pay further attention to. They would have to meet the basic expectations of the users, and from there any further refinement would provide a satisfaction payoff.
The yellow dots are the features most respondents considered Delighters. For those, we simplified and scaled back the designs. We also cherry-picked the ones that would be drop-dead simple to do and that were real differentiators, knowing that most users wouldn't even notice the other Delighters were missing (by definition). The standard we had to reach was: what would it take to get the attention of the users?
We didn't distill all that data down to a single metric as we were already getting a lot of value out of the map as is. We could have, if we wanted a very high-level strategic metric (similar to how NPS is a high-level metric).
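For the curious: the answer doesn't give its exact plotting formula, but coordinates like those described above can be derived from per-feature classification tallies, much like the widely published satisfaction/dissatisfaction coefficients (Berger et al.). Treat this as an approximation, not the author's actual method:

```python
def chart_position(tally):
    """tally: counts of A (Attractive), O (One-dimensional), M (Must-be),
    and I (Indifferent) responses for one feature."""
    relevant = tally["A"] + tally["O"] + tally["M"] + tally["I"]
    x = (tally["O"] + tally["M"]) / relevant  # would react badly to poor execution
    y = (tally["A"] + tally["O"]) / relevant  # would be positively affected
    return x, y

print(chart_position({"A": 30, "O": 8, "M": 2, "I": 5}))  # high y, low x: Delighter
print(chart_position({"A": 3, "O": 7, "M": 30, "I": 5}))  # high x, low y: Baseline
```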
Case study 2:
This was a smaller and shorter exercise. We had conducted a series of stakeholder workshops and user workshops, and from those we developed a list of ideas for features for a new website. As you know, small samples combined with group dynamics meant we simply couldn't rely on the enthusiasm of the workshop participants to define the importance of each feature, and so we ran a Kano Survey for validation.
This time we used ethn.io to do the recruiting for us. We installed the JavaScript onto the client site and in less than 3 days we had 73 qualified participants.
This is the graph of the analysis:
See that grey dot down in the bottom left? That was a feature everyone in the workshop thought would be pretty nifty. Turns out computer says no.
The position of a few other features also proved to be food for thought.