Wednesday, August 10, 2016

How can I structure a usability study to produce findings that relate to the long-term use of a tool by an expert user?


I am building a tool that will be used in a call center. The users range from part-time temps to 5-year veterans, and I have personas that reflect this range.


I have two (quite different) prototypes of a completely new tool. I want to run usability tests on them to determine which of the two designs to actually build.


Given that this tool will be used all day, every day by these users, the question I would really like this test to answer is: 'Which design will work better once you have used this interface 1,000 times?'


Are there any techniques I can use to help answer this?



Answer



I don't know of a technique that will help you test designs for high-frequency usage without a serious investment in user testing. Assuming you don't have the time or budget for that, there are some coping strategies worth considering.


The first is to design for what Alan Cooper calls perpetual intermediates. These users are not full-on experts, but they are well beyond the beginner stage and proficient enough with the software to get their work done. By making them the people you primarily design for, you're likely to satisfy the needs of most users in your target audience.


The second is to evaluate early designs with an eye toward getting new people up and running easily, while getting those training wheels out of the way quickly as familiarity grows. Contextual help tips, coupled with a single switch to turn all tips on or off, go a long way here. Short training screencasts can also help new users adapt to an interface optimized for regular use.
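
A minimal sketch of what that toggle might look like, assuming a browser-based tool; the storage key, element ID, and data attribute are hypothetical:

    // Persist a single "show help tips" preference, assuming tips are
    // rendered as native tooltips via a data-help-tip attribute.
    const HELP_PREF_KEY = "showHelpTips";

    function helpTipsEnabled(): boolean {
      // Default to on, so brand-new users see the tips;
      // veterans can switch them off once.
      return localStorage.getItem(HELP_PREF_KEY) !== "off";
    }

    function setHelpTipsEnabled(enabled: boolean): void {
      localStorage.setItem(HELP_PREF_KEY, enabled ? "on" : "off");
      // Show or hide every tip in one pass.
      document.querySelectorAll<HTMLElement>("[data-help-tip]").forEach(el => {
        el.title = enabled ? (el.dataset.helpTip ?? "") : "";
      });
    }

    // One global checkbox controls all tips.
    document.querySelector<HTMLInputElement>("#help-toggle")
      ?.addEventListener("change", e => {
        setHelpTipsEnabled((e.target as HTMLInputElement).checked);
      });

The key design point is that the preference is per user and reversible, so the same build serves both the temp in week one and the five-year veteran.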



If possible, bake some analytics into the interface. See if you can get reports on rates of validation errors, triggering of help elements (such as a hover tip or other in-place help you provide), and anything else that might clue you in to usability issues as production use ramps up.
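
A minimal sketch of that kind of instrumentation, assuming a web front end and a hypothetical /api/usage-events endpoint on your own backend:

    // Record lightweight usage events and flush them in batches so the
    // instrumentation never slows down the agent's actual work.
    type UsageEvent = {
      kind: "validation-error" | "help-opened" | "task-completed";
      screen: string;    // which screen or form the event came from
      detail?: string;   // e.g. the field that failed validation
      timestamp: number;
    };

    const buffer: UsageEvent[] = [];

    function track(kind: UsageEvent["kind"], screen: string, detail?: string): void {
      buffer.push({ kind, screen, detail, timestamp: Date.now() });
    }

    // Flush every 30 seconds; sendBeacon fires without blocking the UI.
    setInterval(() => {
      if (buffer.length === 0) return;
      const batch = buffer.splice(0, buffer.length);
      navigator.sendBeacon("/api/usage-events", JSON.stringify(batch));
    }, 30_000);

    // Example call sites (screen and field names are illustrative):
    // track("validation-error", "case-form", "phone-number");
    // track("help-opened", "case-form", "disposition-codes");

Even a handful of event types like these will show you where validation errors cluster and which help tips veterans still need after months of use.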


Finally, try to schedule follow-up feedback and testing with small groups of users at, say, one, three, and six months into the app's life in production. Get what you can, whether it's something more removed like surveys or as direct as one-on-one interviews; if you plan for and request this early, you stand a better chance of not only getting results, but also of getting the budget to make fixes based on that evidence.


Also see if you can get some observation sessions in early on (starting in week 1 or 2); these will likely tell you a lot about where people are having trouble, and they are easier to sell to management because you won't be taking anyone away from their work. If you can't observe in person, see if you can send a delegate observer or watch via remote screen sharing, and record those sessions for later review.


There is a spectrum of techniques available to you, but few will accurately predict what people will find efficient, or what could be better, after 1,000 uses. That is something only real-world use can tell you for sure, and the trick will be to find the measuring techniques, at that point, that work for your circumstances.

