Many organizations are starting to collect usage statistics in an attempt to improve the usability or user experience of their software. One of the common frustrations when designing an application for a client or organization is that they have no idea of the typical screen resolution of their users, so in most instances you have to resort to a 'fluid/responsive' design that doesn't necessarily cater well to any particular screen size. Most of the statistics on display resolution come from browser data, which doesn't necessarily reflect the organizations that use your application. Therefore, I think it is quite important for software to collect usage statistics to help improve all aspects of the design.
However, when a screen prompt asks you to consent to anonymous user data collection, I think many people would probably not participate, even though collectively it may be in their best interest to do so. Are there any ideas or examples of how the anonymous usage statistics reporting rate can be improved, whether through the design of the interface or better wording?
The Steam/Valve Hardware and Software Survey mentioned in one of the answers is a very good example of making user data work hard to improve the service. Interestingly enough, I haven't seen many other examples around, so if you have seen something along these lines, please feel free to share it.
Answer
I think that this question is multi-faceted, and that there is no single answer to it. On a basic level, I think it's important to remember that the easiest and safest course of action for the user is to say "no". To get them to say "yes" (and I strongly believe that giving such information needs to be opt-in and not opt-out), there are many things that must be addressed.
In my experience, it is true that many people don't participate in the collection of anonymous usage data. I have worked on different products with different user bases. The levels of participation in each of those are proprietary information and thus I can't share them, but it's fair to say that many people don't participate. There are many different and valid reasons to choose not to participate:
1. I don't know what data is being gathered and what is being done with it.
2. I do not understand what the personal benefit is to me in allowing this information to be collected.
3. I am concerned that I will experience some kind of negative effect as a result of participating in such a program, such as reduced download speeds when the program uploads my data.
4. If I'm using a corporate device or network, my employer might not allow (or might actively block) such data from being shared. Alternately, I might not be sure whether my employer allows it, and I don't see a very good reason to research an answer to this question.
5. I have to decide whether I think that the company/project/whatever that is gathering the information is trustworthy.
Problems 1-3 are the easiest to address, and are the ones that are most amenable to being addressed in an individual application. The fourth problem is somewhat more difficult to address, but is probably addressable with some hard work and creativity. The fifth problem is likely the most difficult to address, and is probably outside the scope of the work that a designer does on an individual application.
If you are going to deploy an application that collects anonymous usage data, then you need to understand which of these items are the most impactful for your users. For example, if you were working on enterprise software, companies will want to ensure that none of their proprietary information is shared inadvertently. One method to address this concern might be to work with an individual customer to gather this data for a set period of time, and to give them the opportunity to review the data that is gathered before it is shared with you. Repeating this process with multiple customers will give you a breadth of data.
If you already have an application that collects anonymous usage data and you feel that you can improve the rate of participation, then you need to determine what is keeping your numbers low and how to address that underlying issue. Also, since this is a setting that is often chosen when the user first installs or first runs the application, you'll need to determine whether you should ask users to reconsider their initial decision, and when and how you will accomplish this.
If you already have usage data, there are several things that you can do to determine how representative your data is. For example, you can compare your sales figures to your dataset to determine if a given country is overrepresented in your usage data. To take another example, if you are collecting data about the hardware that your users are using, you could cross-reference it with data from another research method (survey, interviews, etc) to see if there is a discrepancy. There are certainly valid concerns about ensuring that the data that you collect is representative, especially if you are going to make decisions based on it.
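As a rough illustration of the representativeness check described above, the sketch below compares each country's share of sales against its share of opt-in telemetry. All country names and counts here are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: compare each country's share of sales data with its
# share of opt-in telemetry data, to spot over- or under-representation.
# All figures below are invented for illustration only.

sales_by_country = {"US": 5000, "DE": 2000, "JP": 1500, "BR": 1500}
telemetry_by_country = {"US": 900, "DE": 450, "JP": 100, "BR": 50}

def shares(counts):
    """Convert raw counts to proportions of the total."""
    total = sum(counts.values())
    return {key: value / total for key, value in counts.items()}

sales_share = shares(sales_by_country)
telemetry_share = shares(telemetry_by_country)

for country in sales_share:
    # Ratio > 1 means the country is overrepresented in telemetry;
    # ratio < 1 means it is underrepresented relative to sales.
    ratio = telemetry_share.get(country, 0) / sales_share[country]
    print(f"{country}: telemetry/sales representation ratio = {ratio:.2f}")
```

In this made-up dataset, JP and BR would show ratios well below 1, suggesting their users opt in less often and that per-country conclusions drawn from the telemetry should be treated with caution.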
While this is somewhat outside the scope of the original question, I feel that it's important to note that one must be careful in the decisions that are made based on anonymous usage data or statistical information. Presuming that the data is representative, all that you know is that something is happening (how frequently a given feature is being used, what the average screen resolution is, etc). You don't know why this is happening. Knowing the underlying reason is often more important than the raw usage or statistical information. A feature not being frequently used could be due to many different factors, including discoverability, usability, poor performance (real or perceived), invalid results, or that it's a feature that simply isn't needed frequently. If you learn via usage data that a feature isn't frequently used, then you need to understand why it isn't being frequently used.