Thursday, November 5, 2015

user behavior - Should personal data be asked at the beginning or end of an online survey?


Given a 10-step online survey (mostly yes/no radio buttons), where is it best to ask for the users' personal data, e.g. name, age, contact details?


My intuition says putting it at the end will yield better completion rates, as users have already invested time in filling out the "easier" and less personally involved part.



What factors should be considered? Are there reasons to ask for users' personal data at the beginning instead?


Should placement be different if the users' personal data input is optional?



Answer



I have read up on this a bit, and it seems that my answer will contradict some of the things that have already been mentioned. My sources are all academic, and as such reflect the use of on-line surveys for conducting experiments. Feel free to read the sources that I link to, and draw your own conclusions. I mention some peripheral work as it relates to on-line surveys in general, and then focus on personal information and incentives.


The first thing worth noting is that people respond differently to on-line surveys than they do to off-line/paper-based surveys and interviews. This is important to keep in mind when you start to read up on other survey research. For a detailed list of quality criteria that should be taken into account when designing on-line surveys (including when to ask for personal information), have a look at the paper by Andrews et al. titled "Electronic Survey Methodology: A Case Study in Reaching Hard-to-Involve Internet Users".


You mentioned in a comment that you want to use the personal information in a lottery for prizes. This is important, and I'll factor that into my answer (asking for personal information usually goes hand-in-hand with offering incentives).


I assume you are sincerely interested in the honest feedback of the respondents (otherwise, why ask them questions in the first place?). I also assume that you are more interested in "quality" responses than in "quantity". The respondents may not be as serious or trustworthy as you believe: they may start the survey just to "see what it is about", then drop out after a few questions without contributing in a meaningful way. For a detailed classification of participants' response (and non-response) patterns (the different "types" of respondents), see the 2001 paper "Classifying Response Behaviors in Web-based Surveys" by Bosnjak et al.


The opposite is also true. Honest people may not believe that you (or your company) have trustworthy intentions ("will the lottery really take place?"), and may therefore refuse to take part in the survey or to identify themselves. See the 2001 paper by O'Neil, "Analysis of Internet Users’ Level of Online Privacy Concerns", for a general overview of how, starting in the late '90s, people grew concerned about how their information would be used on-line.


So, the challenge is to find people whose responses you can trust, and who trust you. You want to keep the drop-out rate under control and give respondents an incentive to complete the survey honestly.


The 2002 paper by Reips "Standards for Internet-based Experimenting" mentions three simple techniques available to reduce drop-out (assuming a one-item-per-page survey design, as opposed to a single-page survey):




In the high-hurdle technique, motivationally adverse factors are announced or concentrated as close to the beginning of the Web experiment as possible. On the following pages, the concentration and impact of these factors should be reduced continuously. As a result, the highest likelihood for dropout resulting from these factors will be at the beginning of the Web experiment.



Summary: Asking for personal information at the start may filter out people who are not serious about completing the survey (assuming they value their personal information and do not trust what you will do with it - i.e. they are not motivated to provide it).



A second precaution that can be taken to reduce dropout is asking for the degree of seriousness of a participant's involvement or for a probability estimate that one will complete the whole experiment.



Summary: You can gauge a participant's seriousness by challenging them on it, i.e. by asking directly.



The warm-up technique is based on the observation that most dropout will take place at the beginning of an online study, forming a "natural dropout curve"... A main reason for the initial dropout is the short orientation period many participants show before making a final decision on their participation... To keep dropout low during the experimental phase, as defined by the occurrence of the experimental manipulation, it is wise to place its beginning several Web pages deep into the study. The warm-up period can be used for practice trials, piloting of similar materials or buildup of behavioral routines, and assurance that participants are complying with instructions.




Summary: Ask the important non-demographic questions later on, not at the start.
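To make these three techniques concrete for your 10-step survey, here is a minimal sketch of one possible page order (plain Python, purely illustrative; the page types and question wording are my own assumptions, not taken from Reips):

    # One-item-per-page order applying the three drop-out reduction techniques.
    pages = [
        # High-hurdle: the "costly" request (personal data) sits on page 1,
        # so respondents who are not serious drop out before the real items.
        {"page": 1, "asks": "personal data", "fields": ["name", "age", "contact"]},

        # Seriousness check: ask directly how seriously they intend to take part.
        {"page": 2, "asks": "seriousness",
         "question": "How likely are you to complete all questions honestly?"},

        # Warm-up: easy, low-stakes yes/no items absorb the early "orientation"
        # drop-out before anything important is asked.
        {"page": 3, "asks": "warm-up", "question": "Warm-up yes/no item 1"},
        {"page": 4, "asks": "warm-up", "question": "Warm-up yes/no item 2"},
    ]

    # The questions you actually care about start several pages deep.
    pages += [{"page": n, "asks": "core", "question": f"Core yes/no item {n - 4}"}
              for n in range(5, 11)]

    for p in pages:
        print(p["page"], p["asks"])

The exact split between warm-up and core items is yours to choose; the point is simply that the hurdle comes first and the important questions come later.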


How does this tie into your question about asking for personal information? There is some interesting (perhaps counter-intuitive) research on where to ask for it. The generally accepted place to ask for personal information is at the start of the survey. People who refuse to provide it drop out; this is the high-hurdle technique in action. Frick et al. describe in their 2001 article "Financial incentives, personal information and drop-out rate in online studies" that people who provide personal information at the start of the survey are more inclined to complete it (they can be considered "more serious"), whereas those who are only (suddenly?) asked to do so at the end tend to drop out.


Drop-outs do have a negative side-effect: they can skew the demographic makeup of the sample of participants. This negative effect of drop-out caused by asking for personal information was confirmed by O'Neil et al. in the 2003 paper "Web-based research: Methodological variables' effects on dropout and sample characteristics".


An often-cited article, "A meta-analysis of response rates in Web- or Internet-based surveys" by Cook et al., goes into much more detail on the pitfalls of sampling.


When it comes to incentives, a very technical 2006 article by Goritz titled "Incentives in Web Studies: Methodological Issues and a Review" examines whether and when incentives are effective in on-line studies (including a look at different kinds of incentive). It covers a lot of ground, but she confirms:



"... material incentives increase the odds of a person responding by 19% over the odds without incentives"



As an aside, the 2004 paper "Human Research and Data Collection via the Internet" by Birnbaum makes some points about response bias and survey usability that may interest you (for example, which UI controls can bias participant responses). I would encourage looking for additional sources on survey usability, though, as that was not the exclusive focus of the paper.



For a detailed treatment of designing questionnaires, the book "Questionnaire Design, Interviewing and Attitude Measurement" by Oppenheim is often cited, but there are many others available.

