Wednesday, April 25, 2018

user expectation - Will forcing response time to the average time as a minimum improve UX?


I have a situation where my Ajax call response times can vary from 14 ms to 600 ms, depending on the complexity of the query the user makes.


In my experience, users take those 20-80 ms round trips for granted and expect the same speed all the time, everywhere; in my particular project that is simply impossible.


So I came up with the idea of forcing users to always wait at least some fixed amount of time before getting results.


My question is: should I base that minimum on the average of the response round-trip times I have already gathered, or should I just use a commonly accepted upper limit for response time to a user's action?



From what I remember, that limit was somewhere between 100 and 200 ms, but I can't recall the exact figure.
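To make the idea concrete, here is a minimal sketch of the padding approach being asked about: hold every response back to a minimum duration (whether that minimum comes from measured round-trip data or a fixed budget). The names `MIN_WAIT_MS`, `delay`, and `runQuery` are hypothetical, and the answer below argues against doing this at all.

```typescript
// Hypothetical minimum wait; could be derived from measured round-trip data.
const MIN_WAIT_MS = 200;

function delay(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function runQuery(url: string): Promise<unknown> {
  // Promise.all resolves only when both the request and the minimum delay
  // have finished, so fast queries are held back to MIN_WAIT_MS while
  // slower ones are unaffected.
  const [response] = await Promise.all([fetch(url), delay(MIN_WAIT_MS)]);
  return response.json();
}
```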



Answer



You should not artificially delay how long a user must wait. Do not punish a rapid response by slowing it down to an "average". Let all queries complete naturally; for longer query times you may want to consider the following...


Jakob Nielsen did some research on wait times back in 1993. From "Response Times: The 3 Important Limits":



(1) 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.


(2) 1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.


(3) 10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done.



These numbers suggest that your expected average wait time (100-200 ms) would generally not require any additional feedback for the user to perceive it as "instantaneous". Even with wait times up to 1.0 second no special feedback was normally required, though the user may "lose some feeling of operating directly on the data."



It is important to note that the study, and the above times, are not associated with interactions on the web! These numbers represent raw wait times for user engagement. Don't fall into the "this data is so old, the web was so young" trap! The above isn't "on the web", it is basic "how long will a user wait until they become disengaged" on a task.


What does that mean? At worst it means you should take the numbers as they are (web users expect faster and faster responses these days). At best it means you have a little extra wiggle room, because users are slightly more forgiving on the web.


In fact, Nielsen kept getting questions along the lines of "what about on the web?" and updated the answer slightly in 2014:



0.1 second: Limit for users feeling that they are directly manipulating objects in the UI. For example, this is the limit from the time the user selects a column in a table until that column should highlight or otherwise give feedback that it's selected. Ideally, this would also be the response time for sorting the column — if so, users would feel that they are sorting the table. (As opposed to feeling that they are ordering the computer to do the sorting for them.)


1 second: Limit for users feeling that they are freely navigating the command space without having to unduly wait for the computer. A delay of 0.2–1.0 seconds does mean that users notice the delay and thus feel the computer is "working" on the command, as opposed to having the command be a direct effect of the users' actions. Example: If sorting a table according to the selected column can't be done in 0.1 seconds, it certainly has to be done in 1 second, or users will feel that the UI is sluggish and will lose the sense of "flow" in performing their task. For delays of more than 1 second, indicate to the user that the computer is working on the problem, for example by changing the shape of the cursor.


10 seconds: Limit for users keeping their attention on the task. Anything slower than 10 seconds needs a percent-done indicator as well as a clearly signposted way for the user to interrupt the operation. Assume that users will need to reorient themselves when they return to the UI after a delay of more than 10 seconds. Delays of longer than 10 seconds are only acceptable during natural breaks in the user's work, for example when switching tasks.



Still, with your expected averages, a user will either feel that they are "directly manipulating" the data, or will notice the delay but not necessarily need to be informed, since they "feel the computer is 'working' on the command." Only if your queries take over (or regularly close to) 1.0 second would a notification be necessary.
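A minimal sketch of that feedback approach, assuming a browser fetch: let the query run at its natural speed and only surface a "working" indicator if it crosses the roughly one-second threshold from the guidance above. `FEEDBACK_THRESHOLD_MS`, `showSpinner`, and `hideSpinner` are hypothetical names, not part of any particular library.

```typescript
// Only show feedback if the response crosses the ~1 second limit.
const FEEDBACK_THRESHOLD_MS = 1000;

function showSpinner(): void { /* e.g. toggle a CSS class on a loading element */ }
function hideSpinner(): void { /* e.g. toggle the same CSS class back off */ }

async function runQueryWithFeedback(url: string): Promise<unknown> {
  // Arm the indicator, but it only fires if the response is slow.
  const timer = setTimeout(showSpinner, FEEDBACK_THRESHOLD_MS);
  try {
    const response = await fetch(url);
    return await response.json();
  } finally {
    clearTimeout(timer); // fast responses never show the spinner
    hideSpinner();
  }
}
```

Delaying the indicator this way also avoids a spinner that flashes for a few milliseconds on fast queries, which would itself make the UI feel less "instantaneous".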


But that might not be the whole story. A NYTimes article, "For Impatient Web Users, an Eye Blink Is Just Too Long to Wait," discussed work by Google and Microsoft on how long users are willing to wait for a page when comparable services are available.



From the article:



People will visit a Web site less often if it is slower than a close competitor by more than 250 milliseconds (a millisecond is a thousandth of a second).





This doesn't tell you whether to include a notification, but it does show how important response times are to users. If you artificially inflate wait times, you may well be pushing users to a competitor!

