The generally accepted answer to the question on Left-handed persons and usability is that rather than asking users which hand they use and determining the layout from that, you should let them choose their layout directly (because layout preference may be independent of dominant hand).
In the context of a touch-screen application, I wonder whether it's still true that you shouldn't give the user an option to set their preferred hand. Given that layout shouldn't be based on hand preference, are there other benefits to knowing a user's preferred hand?
For example, is there any evidence to suggest that knowing your user's dominant hand can help you to interpret certain gestures (such as swipes and pinches) more accurately?
Answer
I'd say the "handedness" of a user provides only limited information. Many other factors affect the way a user interacts with the touch screen of a handheld device: you could be lying on your side, or you might have put your smartphone down on a table. While a gesture (say, a sideways swipe) might have a different curvature when performed with either hand, it will also differ depending on the user's position relative to the device.
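To make that concrete, here is a minimal sketch (all names and numbers are hypothetical, not any platform's API) of what the "curvature" of a swipe could mean: the signed deviation of the touch path from the straight line between its endpoints. The sign says which way the path bows, which is one signal that can differ between thumbs, but as argued above it also shifts with grip and posture.

```python
def swipe_bow(points):
    """points: list of (x, y) touch samples.
    Returns the signed mean deviation of the path from the straight
    line (chord) between the first and last sample."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = (dx * dx + dy * dy) ** 0.5
    if length == 0:
        return 0.0
    # Signed perpendicular distance of each sample from the chord.
    deviations = [((px - x0) * dy - (py - y0) * dx) / length
                  for px, py in points]
    return sum(deviations) / len(deviations)

# A right-thumb sideways swipe often bows one way and a left-thumb swipe
# the other, but the two distributions overlap heavily in practice.
print(swipe_bow([(0, 0), (50, 6), (100, 8), (150, 5), (200, 0)]))
```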
I would also expect the device to react the same way to both my hands. A right-handed two-finger pinch will probably touch top right and bottom left, while a left-handed one will probably touch top left and bottom right. I would not expect the device to care. And nothing stops me from doing a left-handed pinch touching top right and bottom left, if that's natural from where I'm sitting.
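For illustration, a sketch (again hypothetical, assuming plain (x, y) screen coordinates with y growing downward) of computing which diagonal a pinch lies on. It shows how easy the geometry is, and also why it's a poor handedness signal: nothing in it distinguishes a left hand reaching across from a right hand in its natural posture.

```python
import math

def pinch_orientation(p1, p2):
    """p1, p2: (x, y) touch points in screen coordinates (y grows downward).
    Returns which diagonal the line between the two touches lies on."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    # Flip y so "up" is positive, then fold the angle into [0, 180).
    angle = math.degrees(math.atan2(-dy, dx)) % 180
    # Below 90 degrees the line rises to the right (top-right / bottom-left).
    return "top-right / bottom-left" if angle < 90 else "top-left / bottom-right"

print(pinch_orientation((300, 100), (100, 400)))  # top-right / bottom-left
```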
The goal of feeding extra information into gesture detection would be to widen it: to catch a gesture you would otherwise miss if you didn't know that the user is left-handed, or that the device is lying on a table. My guess, without research to back it up, is that the gain here is actually quite small. Gesture detection is already quite forgiving about things like a curved swipe still counting as a swipe. Devices usually only ignore a gesture when it is too close to a different gesture (were those 3 or 4 fingers? was that a tap?).
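As a sketch of what "widening" could look like in code (the recognizer and its thresholds are hypothetical, not taken from any real platform): a simple horizontal-swipe detector that already tolerates a fair amount of vertical drift. A handedness or posture hint would only let you nudge the tolerance parameter a little.

```python
def is_horizontal_swipe(points, drift_tolerance=40.0, min_travel=80.0):
    """points: list of (x, y) touch samples.
    Accepts a roughly horizontal swipe that may curve, as long as it
    never drifts too far vertically from where it started."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    if abs(x1 - x0) < min_travel:
        return False  # not enough horizontal travel to count as a swipe
    max_drift = max(abs(py - y0) for _, py in points)
    return max_drift <= drift_tolerance

# A left-handed profile might raise drift_tolerance to, say, 50 -- a
# marginal change compared to the slack the recognizer already has.
print(is_horizontal_swipe([(0, 0), (60, 10), (120, 25), (200, 12)]))  # True
```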
Looking at this from a different angle, I would also expect the device to act machine-like with regard to my input: to have clear boundaries for when it detects one gesture or another, and for when a gesture fails. When a system becomes fuzzier about interpreting my input, for instance because it thinks I might be using my off hand, it also becomes less predictable and more open to interpretation errors. When an interface you use a lot is strict and predictable, you can learn what kind of input it understands and in which situations it will fail.
So, that's kind of a long way to say that no, I don't have evidence to support your hypothesis and I doubt it exists, but I could be wrong :)