I am developing a multitouch UI that takes input from a depth camera (the Microsoft Kinect). In other words, there is no physical "screen" for the user to touch. In addition to the default multitouch gestures, I will implement custom user-defined gestures. For these, I have the option of listening from application start or creating a listener only after the user hovers over a user-defined screen area. Which do you recommend and why?
Answer
I'm not sure whether you are asking about defining/recording custom gestures or about enabling them after they've been defined, so I'll cover both.
Recording custom gestures
The user needs clear instructions & notifications about how to start/end recording ("do this to start and that to end recording") and when it's taking place ("please perform your custom gesture now"). Users will be confused about the process if you start recording automatically without any warnings or instructions.
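One way to guarantee those notifications actually happen is to drive the recording flow through explicit states, where every transition surfaces a user-visible message. This is only a minimal sketch under assumed names (`GestureRecorder`, a `notify` display callback, normalized 2D sample points), not a Kinect API:

```python
from enum import Enum, auto

class RecordingState(Enum):
    IDLE = auto()
    PROMPTING = auto()   # "recording is about to start"
    RECORDING = auto()   # "please perform your custom gesture now"
    CONFIRMING = auto()  # show the captured gesture back to the user

class GestureRecorder:
    """Sketch: every state change emits a user-visible message,
    so recording never starts or ends without warning."""
    def __init__(self, notify):
        self.state = RecordingState.IDLE
        self.notify = notify  # callback that displays text on screen
        self.samples = []

    def start(self):
        self.state = RecordingState.PROMPTING
        self.notify("Hold still; recording starts in 3 seconds.")

    def begin_capture(self):
        self.state = RecordingState.RECORDING
        self.samples = []
        self.notify("Please perform your custom gesture now.")

    def add_sample(self, x, y):
        # Samples outside the RECORDING state are ignored on purpose.
        if self.state is RecordingState.RECORDING:
            self.samples.append((x, y))

    def finish(self):
        self.state = RecordingState.CONFIRMING
        self.notify("Recording finished. Is this the gesture you intended?")
        return list(self.samples)
```

The point of the state machine is that the code cannot reach `RECORDING` without having prompted the user first.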
In addition, it wouldn't hurt to show the recorded gesture on screen, either in real time or as a confirmation afterwards. Just make sure the direction of the movements is the mirror image of what the camera sees (i.e. the way the user pictures the movement in their own mind).
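Because the depth camera faces the user, the raw horizontal coordinates are flipped relative to the user's own view of their movement, so playback needs a horizontal mirror. A minimal sketch, assuming gesture points with x coordinates normalized to [0, 1]:

```python
def mirror_for_display(points, frame_width=1.0):
    """Flip gesture points horizontally so on-screen playback matches
    the user's mirror-image perception of their own movement.
    Assumes x is normalized to [0, frame_width]; y is unchanged."""
    return [(frame_width - x, y) for (x, y) in points]

# A swipe the camera sees moving left-to-right...
camera_points = [(0.25, 0.5), (0.5, 0.5), (0.75, 0.5)]
# ...plays back right-to-left, the way the user experienced it.
display_points = mirror_for_display(camera_points)
```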
Enabling custom gestures
Any custom controls and commands defined by the user should be available from application start. Users who customize an interface (keyboard shortcuts and mouse gestures included) expect their customizations to be available at every applicable point of their interaction with the software. A dedicated enabling step, such as hovering over a screen area first, just adds an unnecessary action without adding value or resolving any ambiguity for the user.