Alphanumeric passwords have been in use since the dawn of computing. We need to remember more and more of them, and they are becoming increasingly complex. Furthermore, to keep them truly secure, we need to change them quite often. Although biometrics based on fingerprints, iris scans, or facial features offer a tempting alternative, they are not renewable: once compromised, they are difficult if not impossible to replace (one could wear a finger cap with a new fingerprint, but putting one on for each authentication would be very inconvenient).
In our ongoing research, we are exploring a promising alternative that combines the convenience of biometrics with the security and renewability of alphanumeric passwords. The key idea is to use gestures as passwords for authentication: a user performs his or her favorite “move” to unlock a device, door, etc. (see the video below).
In our work thus far, we have captured gestures with multiple Kinect v1 cameras and used both silhouettes, derived from the depth field, and skeletons, provided by the Microsoft Kinect SDK, to develop gesture-based user-authentication algorithms. A gesture consists of three elements: body build, initial posture, and dynamics. The first two carry biometric information (characteristics of a user’s body shape) and thus are non-renewable, whereas the dynamics are renewable because they are under the user’s control. As with a signature, a compromised gesture can easily be changed.
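To make the dynamics element concrete: one common way to compare time-varying trajectories (such as skeleton joint positions over the course of a gesture) is dynamic time warping, which tolerates differences in speed between two performances of the same move. The sketch below is illustrative only and is not necessarily the method used in this work; the trajectories, feature layout, and threshold are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories.

    a, b: arrays of shape (T, D) -- T frames, D features per frame
    (e.g. flattened 3-D positions of tracked skeleton joints).
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # probe frame skipped
                                 cost[i, j - 1],      # template frame skipped
                                 cost[i - 1, j - 1])  # frames aligned
    return cost[n, m]

# Hypothetical usage: an enrolled template and a probe of the same
# move performed more slowly (synthetic ramp signals, 6 features).
template = np.cumsum(np.full((40, 6), 0.10), axis=0)
probe = np.cumsum(np.full((50, 6), 0.08), axis=0)

# Identical performances align perfectly (distance 0); a real system
# would accept a probe whose distance falls below a tuned threshold.
print(dtw_distance(template, template))  # → 0.0
print(dtw_distance(probe, template) > 0.0)
```

Because DTW compares only how the motion unfolds over time, it targets exactly the renewable, user-controlled component of the gesture; build and posture would be scored separately.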
The results we have obtained to date are quite promising. In a typical scenario, we have obtained an equal error rate (EER) of less than 1%: fewer than 1 in 100 authorized users are rejected, while fewer than 1 in 100 impostors are admitted. This performance deteriorates if users rarely perform the gesture (an effect of memory) or change their appearance (heavy coats, backpacks, etc.). Surprisingly, however, it seems that untrained impostors cannot easily replicate a user’s gesture, even when shown examples.
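For readers unfamiliar with the metric, the EER is the operating point at which the false-reject rate (authorized users turned away) equals the false-accept rate (impostors let in). The sketch below shows one simple way to estimate it from match scores by sweeping a decision threshold; the score distributions here are synthetic, not data from this project.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER by sweeping a threshold over all observed
    scores and taking the point where FRR and FAR are closest.
    Convention: higher score = better match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best_eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine < t)    # authorized users rejected
        far = np.mean(impostor >= t)  # impostors admitted
        if abs(frr - far) < best_gap:
            best_gap, best_eer = abs(frr - far), (frr + far) / 2
    return best_eer

# Synthetic, well-separated scores: genuine attempts score high,
# impostor attempts low, so the estimated EER is near zero.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.05, 1000)
impostor = rng.normal(0.5, 0.05, 1000)
print(equal_error_rate(genuine, impostor))
```

A single EER number summarizes the whole trade-off curve, which is why it is a convenient headline figure; a deployed system would still pick a threshold to favor either security or convenience.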
Projects on Gesture-Based Authentication
- Multiple viewpoints: More data is better!
- Silhouettes versus skeletons: Can less data be as good yet more robust?
- Gestures dissected: The value of posture, build, and dynamics