Privacy-Preserving Smart-Room Analytics
While many of our personal items are already “smart” (e.g., smartphones, smart watches), extending this to the infrastructure around us is more challenging. One effort in this direction is research on smart spaces – environments that allow intelligent interaction with their occupants, be it a living room or a conference room. Among the promised benefits of future smart rooms are improved energy efficiency, health benefits and increased productivity. For instance, localization of human subjects may enable direct illumination of target areas, saving energy in unoccupied areas. Recognizing the types of activities may allow task-optimized lighting, e.g., reduced screen glare when working on a laptop. As for productivity, localization of occupants may help maximize throughput in visible light communication (VLC) between fixed transceivers (ceiling, walls) and mobile devices (smartphones, tablets, laptops), also known as LiFi. Finally, hand gestures can be used to control various room conditions (e.g., temperature, light). Realizing these benefits requires, among other things, reliable detection, localization and recognition of human actions and activities.
Although extensive research has been performed to date on localization and recognition of human actions, most of it relies on video cameras. However, with increasing concern about privacy, standard video cameras seem unsuitable for smart spaces of the future. Privacy concerns can be partially addressed by significantly reducing the camera resolution; this, however, degrades recognition accuracy. While the loss can be mitigated by using multiple sensors, it is unclear to what extent, thus motivating a study of the tradeoffs between camera resolution and action recognition accuracy. A better understanding of these tradeoffs would help in the development of privacy-preserving smart rooms, facilitating intelligent interaction with their occupants while mitigating privacy concerns.
We have been working on several projects addressing privacy issues in smart spaces of the future:
- Privacy-Preserving Action Recognition: recognition of an occupant’s actions at very low spatial and temporal resolutions.
- Privacy-Preserving Occupant Localization: localization of occupants using single-pixel RGB sensors.
- Privacy-Preserving Action Recognition using Deep ConvNets: action recognition from extremely low resolution video using deep learning.
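The core idea behind these projects — degrading spatial resolution until identity is obscured while coarse motion cues survive — can be illustrated with a minimal sketch. The block below (an assumption for illustration, not the projects’ actual pipeline) average-pools a grayscale frame down to roughly 16×12 “pixels”, the kind of extreme resolution at which faces are unrecognizable but gross body motion may still be detectable:

```python
import numpy as np

def downsample(frame: np.ndarray, block: int) -> np.ndarray:
    """Average-pool an H x W grayscale frame by `block` in each dimension.

    Hypothetical helper: models extreme spatial downsampling for privacy,
    not the method used in the projects above.
    """
    h, w = frame.shape
    h2, w2 = h - h % block, w - w % block   # crop to a multiple of block
    cropped = frame[:h2, :w2]
    # Group pixels into block x block tiles and average each tile.
    return cropped.reshape(h2 // block, block, w2 // block, block).mean(axis=(1, 3))

# Simulate a 480x640 grayscale frame and reduce it to a 12x16 mosaic.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640)).astype(float)
tiny = downsample(frame, 40)
print(tiny.shape)  # (12, 16)
```

A recognition model would then be trained directly on such low-resolution sequences, so that full-resolution imagery never needs to be captured or stored.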