Visual Sensor Networks (VSNs)
There exist two types of stationary visual sensor networks. Camera networks employ surveillance cameras – 40 million in the US alone (statista.com, 2014). Such networks capture visual data with excellent spatio-temporal resolution, long range, wide field of view and low latency, and therefore hold great potential for pervasive wide-area monitoring with applications in security, environmental monitoring, traffic analysis, etc. However, the processing of this rich source of information is becoming unsustainable due to the sheer amount of captured data; autonomous systems are needed to ensure scalability. Moreover, camera networks may not be suitable for scenarios requiring privacy. In recent years, light-sensing networks have been proposed, in which cameras are replaced by extremely low-resolution (even single-pixel) light sensors that do not collect clearly discernible visual traits. Such networks have been proposed for indoor localization, tracking and activity recognition, but their low resolution calls for novel computational algorithms.
In this research thrust, we develop statistical models and methods for visual data analysis in a networked setting, such as indoor localization and tracking of people, activity recognition, and abnormal behavior detection. Some ongoing and recent projects are listed below.
- Background Subtraction: Extraction of areas of interest, from the standpoint of scene dynamics, in a camera’s field of view.
- Behavior Subtraction: Extension of the concept of background subtraction to stationary dynamics.
- Action Recognition: Classification of dynamics occurring in a camera’s field of view.
- Coastal Video Surveillance: Detection, classification and summarization of salient events in coastal environments.
- Privacy-Preserving Smart-Room Analytics: Detection, localization and recognition of human activities, actions, gestures, etc. from very low resolution data to preserve user privacy.
- Computational Occupancy Sensing SYstem (COSSY): Localization of occupants in a commercial venue with the goal of adjusting HVAC operation and saving energy.
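To give a flavor of the first project above, background subtraction can be sketched with a simple running-average model; this is a generic illustration, not the group's actual algorithm, and the learning rate and threshold values are assumptions:

```python
import numpy as np

def background_subtraction(frames, alpha=0.05, threshold=25.0):
    """Running-average background subtraction (illustrative sketch).

    frames: iterable of grayscale frames as 2-D arrays.
    alpha: background learning rate (assumed value).
    threshold: intensity difference marking foreground (assumed value).
    Yields a boolean foreground mask per frame.
    """
    background = None
    for frame in frames:
        frame = frame.astype(np.float64)
        if background is None:
            background = frame.copy()          # initialize model with first frame
        mask = np.abs(frame - background) > threshold   # foreground pixels
        background = (1 - alpha) * background + alpha * frame  # slowly adapt model
        yield mask
```

In practice, pixels that differ from the slowly adapting background model by more than the threshold are flagged as the areas of interest mentioned above.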
The covariance-based framework for action recognition developed by Ph.D. candidate Kai Guo and Profs. Prakash Ishwar and Janusz Konrad has received two international awards. One of its variants, based on silhouette tunnels, was the winner of the “Aerial View Activity Classification Challenge” within the “Semantic Description of Human Activity” contest at the 2010 International Conference on Pattern Recognition (read more), while another one, based on optical flow, received the best paper award at the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (read more).
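The core idea of a covariance-based representation can be sketched as follows: per-pixel features are collected and summarized by their covariance matrix, which then serves as a compact descriptor for classification. The feature choice below (coordinates, intensity, gradients) is an assumption for illustration, not the exact features of the award-winning pipeline:

```python
import numpy as np

def covariance_descriptor(frame):
    """Covariance descriptor of per-pixel features (illustrative sketch).

    Features (an assumed choice): pixel coordinates (x, y), intensity,
    and horizontal/vertical gradients. Returns a 5x5 covariance matrix
    summarizing the frame region.
    """
    frame = frame.astype(np.float64)
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]            # pixel coordinate grids
    gy, gx = np.gradient(frame)            # vertical / horizontal gradients
    features = np.stack([xs.ravel(), ys.ravel(), frame.ravel(),
                         gx.ravel(), gy.ravel()], axis=0)
    return np.cov(features)                # 5x5 symmetric PSD matrix
```

Because the descriptor has fixed size regardless of region size, such matrices can be compared across video segments, which is what makes covariance representations attractive for action recognition.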
Prof. Konrad, along with Prof. Saligrama and Prof. Jodoin of the University of Sherbrooke, Canada, was featured in a BU Engineering article, “ECE Researchers Devise Improved Video Surveillance Method,” discussing their work on video anomaly detection published in the September 2010 issue of the IEEE Signal Processing Magazine.