Visual Sensor Networks (VSNs)
Visual sensor networks, also known as camera webs, are becoming ubiquitous: over 30 million surveillance units are deployed in the US today, producing 4 billion hours of footage per week (Popular Mechanics, Jan. 2007). Equipped primarily with visible-light cameras, such networks capture visual data with excellent spatio-temporal resolution, long range, wide field of view, and low latency, and therefore hold great potential for pervasive wide-area monitoring, with applications in surveillance/security, environmental monitoring, urban and highway traffic analysis, etc. However, processing this rich source of data is becoming unsustainable due to the sheer volume captured. For camera webs to be used effectively, autonomous systems (with no human operator in the loop) must be developed that can handle highly cluttered indoor and outdoor environments.
The goal of this research thrust is to develop statistical models and methods for visual data analysis in a networked setting. Of particular interest is the development of techniques for abnormal-behavior detection (e.g., a traffic accident, a stalled vehicle, an abandoned suitcase), event classification, event tracking, etc. Some ongoing projects are listed below.
- Background Subtraction: Extraction of areas of interest, from the standpoint of scene dynamics, in a camera’s field of view.
- Behavior Subtraction: Extension of the concept of background subtraction to stationary dynamics.
- Action Recognition: Classification of dynamics occurring in a camera’s field of view.
- Coastal Video Surveillance: Detection, classification and summarization of salient events in coastal environments.
- Privacy-Preserving Smart-Room Analytics: Detection, localization and recognition of human activities, actions, gestures, etc. from very low resolution data to preserve user privacy.
- Computational Occupancy Sensing SYstem (COSSY): Estimation of the number of occupants in a commercial venue with the goal of adjusting HVAC operation and saving energy.
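To make the first project concrete: the core idea of background subtraction is to maintain a per-pixel model of the static scene and flag pixels that deviate from it. The sketch below is a minimal, generic illustration using a running-average background model in NumPy; it is an assumption for exposition only, not the specific algorithm developed in this project. The function name, parameters, and thresholding scheme are hypothetical.

```python
import numpy as np

def background_subtraction(frames, alpha=0.05, threshold=30):
    """Minimal running-average background subtraction (illustrative sketch).

    frames    : iterable of 2-D uint8 grayscale frames.
    alpha     : learning rate controlling how fast the background adapts.
    threshold : intensity difference above which a pixel is foreground.
    Returns a list of boolean foreground masks, one per frame.
    """
    background = None
    masks = []
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()       # initialize model from the first frame
        diff = np.abs(f - background)
        mask = diff > threshold         # pixels deviating from the model
        # slowly blend the current frame into the background model
        background = (1 - alpha) * background + alpha * f
        masks.append(mask)
    return masks
```

In practice, per-pixel statistical models (e.g., mixtures of Gaussians) and morphological post-processing replace the simple threshold used here, since a single running average cannot absorb dynamic backgrounds such as foliage or water.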
The covariance-based framework for action recognition developed by Ph.D. candidate Kai Guo and Profs. Prakash Ishwar and Janusz Konrad has garnered two international awards. One of its variants, based on silhouette tunnels, won the “Aerial View Activity Classification Challenge” within the “Semantic Description of Human Activity” contest at the 2010 International Conference on Pattern Recognition (read more), while another, based on optical flow, received the best paper award at the 7th IEEE International Conference on Advanced Video and Signal-Based Surveillance (read more).
Prof. Konrad, along with Prof. Saligrama and Prof. Jodoin of the University of Sherbrooke, Canada, was recently featured in a BU Engineering article, “ECE Researchers Devise Improved Video Surveillance Method,” discussing their work on video anomaly detection published in the September 2010 issue of the IEEE Signal Processing Magazine.