Visual Sensor Networks (VSNs)
There are over 70 million surveillance cameras in the US today (about 4.6 persons per camera). Primarily installed outdoors, such camera networks capture visual data with excellent spatio-temporal resolution and long range, thus holding great potential for pervasive wide-area monitoring, with applications in security, environmental monitoring, traffic analysis, etc. However, processing this rich source of information is unsustainable due to the sheer amount of captured data; autonomous systems are needed to ensure scalability. The projects below address some of the challenges in developing such systems:
- Background Subtraction: Extraction of areas of interest, from the standpoint of scene dynamics, in a camera's field of view.
- Behavior Subtraction: Extension of the concept of background subtraction to stationary dynamics.
- Action Recognition: Classification of dynamics occurring in a camera's field of view.
- Coastal Video Surveillance: Detection, classification and summarization of salient events in coastal environments.
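To make the first of these projects concrete, below is a minimal sketch of classical running-average background subtraction, the simplest instance of the idea: maintain a slowly adapting background model and flag pixels that deviate from it. The function names, threshold, and learning rate are illustrative choices, not the group's published algorithms, which are considerably more sophisticated.

```python
import numpy as np

def update_background(frame, background, alpha=0.05):
    """Running-average model: blend the new frame into the background estimate."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(frame, background, threshold=25):
    """Pixels deviating from the background by more than `threshold` are foreground."""
    return np.abs(frame - background) > threshold

# Toy example: a static 4x4 "scene" in which one bright object appears
background = np.full((4, 4), 50.0)
frame = background.copy()
frame[1, 2] = 200.0  # the object

mask = foreground_mask(frame, background)      # exactly one pixel flagged
background = update_background(frame, background)  # model slowly absorbs the change
```

In practice, per-pixel variances (e.g., mixtures of Gaussians) replace the single global threshold so the model can handle dynamic backgrounds such as foliage or water.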
While there is typically no expectation of privacy outdoors, surveillance cameras are frequently installed indoors for access monitoring, space management, or helping first responders in emergencies. However, in some indoor scenarios occupant privacy is expected (e.g., bathrooms, changing rooms, certain offices and meeting rooms where sensitive information might be shown), so cameras need to be complemented with other sensing modalities. We are currently engaged in a project funded by the Advanced Research Projects Agency–Energy (ARPA-E) to develop a dual-modality, people-counting system for commercial buildings:
- Computational Occupancy Sensing SYstem (COSSY): Localization of occupants in a commercial venue with the goal of adjusting HVAC operation and saving energy.
In scenarios requiring strict occupant privacy, standard surveillance cameras cannot be used directly. In the last few years, we have developed networks of extremely low-resolution (even single-pixel) light sensors that do not collect clearly discernible visual traits. Along with the hardware, we have also developed novel algorithms for indoor localization, tracking and activity recognition using data collected by such networks:
- Privacy-Preserving Smart-Room Analytics: Detection, localization and recognition of human activities, actions, gestures, etc. from very low resolution data to preserve user privacy.
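As a rough illustration of how single-pixel sensors can localize an occupant without capturing discernible visual traits, the sketch below estimates position as the change-weighted centroid of sensor locations: an occupant perturbs the light reaching nearby sensors most. The sensor layout, baseline readings, and weighting scheme are hypothetical simplifications, not the group's actual algorithms.

```python
import numpy as np

# Hypothetical layout: four single-pixel light sensors at known ceiling positions (meters)
sensor_positions = np.array([[0.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 4.0]])

def localize(baseline, readings, positions):
    """Estimate occupant position as the change-weighted centroid of sensor positions.

    An occupant blocking or reflecting light perturbs nearby sensors most, so
    weighting each sensor's position by |reading - baseline| gives a coarse location.
    """
    weights = np.abs(readings - baseline)
    if weights.sum() == 0:
        return None  # no change detected: room presumed empty
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()

baseline = np.array([100.0, 100.0, 100.0, 100.0])
readings = np.array([100.0, 100.0, 70.0, 100.0])  # occupant shadows the sensor at (4, 0)
estimate = localize(baseline, readings, sensor_positions)  # ≈ (4, 0)
```

Note that each individual reading is a single scalar light level, so no image of the occupant ever exists, which is the source of the privacy guarantee.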
Accomplishments: The covariance-based framework for action recognition developed by Ph.D. candidate Kai Guo and Profs. Prakash Ishwar and Janusz Konrad has garnered two international awards. One of its variants, based on silhouette tunnels, was the winner of the “Aerial View Activity Classification Challenge” within the “Semantic Description of Human Activity” contest at the 2010 International Conference on Pattern Recognition (read more), while another, based on optical flow, received the best paper award at the 7th IEEE International Conference on Advanced Video and Signal-Based Surveillance (read more).
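The core idea of the covariance framework can be sketched as follows: summarize a video clip by the covariance matrix of its per-frame feature vectors, then compare clips by a distance between those matrices. The toy features, Frobenius distance, and clip data below are illustrative stand-ins; the published method uses richer features (silhouette-tunnel or optical-flow statistics) and a Riemannian metric on the covariance manifold.

```python
import numpy as np

def covariance_descriptor(feature_matrix):
    """Action descriptor: covariance of per-frame feature vectors.

    feature_matrix: (num_frames, num_features). Returns a symmetric
    (num_features, num_features) matrix summarizing the clip's dynamics.
    """
    return np.cov(feature_matrix, rowvar=False)

def descriptor_distance(c1, c2):
    """Frobenius distance between descriptors (a simplification of the
    Riemannian metric used in the published work)."""
    return np.linalg.norm(c1 - c2, ord="fro")

rng = np.random.default_rng(0)
walk = rng.normal(size=(30, 5))                          # toy "walking" clip features
walk_again = walk + rng.normal(scale=0.1, size=(30, 5))  # similar clip, small perturbation
jump = rng.normal(scale=3.0, size=(30, 5))               # clip with very different dynamics

c_walk, c_walk2, c_jump = map(covariance_descriptor, (walk, walk_again, jump))
# the descriptor of the similar clip is much closer than that of the dissimilar one
same_action = descriptor_distance(c_walk, c_walk2) < descriptor_distance(c_walk, c_jump)
```

The appeal of the descriptor is its fixed size: clips of any length map to the same small matrix, making nearest-neighbor classification straightforward.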
Prof. Konrad, along with Prof. Saligrama and Prof. Jodoin of the University of Sherbrooke, Canada, was featured in the BU Engineering article “ECE Researchers Devise Improved Video Surveillance Method” discussing their work on video anomaly detection published in the September 2010 issue of the IEEE Signal Processing Magazine.