The detection of changes in video is a fundamental task with a long history in computer vision research. A key methodology is background subtraction, which poses two challenges: (1) robust modeling of a template background under varied acquisition conditions (changing illumination, rain, snow, etc.), and (2) reliable inference from the template background and the current video frames. Background-subtraction methods fall into two categories: model-based and data-driven. In model-based approaches, the template background is computed either by deterministic operators, such as a temporal mean or median, or by statistical modeling, such as a mixture of Gaussians or kernel density estimation (KDE). This is followed by inference, which may range from simple thresholding to a binary hypothesis test with probabilistic priors. Recently, data-driven background-subtraction methods leveraging deep learning have outperformed model-based methods. Our BSUV-Net family of approaches is currently the top-performing supervised change-detection methodology for unseen videos, as evaluated on the CDNet-2014 dataset at changedetection.net. Some of our recent projects on background subtraction are described below.
- Foreground-Adaptive Background Subtraction: Explicit Markov random field (MRF) modeling of changed areas within a binary hypothesis-test criterion.
- Background Subtraction with FDR Control: Adaptation of a false discovery rate (FDR) control algorithm to background subtraction.
- BSUV-Net (Background Subtraction for Unseen Videos): CNN-based change detection at multiple time scales using scene semantics.
- BSUV-Net 2.0: BSUV-Net with novel temporal data augmentations and real-time performance.
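To make the model-based pipeline above concrete, here is a minimal sketch of its simplest instantiation: a template background estimated as the per-pixel temporal median of a stack of frames, followed by inference via thresholding of the absolute difference. This is an illustrative toy example (the function names, the synthetic data, and the threshold value are our own choices, not part of any of the projects listed above), and it assumes a static camera and grayscale frames.

```python
import numpy as np

def median_background(frames):
    """Estimate the template background as the per-pixel temporal median
    of a (num_frames, height, width) stack."""
    return np.median(frames, axis=0)

def change_mask(frame, background, threshold=25.0):
    """Simplest possible inference: flag a pixel as changed when its
    absolute deviation from the template background exceeds a threshold."""
    diff = np.abs(frame.astype(np.float64) - background)
    return (diff > threshold).astype(np.uint8)

# Synthetic example: 10 noisy "static" frames, then one frame with a
# bright injected object standing in for a foreground change.
rng = np.random.default_rng(0)
frames = rng.integers(40, 60, size=(10, 64, 64)).astype(np.float64)
bg = median_background(frames)

test_frame = frames[0].copy()
test_frame[20:30, 20:30] = 255.0  # inject a 10x10 foreground object
mask = change_mask(test_frame, bg)
print(int(mask.sum()))  # → 100, the area of the injected object
```

Replacing the median with a mixture of Gaussians or a KDE per pixel, and the threshold with a binary hypothesis test, yields the more robust statistical variants mentioned above; the structure of the pipeline (background model, then inference) stays the same.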