Transformational Models for Background Subtraction with Moving Cameras

Zamalieva, Daniya


2014, Doctor of Philosophy, Ohio State University, Computer Science and Engineering.
Background subtraction is a commonly used technique in computer vision for detecting objects. While there is an extensive literature regarding background subtraction, most of the existing methods assume that the camera is stationary. This assumption limits their applicability for moving camera scenarios. To overcome this limitation, we approach the background subtraction problem from a geometric perspective and introduce multiple background transformation models that utilize various geometric relations. Unlike previous methods, the proposed algorithms do not rely on certain camera motions or assumptions about the scene geometry. First, we introduce a 2.5D background model that describes the scene in terms of both its appearance and geometry. In this model, the 3D scene is represented as a stack of parallel hypothetical planes so that each plane is associated with a homography transform. A pixel that belongs to a background scene structure consistently maps between the consecutive frames based on the corresponding plane projective transformation with respect to the hypothetical plane it lies on. This consistency in mapping is not expected for a moving object, since its motion does not comply with the camera motion. This observation disambiguates moving objects from the background. In our second approach, instead of relying on frame-to-frame estimation, we simultaneously estimate the epipolar geometries induced by a moving camera in a temporally consistent manner for a number of frames using the temporal fundamental matrix (TFM). The TFM is robustly estimated from the tracklets generated by dense point tracking and used to compute the probability of each tracklet belonging to the background. With this formulation, we eliminate the problems of insignificant camera and object motion that subside the distinction between the background and moving objects. 
To ensure color, spatial, and temporal consistency of the tracklet labeling, we minimize a spatiotemporal labeling cost in the locality of each tracklet. Our final approach employs both the homography transform and the fundamental matrix, benefiting from both geometric relations while avoiding their drawbacks. It relies on Bayesian selection of the transformation that best describes the relation between pairs of consecutive frames. Based on the selected geometric relation, the learned background and foreground appearance models are propagated using a single homography transform or a series of them. The propagated models are then fed into a maximum a posteriori Markov random field (MAP-MRF) optimization framework that combines motion, appearance, spatial, and temporal cues; the optimization yields the final background/foreground labels. Extensive experiments on challenging videos show that the proposed methods successfully detect moving objects in complex scenes with significant camera motion, and that they are comparable to and, in most cases, outperform state-of-the-art methods.
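The selection between the two geometric relations can be illustrated with fitting residuals alone. This is a hedged sketch, not the dissertation's Bayesian formulation: it compares the median homography transfer error against the median point-to-epipolar-line distance and keeps the relation that fits the correspondences better, standing in for the posterior comparison; all matrices and points below are invented for the example.

```python
import numpy as np

def transfer_error(H, pts1, pts2):
    """Per-correspondence forward transfer error of a homography."""
    p1h = np.hstack([pts1, np.ones((len(pts1), 1))])
    mapped = p1h @ H.T
    mapped = mapped[:, :2] / mapped[:, 2:3]
    return np.linalg.norm(mapped - pts2, axis=1)

def epipolar_error(F, pts1, pts2):
    """Distance from each second-frame point to its epipolar line l' = F x."""
    p1h = np.hstack([pts1, np.ones((len(pts1), 1))])
    p2h = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = p1h @ F.T                              # one epipolar line per point
    num = np.abs(np.sum(p2h * lines, axis=1))      # |x2 . l'|
    den = np.linalg.norm(lines[:, :2], axis=1)     # line normalization
    return num / den

def select_relation(H, F, pts1, pts2):
    """Keep the geometric relation with the smaller median residual
    (a simple stand-in for the Bayesian selection in the abstract)."""
    if np.median(transfer_error(H, pts1, pts2)) <= np.median(epipolar_error(F, pts1, pts2)):
        return "homography"
    return "fundamental"

# Planar scene shifted by (5, 2): the homography fits exactly, while this
# deliberately mismatched fundamental matrix does not.
pts1 = np.array([[10., 20.], [30., 5.], [7., 40.], [50., 60.]])
pts2 = pts1 + np.array([5., 2.])
H = np.array([[1., 0., 5.], [0., 1., 2.], [0., 0., 1.]])
F = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
print(select_relation(H, F, pts1, pts2))  # -> homography
```

Choosing the homography for degenerate configurations (planar scenes, small baselines) avoids the instability of fundamental-matrix estimation in exactly those cases, which is the motivation for a per-frame-pair selection.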
James Davis (Advisor)
Alper Yilmaz (Committee Member)
172 p.

Recommended Citations

  • Zamalieva, D. (2014). Transformational Models for Background Subtraction with Moving Cameras [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408702317

    APA Style (7th edition)

  • Zamalieva, Daniya. Transformational Models for Background Subtraction with Moving Cameras. 2014. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1408702317.

    MLA Style (8th edition)

  • Zamalieva, Daniya. "Transformational Models for Background Subtraction with Moving Cameras." Doctoral dissertation, Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408702317

    Chicago Manual of Style (17th edition)