The segmentation of moving objects in video can be formulated as a background subtraction problem: the detection of change in each image frame. The background scene is learned and modeled, and a pixelwise process determines whether each current pixel is similar to the background model. Change detection in video is challenging because the background is non-stationary, exhibiting illumination changes, background motion, and similar disturbances. We propose new features for background modeling: perception-based local ternary patterns generated both within a single color channel and across different color channels. Features computed from these local patterns are stored in the background model as samples, and the model is updated whenever the current pixel is classified as background. Finally, we propose a probabilistic refinement that improves each change region by taking into account the spatial consistency of image features. We compare our method with various background subtraction algorithms on several video datasets, where it can achieve 13% better performance than the other methods.
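The pipeline described above can be illustrated with a minimal sketch. The exact pattern definition and model parameters are not given here, so the code below makes assumptions: the ternary code uses a tolerance band proportional to the center intensity (a perception-inspired choice, since equal relative rather than absolute differences matter), and background classification uses a simple sample-consensus rule; the function names, `tau`, `max_mismatch`, and `min_matches` are all illustrative, not the paper's.

```python
import numpy as np

def local_ternary_pattern(patch, tau=0.1):
    """Ternary code of the 8 neighbors of a 3x3 patch.

    Each neighbor maps to +1, 0, or -1 depending on whether it lies
    above, within, or below a tolerance band around the center value.
    The band scales with the center intensity (assumed perception-based
    thresholding; the actual definition may differ).
    """
    center = float(patch[1, 1])
    band = tau * center                       # tolerance proportional to intensity
    neighbors = np.delete(patch.astype(float).flatten(), 4)  # drop the center
    code = np.zeros(8, dtype=int)
    code[neighbors > center + band] = 1
    code[neighbors < center - band] = -1
    return code

def codes_match(code_a, code_b, max_mismatch=2):
    """Two ternary codes agree if they differ in few enough positions."""
    return int(np.sum(code_a != code_b)) <= max_mismatch

def is_background(code, samples, min_matches=2):
    """Sample consensus: background if the current code matches at least
    min_matches of the codes stored as background-model samples."""
    hits = sum(codes_match(code, s) for s in samples)
    return hits >= min_matches
```

A flat patch yields an all-zero code, so a pixel whose appearance is stable across frames keeps matching its stored samples and stays classified as background; a foreground object perturbs enough neighbor relations to break the consensus.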