# Learning Spatially Regularized Correlation Filters for Visual Tracking

###### Abstract

Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model.

We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.

## 1 Introduction

Visual tracking is a classical computer vision problem with many applications. In generic tracking the task is to estimate the trajectory of a target in an image sequence, given only its initial location. This problem is especially challenging. The tracker must generalize the target appearance from a very limited set of training samples to achieve robustness against, e.g., occlusions, fast motion and deformations. Here, we investigate the key problem of learning a robust appearance model under these conditions.

Recently, Discriminative Correlation Filter (DCF) based approaches [5, 8, 10, 19, 20, 24] have successfully been applied to the tracking problem [23]. These methods learn a correlation filter from a set of training samples. The correlation filter is trained to perform a circular sliding window operation on the training samples. This corresponds to assuming a periodic extension of these samples (see figure 1). The periodic assumption enables efficient training and detection by utilizing the Fast Fourier Transform (FFT).
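The circular sliding-window interpretation can be checked numerically: convolving via the FFT is identical to explicitly summing over all cyclic shifts of the periodically extended patch. A minimal NumPy sketch (function names are illustrative, not from any tracker's code):

```python
import numpy as np

def circ_conv(x, f):
    """Explicit circular (periodic) convolution, O(M^2 N^2)."""
    M, N = x.shape
    out = np.zeros((M, N))
    for m in range(M):
        for n in range(N):
            # Each sample location contributes a cyclically shifted filter.
            out += x[m, n] * np.roll(f, (m, n), axis=(0, 1))
    return out

def circ_conv_fft(x, f):
    """The same operation via the FFT, O(MN log MN): the identity DCFs exploit."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(f)))
```

Both functions agree to machine precision; the periodic wrap-around in `np.roll` is exactly the boundary effect discussed next.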

As discussed above, the computational efficiency of the standard DCF originates from the periodic assumption at both training and detection. However, this underlying assumption produces unwanted boundary effects. This leads to an inaccurate representation of the image content, since the training patches contain periodic repetitions. The induced boundary effects mainly limit the standard DCF formulation in two important aspects. Firstly, inaccurate negative training patches reduce the discriminative power of the learned model. Secondly, the detection scores are only accurate near the center of the region, while the remaining scores are heavily influenced by the periodic repetitions of the detection sample. This leads to a very restricted target search region at the detection step.

The aforementioned limitations of the standard DCF formulation hamper the tracking performance in several ways. (a) The DCF based trackers struggle in cases with fast target motion due to the restricted search region. (b) The lack of negative training patches leads to over-fitting of the learned model, significantly affecting the performance in cases with, e.g., target deformations. (c) The mentioned limitations in training and detection also reduce the potential of the tracker to re-detect the target after an occlusion. (d) A naive expansion of the image area used for training the correlation filter corresponds to using a larger periodicity (see figure 1). Such an expansion results in the inclusion of a substantial amount of background information within the positive training samples. These corrupted training samples severely degrade the discriminative power of the model, leading to inferior tracking results. In this work, we tackle these inherent problems by re-visiting the standard DCF formulation.

### 1.1 Contributions

In this paper, we propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. We introduce a spatial regularization component within the DCF formulation, to address the problems induced by the periodic assumption. The proposed regularization weights penalize the correlation filter coefficients during learning. The spatial weights are based on the a priori information about the spatial extent of the filter. Due to the spatial regularization, the correlation filter can be learned on larger image regions. This enables a larger set of negative patches to be included in the training, leading to a more discriminative model.

Due to the online nature of the tracking problem, a computationally efficient learning scheme is crucial. Therefore, we introduce a suitable optimization strategy for the proposed SRDCF. The online capability is achieved by exploiting the sparsity of the spatial regularization function in the Fourier domain. We propose to apply the iterative Gauss-Seidel method to solve the resulting normal equations. Additionally, we introduce a strategy to maximize the detection scores with sub-grid precision.

We perform comprehensive experiments on four benchmark datasets: OTB-2013 [33] with 50 videos, ALOV++ [30] with 314 videos, VOT2014 [23] with 25 videos and OTB-2015 [34] with 100 videos. Compared to the best existing trackers, our approach obtains an absolute gain of 8.0% and 8.2% on OTB-2013 and OTB-2015 respectively, in mean overlap precision. Our method also achieves the best overall results on ALOV++ and VOT2014. Additionally, our tracker won the OpenCV State of the Art Vision Challenge in tracking [25] (there termed DCFSIR).

## 2 Discriminative Correlation Filters

Discriminative correlation filters (DCF) is a supervised technique for learning a linear classifier or a linear regressor. The main difference from other techniques, such as support vector machines [6], is that the DCF formulation exploits the properties of circular correlation for efficient training and detection. In recent years, DCF based approaches have been successfully applied to tracking. Bolme et al. [5] first introduced the MOSSE tracker, using only grayscale samples to train the filter. Recent works [9, 8, 10, 20, 24] have shown a notable improvement by learning multi-channel filters on multi-dimensional features, such as HOG [7] or Color-Names [31]. However, to become computationally viable, these approaches rely on harsh approximations of the standard DCF formulation, leading to sub-optimal learning. Other works have investigated offline learning of multi-channel DCFs for object detection [13, 18] and recognition [4], but these methods are too computationally costly for online tracking applications.

The circular correlation within the DCF formulation has two major advantages. Firstly, the DCF is able to make extensive use of limited training data by implicitly including all shifted versions of the given samples. Secondly, the computational effort for training and detection is significantly reduced by performing the necessary computations in the Fourier domain and using the Fast Fourier Transform (FFT). These two advantages make DCFs especially suitable for tracking, where training data is scarce and computational efficiency is crucial for real-time applications.

By employing a circular correlation, the standard DCF formulation relies on a periodic assumption of the training and detection samples. However, this assumption produces unwanted boundary effects, leading to an inaccurate description of the image. These inaccurate training patches severely hamper the learning of a discriminative tracking model. Surprisingly, this problem has been largely ignored by the tracking community. Galoogahi et al. [14] investigate the boundary effect problem for single-channel DCFs. Their approach solves a constrained optimization problem, using the Alternating Direction Method of Multipliers (ADMM), to ensure a correct filter size. This however requires a transition between the spatial and Fourier domains in each ADMM iteration, leading to increased computational complexity. Different from [14], we propose a spatial regularization component in the objective. By exploiting the sparsity of our regularizer, we efficiently optimize the filter directly in the Fourier domain. Contrary to [14], we target the problem of multi-dimensional features, such as HOG, crucial for the overall tracking performance [10, 20].

### 2.1 Standard DCF Training and Detection

In the DCF formulation, the aim is to learn a multi-channel convolution¹ filter $f$ from a set of training examples $\{(x_k, y_k)\}_{k=1}^{t}$. Each training sample $x_k$ consists of a $d$-dimensional feature map extracted from an image region. All samples are assumed to have the same spatial size $M \times N$. At each spatial location $(m, n) \in \Omega := \{0, \ldots, M-1\} \times \{0, \ldots, N-1\}$ we thus have a $d$-dimensional feature vector $x_k(m, n) \in \mathbb{R}^d$. We denote feature layer $l \in \{1, \ldots, d\}$ of $x_k$ by $x_k^l$. The desired output $y_k$ is a scalar valued function over the domain $\Omega$, which includes a label for each location in the sample $x_k$.

¹We use convolution for mathematical convenience, though correlation can equivalently be used.

The desired filter $f$ consists of one $M \times N$ convolution filter $f^l$ per feature layer. The convolution response of the filter on a sample $x$ is given by

$$S_f(x) = \sum_{l=1}^{d} x^l * f^l. \tag{1}$$

Here, $*$ denotes circular convolution. The filter is obtained by minimizing the $L^2$-error between the responses $S_f(x_k)$ on the training samples $x_k$, and the labels $y_k$,

$$\varepsilon(f) = \sum_{k=1}^{t} \alpha_k \left\| S_f(x_k) - y_k \right\|^2 + \lambda \sum_{l=1}^{d} \left\| f^l \right\|^2. \tag{2}$$

Here, the weights $\alpha_k \ge 0$ determine the impact of each training sample and $\lambda \ge 0$ is the weight of the regularization term. Eq. 2 is a linear least squares problem. Using Parseval's formula, it can be transformed to the Fourier domain, where the resulting normal equations have a block diagonal structure. The Discrete Fourier Transformed (DFT) filters $\hat{f}^l := \mathscr{F}\{f^l\}$ can then be obtained by solving $MN$ number of $d \times d$ linear equation systems [13].

For efficiency reasons, the learned DCF is typically applied in a sliding-window-like manner by evaluating the classification scores on all cyclic shifts of a test sample. Let $z$ denote the feature map extracted from an image region. The classification scores at all locations in this image region can be computed using the convolution property of the DFT,

$$S_f(z) = \mathscr{F}^{-1}\left\{ \sum_{l=1}^{d} \hat{z}^l \cdot \hat{f}^l \right\}. \tag{3}$$

Here, $\cdot$ denotes point-wise multiplication, the hat denotes the DFT of a function and $\mathscr{F}^{-1}$ denotes the inverse DFT. The FFT hence allows the detection scores to be computed in $\mathcal{O}(dMN \log MN)$ complexity instead of $\mathcal{O}(dM^2N^2)$.
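For the single-channel case ($d = 1$), the whole train-and-detect cycle of (2)-(3) has a closed form in the Fourier domain. The sketch below is a minimal MOSSE-style illustration under our assumptions, not the authors' code; `sigma` and `lam` are illustrative values:

```python
import numpy as np

def gaussian_label(M, N, sigma):
    # Desired output y: a periodically wrapped Gaussian peaked at the origin.
    m, n = np.arange(M), np.arange(N)
    m, n = np.minimum(m, M - m), np.minimum(n, N - n)
    return np.exp(-0.5 * (m[:, None] ** 2 + n[None, :] ** 2) / sigma ** 2)

def train_dcf(x, y, lam=0.01):
    # Single-channel ridge solution of (2) in the Fourier domain:
    # f_hat = conj(x_hat) * y_hat / (|x_hat|^2 + lam).
    x_hat, y_hat = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(x_hat) * y_hat / (np.abs(x_hat) ** 2 + lam)

def detect(f_hat, z):
    # Eq. (3): scores on all cyclic shifts of the test patch z via the FFT.
    return np.real(np.fft.ifft2(np.fft.fft2(z) * f_hat))
```

Training on a patch and detecting on a cyclically shifted copy moves the response peak by exactly that shift, which is how DCF trackers localize the target.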

## 3 Spatially Regularized Correlation Filters

We propose to use a spatial regularization component in the standard DCF formulation. The resulting optimization problem is solved in the Fourier domain, by exploiting the sparse nature of the proposed regularization.

### 3.1 Spatial Regularization

To alleviate the problems induced by the circular convolution in (1), we replace the regularization term in (2) with a more general Tikhonov regularization. We introduce a spatial weight function $w : \Omega \to \mathbb{R}$ used to penalize the magnitude of the filter coefficients in the learning. The regularization weights $w(m, n)$ determine the importance of the filter coefficients $f^l(m, n)$, depending on their spatial locations. Coefficients in $f^l$ residing outside the target region are suppressed by assigning higher weights in $w$ and vice versa. The resulting optimization problem is expressed as,

$$\varepsilon(f) = \sum_{k=1}^{t} \alpha_k \left\| S_f(x_k) - y_k \right\|^2 + \sum_{l=1}^{d} \left\| w \cdot f^l \right\|^2. \tag{4}$$

The regularization weights $w$ in (4) are visualized in figure 2. Visual features close to the target edge are often less reliable than those close to the target center, due to, e.g., target rotations and occlusions. We therefore let the regularization weights change smoothly from the target region to the background. This also increases the sparsity of $w$ in the Fourier domain. Note that (4) simplifies to the standard DCF (2) for uniform weights $w(m, n) = \sqrt{\lambda}$.

By applying Parseval's theorem to (4), the filter can equivalently be obtained by minimizing the resulting loss function (5) over the DFT coefficients $\hat{f}^l$,

$$\varepsilon(\hat{f}) = \sum_{k=1}^{t} \alpha_k \left\| \sum_{l=1}^{d} \hat{x}_k^l \cdot \hat{f}^l - \hat{y}_k \right\|^2 + \sum_{l=1}^{d} \left\| \frac{\hat{w}}{MN} * \hat{f}^l \right\|^2. \tag{5}$$

The second term in (5) follows from the convolution property of the inverse DFT. A vectorization of (5) gives,

$$\varepsilon(\hat{\mathbf{f}}) = \sum_{k=1}^{t} \alpha_k \left\| \sum_{l=1}^{d} \mathcal{D}(\hat{\mathbf{x}}_k^l)\, \hat{\mathbf{f}}^l - \hat{\mathbf{y}}_k \right\|^2 + \sum_{l=1}^{d} \left\| \frac{C(\hat{w})}{MN}\, \hat{\mathbf{f}}^l \right\|^2. \tag{6}$$

Here, bold letters denote a vectorization of the corresponding scalar valued functions and $\mathcal{D}(\mathbf{v})$ denotes the diagonal matrix with the elements of the vector $\mathbf{v}$ in its diagonal. The $MN \times MN$ matrix $C(\hat{w})$ represents circular 2D-convolution with the function $\hat{w}$, i.e. $C(\hat{w})\hat{\mathbf{f}}^l = \operatorname{vec}(\hat{w} * \hat{f}^l)$. Each row in $C(\hat{w})$ thus contains a cyclic permutation of $\hat{w}$.

The DFT of a real-valued function is known to be Hermitian symmetric. Therefore, minimizing (4) over the set of real-valued filters $f^l$ corresponds to minimizing (5) over the set of Hermitian symmetric DFT coefficients $\hat{f}^l$. We reformulate (6) to an equivalent real-valued optimization problem, to ensure faster convergence by preserving the Hermitian symmetry. Let $\rho : \Omega \to \Omega$, $\rho(m, n) = (-m \bmod M, -n \bmod N)$ be the point-reflection, and let $\Omega_+$ and $\Omega_-$ partition the frequencies not fixed by $\rho$ into pairs interchanged by $\rho$. We define the reformulated filter

$$\tilde{f}^l(m, n) = \begin{cases} \dfrac{1}{\sqrt{2}}\left( \hat{f}^l(m, n) + \hat{f}^l(\rho(m, n)) \right), & (m, n) \in \Omega_+ \\[2mm] \dfrac{1}{i\sqrt{2}}\left( \hat{f}^l(m, n) - \hat{f}^l(\rho(m, n)) \right), & (m, n) \in \Omega_- \\[2mm] \hat{f}^l(m, n), & \rho(m, n) = (m, n) \end{cases} \tag{7}$$

such that $\tilde{f}^l$ is real-valued by the Hermitian symmetry $\hat{f}^l(\rho(m, n)) = \overline{\hat{f}^l(m, n)}$ of $\hat{f}^l$. Here, $i$ denotes the imaginary unit. Eq. 7 can be expressed by a unitary $MN \times MN$ matrix $B$ such that $\tilde{\mathbf{f}}^l = B \hat{\mathbf{f}}^l$. By (7), $B$ contains at most two non-zero entries in each row.

The reformulated variables from (6) are defined as $\tilde{D}_k^l := B \mathcal{D}(\hat{\mathbf{x}}_k^l) B^{\mathrm{H}}$, $\tilde{\mathbf{y}}_k := B \hat{\mathbf{y}}_k$ and $\tilde{C} := \frac{1}{MN} B C(\hat{w}) B^{\mathrm{H}}$, where $B^{\mathrm{H}}$ denotes the conjugate transpose of a matrix. Since $B$ is unitary, (6) can equivalently be expressed as,

$$\varepsilon(\tilde{\mathbf{f}}) = \sum_{k=1}^{t} \alpha_k \left\| \sum_{l=1}^{d} \tilde{D}_k^l\, \tilde{\mathbf{f}}^l - \tilde{\mathbf{y}}_k \right\|^2 + \sum_{l=1}^{d} \left\| \tilde{C}\, \tilde{\mathbf{f}}^l \right\|^2. \tag{8}$$

All variables in (8) are real-valued. The loss function (8) is then simplified by defining the fully vectorized real-valued filter as the concatenation $\tilde{\mathbf{f}} := \left( (\tilde{\mathbf{f}}^1)^{\mathrm{T}} \cdots (\tilde{\mathbf{f}}^d)^{\mathrm{T}} \right)^{\mathrm{T}}$,

$$\varepsilon(\tilde{\mathbf{f}}) = \sum_{k=1}^{t} \alpha_k \left\| \tilde{D}_k\, \tilde{\mathbf{f}} - \tilde{\mathbf{y}}_k \right\|^2 + \left\| \tilde{W} \tilde{\mathbf{f}} \right\|^2. \tag{9}$$

Here we have defined the concatenation $\tilde{D}_k := \left( \tilde{D}_k^1 \cdots \tilde{D}_k^d \right)$ and $\tilde{W}$ to be the $dMN \times dMN$ block diagonal matrix with each diagonal block being equal to $\tilde{C}$. Finally, (9) is minimized by solving the normal equations $A_t \tilde{\mathbf{f}} = \tilde{\mathbf{b}}_t$, where

$$A_t = \sum_{k=1}^{t} \alpha_k \tilde{D}_k^{\mathrm{T}} \tilde{D}_k + \tilde{W}^{\mathrm{T}} \tilde{W} \tag{10a}$$

$$\tilde{\mathbf{b}}_t = \sum_{k=1}^{t} \alpha_k \tilde{D}_k^{\mathrm{T}} \tilde{\mathbf{y}}_k. \tag{10b}$$

Here, (10) defines a real $dMN \times dMN$ linear system of equations. The fraction of non-zero elements in $A_t$ is smaller than $\frac{2d + K^2}{dMN}$, where $K$ is the number of non-zero Fourier coefficients in $\hat{w}$. Thus, $A_t$ is sparse if $w$ has a sparse spectrum. The DFT coefficients for the filters are obtained by solving the system (10) and applying $\hat{\mathbf{f}}^l = B^{\mathrm{H}} \tilde{\mathbf{f}}^l$.

Figure 3 visualizes the filter learned by optimizing the standard DCF loss (2) and the proposed formulation (4), using the spatial regularization weights in figure 2. In the standard DCF, large values are spatially distributed over the whole filter. By penalizing filter coefficients corresponding to background, our approach learns a classifier that emphasizes visual information within the target region.

A direct application of a sparse solver to the normal equations $A_t \tilde{\mathbf{f}} = \tilde{\mathbf{b}}_t$ is computationally very demanding, even when the standard regularization (2) is used and the number of features $d$ is small. Next, we propose an efficient optimization scheme to solve the normal equations for online learning scenarios, such as tracking.

### 3.2 Optimization

For the standard DCF formulation (2), the normal equations have a block diagonal structure [13]. However, this block structure is not attainable in our case, due to the structure of the regularization matrix $\tilde{W}$ in (10a). We propose an iterative approach, based on the Gauss-Seidel method, for efficient online computation of the filter coefficients.

The Gauss-Seidel method decomposes the matrix $A_t$ into a lower triangular part $L_t$ and a strictly upper triangular part $U_t$ such that $A_t = L_t + U_t$. The algorithm then proceeds by solving the following triangular system for $\tilde{\mathbf{f}}^{(j)}$ in each iteration $j$,

$$L_t \tilde{\mathbf{f}}^{(j)} = \tilde{\mathbf{b}}_t - U_t \tilde{\mathbf{f}}^{(j-1)}. \tag{11}$$

This lower triangular equation system is solved efficiently using forward substitution and by exploiting the sparsity of $L_t$ and $U_t$. The Gauss-Seidel recursion (11) converges to the solution of $A_t \tilde{\mathbf{f}} = \tilde{\mathbf{b}}_t$ whenever the matrix $A_t$ is symmetric and positive definite. The construction of the weights $w$ (see section 5.1) ensures that both conditions are satisfied.
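A dense toy version of the recursion (11), written as a NumPy sketch (a real implementation would store $A_t$ as a sparse matrix and use sparse forward substitution):

```python
import numpy as np

def gauss_seidel(A, b, f0, iters):
    # Decompose A = L + U: L lower triangular (including the diagonal),
    # U strictly upper triangular, then iterate L f_j = b - U f_{j-1}.
    L = np.tril(A)
    U = np.triu(A, k=1)
    f = f0.copy()
    for _ in range(iters):
        f = np.linalg.solve(L, b - U @ f)  # forward substitution in practice
    return f
```

For a symmetric positive definite system the iterates converge to the solution of $A f = b$, which is the condition the weight construction in section 5.1 guarantees.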

## 4 Our Tracking Framework

Here, we describe our tracking framework, based on the Spatially Regularized Discriminative Correlation Filters (SRDCF) proposed in section 3.

### 4.1 Training

At the training stage, the model is updated by first extracting a new training sample $x_t$ centered at the target location. Here, $t$ denotes the current frame number. We then update $A_t$ and $\tilde{\mathbf{b}}_t$ in (10) with a learning rate $\gamma \in [0, 1]$,

$$A_t = (1 - \gamma) A_{t-1} + \gamma \left( \tilde{D}_t^{\mathrm{T}} \tilde{D}_t + \tilde{W}^{\mathrm{T}} \tilde{W} \right) \tag{12a}$$

$$\tilde{\mathbf{b}}_t = (1 - \gamma) \tilde{\mathbf{b}}_{t-1} + \gamma \tilde{D}_t^{\mathrm{T}} \tilde{\mathbf{y}}_t. \tag{12b}$$

This corresponds to using exponentially decaying weights $\alpha_k$ in the loss function (4). In the first frame, we set $A_1 = \tilde{D}_1^{\mathrm{T}} \tilde{D}_1 + \tilde{W}^{\mathrm{T}} \tilde{W}$ and $\tilde{\mathbf{b}}_1 = \tilde{D}_1^{\mathrm{T}} \tilde{\mathbf{y}}_1$. Note that the regularization matrix $\tilde{W}^{\mathrm{T}} \tilde{W}$ can be precomputed once for the entire sequence. The update strategy (12) ensures memory efficiency, since it does not require storage of all samples $x_k$. After the model update (12), we perform a fixed number $N_{\mathrm{GS}}$ of Gauss-Seidel iterations (11) per frame to compute the new filter coefficients.
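The claim that the running update corresponds to exponentially decaying sample weights can be verified directly; shown here for the right-hand side $\tilde{\mathbf{b}}_t$, with illustrative random vectors `d[k]` standing in for the per-frame terms:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, T = 0.25, 6
d = rng.standard_normal((T, 4))  # stand-ins for the per-frame contributions

# Running update: b_1 = d_1, then b_t = (1 - gamma) b_{t-1} + gamma d_t.
b = d[0].copy()
for t in range(1, T):
    b = (1 - gamma) * b + gamma * d[t]

# Equivalent closed form: exponentially decaying weights alpha_k.
alpha = np.array([(1 - gamma) ** (T - 1)] +
                 [gamma * (1 - gamma) ** (T - 1 - k) for k in range(1, T)])
b_direct = alpha @ d
assert np.allclose(b, b_direct)
```

The weights also sum to one, so the recursive update never requires storing past samples, only the accumulated system.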

For the initial iteration in frame $t$, we use the filter computed in the previous frame, i.e. $\tilde{\mathbf{f}}_t^{(0)} = \tilde{\mathbf{f}}_{t-1}^{(N_{\mathrm{GS}})}$. In the first frame, the initial estimate is obtained by solving the linear system,

$$\left( \mathcal{D}(\hat{\mathbf{x}}_1^l)^{\mathrm{H}} \mathcal{D}(\hat{\mathbf{x}}_1^l) + \frac{1}{(MN)^2} C(\hat{w})^{\mathrm{H}} C(\hat{w}) \right) \hat{\mathbf{f}}_1^{l,(0)} = \mathcal{D}(\hat{\mathbf{x}}_1^l)^{\mathrm{H}} \hat{\mathbf{y}}_1, \quad l = 1, \ldots, d \tag{13}$$

for the filter layers $\hat{\mathbf{f}}_1^{l,(0)}$. This provides a starting point for the Gauss-Seidel optimization in the first frame. The $d$ systems in (13) share the same sparse coefficient structure and can be solved efficiently with a direct sparse solver.

### 4.2 Detection

At the detection stage, the location of the target in a new frame is estimated by applying the filter $\hat{f}_{t-1}$ that has been updated in the previous frame. Similar to [24], we apply the filter at multiple resolutions to estimate changes in the target size. The samples $z_r$ are extracted centered at the previous target location and at the scales $a^r$ relative to the current target scale, for $r \in \left\{ \left\lfloor -\frac{S-1}{2} \right\rfloor, \ldots, \left\lfloor \frac{S-1}{2} \right\rfloor \right\}$. Here, $S$ denotes the number of scales and $a$ is the scale increment factor. The sample $z_r$ is constructed by resizing the image according to the scale factor $a^r$ before the feature computation.

Fast Sub-grid Detection: Generally, the training and detection samples $x_k$ and $z$ are constructed using a grid strategy with a stride greater than one pixel. This leads to only computing the detection scores (3) on a coarser grid. We employ an interpolation approach that allows computation of pixel-dense detection scores. The detection scores (3) are efficiently interpolated with trigonometric polynomials by utilizing the computed DFT coefficients. Let $\hat{s} := \mathscr{F}\{S_f(z)\}$ be the DFT of the detection scores evaluated at the sample $z$. The detection scores $s(u, v)$ at the continuous locations $(u, v) \in [0, M) \times [0, N)$ are interpolated as,

$$s(u, v) = \frac{1}{MN} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \hat{s}(m, n)\, e^{i 2\pi \left( \frac{m}{M} u + \frac{n}{N} v \right)}. \tag{14}$$

Here, $i$ denotes the imaginary unit. We aim to find the sub-grid location that corresponds to the maximum score: $(u^*, v^*) = \arg\max_{(u,v)} s(u, v)$. The scores are first evaluated at all grid locations using (3). The location $(u^{(0)}, v^{(0)})$ of the maximal grid score is used as the initial estimate. We then iteratively maximize (14) using Newton's method, starting at $(u^{(0)}, v^{(0)})$. The gradient and Hessian in each iteration are computed by analytically differentiating (14). We found that only a few iterations are sufficient for convergence.
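The sub-grid step can be sketched as follows (a NumPy illustration, not the authors' code; it uses signed integer frequencies so the interpolant is real-valued, and the band-limited test signal and iteration count are illustrative):

```python
import numpy as np

def newton_refine(s_hat, u, v, iters=5):
    """Maximize the trigonometric interpolation of the score DFT s_hat by
    Newton's method, using analytic gradients of the Fourier series."""
    M, N = s_hat.shape
    km = (np.fft.fftfreq(M) * M)[:, None]  # signed integer frequencies
    kn = (np.fft.fftfreq(N) * N)[None, :]
    for _ in range(iters):
        E = np.exp(2j * np.pi * (km * u / M + kn * v / N))
        T = s_hat * E / (M * N)            # terms of the interpolant at (u, v)
        du, dv = 2j * np.pi * km / M, 2j * np.pi * kn / N
        g = np.array([np.sum(T * du).real, np.sum(T * dv).real])       # gradient
        H = np.array([[np.sum(T * du * du).real, np.sum(T * du * dv).real],
                      [np.sum(T * du * dv).real, np.sum(T * dv * dv).real]])
        u, v = np.array([u, v]) - np.linalg.solve(H, g)                # Newton step
    return u, v
```

Starting from the best grid location, a few iterations recover a fractional peak position of a band-limited score map to high accuracy.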

We apply the sub-grid interpolation strategy to maximize the classification scores computed at each sample $z_r$. The procedure is applied for each scale level independently. The scale level with the highest maximal detection score is then used to update the target location and scale.

Excluding the feature extraction, the computational cost of our tracker is dominated by the filter optimization: each of the $N_{\mathrm{GS}}$ Gauss-Seidel iterations (11) scales linearly with the number of non-zero elements in $A_t$. The remaining cost stems from the FFT computations, $\mathcal{O}(dSMN \log MN)$, and the $N_{\mathrm{Ne}}$ Newton iterations in the sub-grid detection.

## 5 Experiments

Here, we present a comprehensive evaluation of the proposed method. Results are reported on four benchmark datasets: OTB-2013, OTB-2015, ALOV++ and VOT2014.

### 5.1 Details and Parameters

The weight function $w$ is constructed by starting from a quadratic function $w(m, n) = \mu + \eta \left( \left(\frac{m}{P}\right)^2 + \left(\frac{n}{Q}\right)^2 \right)$ with the minimum located at the sample center. Here $P \times Q$ denotes the target size, while the parameters $\mu$ and $\eta$ control the minimum value of $w$ and the impact of the regularizer; both are set to fixed values for all experiments. In practice, only a few DFT coefficients in the resulting function have a significant magnitude. We simply remove all DFT coefficients smaller than a threshold to ensure a sparse spectrum $\hat{w}$, containing about 10 non-zero coefficients. Figure 2 visualizes the resulting weight function $w$ used in the optimization.
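The sparsity claim can be illustrated numerically. The sketch below builds a quadratic weight function of the form described above (the parameter values `mu` and `eta` are illustrative, not the paper's): because $w$ is a separable sum of 1-D quadratics, its 2-D DFT is non-zero only along the two frequency axes, so thresholding leaves few coefficients.

```python
import numpy as np

def make_weights(M, N, P, Q, mu=0.1, eta=3.0):
    # Quadratic regularization weights: minimum mu at the sample center,
    # growing with the squared distance relative to the target size P x Q.
    # mu and eta are illustrative values.
    m = np.arange(M) - M // 2
    n = np.arange(N) - N // 2
    return mu + eta * ((m[:, None] / P) ** 2 + (n[None, :] / Q) ** 2)

w = make_weights(50, 50, 10, 10)
w_hat = np.fft.fft2(w)
# The spectrum is concentrated on the frequency axes: few coefficients matter.
significant = np.abs(w_hat) > 1e-6 * np.abs(w_hat).max()
```

Thresholding `w_hat` more aggressively, as described in the text, shrinks the set of kept coefficients further at a negligible cost in the spatial shape of $w$.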

Similar to recent DCF based trackers [8, 20, 24], we also employ HOG features, using a cell size of $4 \times 4$ pixels. Samples are represented by a square $M \times N$ grid of cells (i.e. $M = N$), such that the corresponding image area is proportional to the area of the target bounding box. We set the image region area of the samples to $4^2$ times the target area and set the initial scale to ensure a maximum sample size of $M = N = 50$ cells. Samples are multiplied by a Hann window [5].
We set the label function $y_k$ to a sampled Gaussian with a standard deviation proportional to the target size [8, 19]. The learning rate is set to $\gamma = 0.025$ and we use $N_{\mathrm{GS}} = 4$ Gauss-Seidel iterations per frame. All parameters remain fixed for all videos and datasets. Our Matlab implementation² runs at 5 frames per second on a standard desktop computer.

²Available at http://www.cvl.isy.liu.se/research/objrec/visualtracking/regvistrack/index.html.

### 5.2 Baseline Comparison

Here, we evaluate the impact of the proposed spatial regularization component and compare it with the standard DCF formulation. First, we investigate the consequence of simply replacing the proposed regularizer with the standard DCF regularization in (2), without altering any parameters. This corresponds to using uniform regularization weights $w(m, n) = \sqrt{\lambda}$ in our framework, with $\lambda$ set following [8, 10, 19]. For a fair comparison, we also evaluate both our and the standard regularization using a smaller sample size relative to the target, set as in [8, 10, 19].

Table 1 shows the mean overlap precision (OP) for the four methods on the OTB-2013 dataset. The OP is computed as the fraction of frames in the sequence where the intersection-over-union overlap with the ground truth exceeds a threshold of 0.5 (PASCAL criterion). The standard DCF benefits from using smaller samples to avoid corrupting the positive training samples with background information. On the other hand, the proposed spatial regularization enables an expansion of the image region used for training the filter, without corrupting the target model. This leads to a more discriminative model, resulting in a gain of 7.0% in mean OP compared to the standard DCF formulation.
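For reference, the OP metric used throughout the experiments can be sketched as follows (boxes in (x, y, width, height) format; function names are illustrative):

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two (x, y, w, h) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def overlap_precision(pred, gt, thresh=0.5):
    # Fraction of frames whose IoU with the ground truth exceeds the
    # PASCAL threshold (0.5).
    return float(np.mean([iou(p, g) > thresh for p, g in zip(pred, gt)]))
```

Averaging this quantity over the sequences of a dataset gives the mean OP values reported in the tables.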

Additionally, we compare our method with Correlation Filters with Limited Boundaries (CFLB) [14]. For a fair comparison, we use the same settings as in [14] for our approach: a single grayscale channel, no scale estimation, no sub-grid detection and the same sample size. On OTB-2013, this baseline version of our tracker outperforms the CFLB in mean OP.

| Regularization | Standard (conventional size) | Ours (conventional size) | Standard (expanded size) | Ours (expanded size) |
|---|---|---|---|---|
| Mean OP (%) | 71.1 | 72.2 | 50.1 | **78.1** |

Table 1: Baseline comparison on OTB-2013, using conventional and expanded sample sizes. The best result is shown in bold.

### 5.3 OTB-2013 Dataset

Table 2: Mean OP (%) comparison with top trackers on OTB-2013 and OTB-2015. The best and second best results are shown in bold and italics, respectively.

| | LSHT | ASLA | Struck | ACT | TGPR | KCF | DSST | SAMF | MEEM | SRDCF |
|---|---|---|---|---|---|---|---|---|---|---|
| OTB-2013 | 47.0 | 56.4 | 58.8 | 52.6 | 62.6 | 62.3 | 67.0 | 69.7 | *70.1* | **78.1** |
| OTB-2015 | 40.0 | 49.0 | 52.9 | 49.6 | 54.0 | 54.9 | 60.6 | *64.7* | 63.4 | **72.9** |

We provide a comparison of our tracker with 24 state-of-the-art methods from the literature: MIL [2], IVT [28], CT [36], TLD [22], DFT [29], EDFT [12], ASLA [21], L1APG [3], CSK [19], SCM [37], LOT [26], CPF [27], CXT [11], Frag [1], Struck [16], LSHT [17], LSST [32], ACT [10], KCF [20], CFLB [14], DSST [8], SAMF [24], TGPR [15] and MEEM [35].

#### 5.3.1 State-of-the-art Comparison

Table 2 shows a comparison with state-of-the-art methods on the OTB-2013 dataset, using mean overlap precision (OP) over all 50 videos. Only the results for the top 10 trackers are reported. The MEEM tracker, based on an online SVM, provides the second best results with a mean OP of 70.1%. The best result on this dataset is obtained by our tracker with a mean OP of 78.1%, leading to a significant gain of 8.0% compared to MEEM.

Figure 4(a) shows the success plot over all the 50 videos in OTB-2013. The success plot shows the mean overlap precision (OP), plotted over the range of intersection-over-union thresholds. The trackers are ranked using the *area under the curve* (AUC), displayed in the legend.
Among previous DCF based trackers, DSST and SAMF provide the best performance. Our approach obtains the best AUC score and significantly outperforms the best existing tracker (SAMF).

#### 5.3.2 Robustness to Initialization

Visual tracking methods are known to be sensitive to initialization. We evaluate the robustness of our tracker by following the protocol proposed in [33]. Two different types of initialization criteria, namely temporal robustness (TRE) and spatial robustness (SRE), are evaluated. The SRE corresponds to tracker initialization at different positions close to the ground-truth in the first frame. The procedure is repeated with 12 different initializations for each video in the dataset. The TRE criterion evaluates the tracker by initializing it at 20 different frames, using the ground-truth at each starting frame.

[Figure 6: qualitative comparison of our approach with existing trackers on the *Soccer*, *Human6* and *Tiger2* videos. Our approach provides consistent results in challenging scenarios, such as occlusions, fast motion, background clutter and target rotations.]

Figure 5 shows the success plots for TRE and SRE on the OTB-2013 dataset with 50 videos. We include the top 7 trackers from the state-of-the-art comparison in this experiment. Among the existing methods, SAMF and MEEM provide the best results. Our SRDCF achieves a consistent gain in performance over these trackers on both robustness evaluations.

#### 5.3.3 Attribute Based Comparison

We perform an attribute based analysis of our approach on the OTB-2013 dataset. All the 50 videos in OTB-2013 are annotated with 11 different attributes, namely: illumination variation, scale variation, occlusion, deformation, motion blur, fast motion, in-plane rotation, out-of-plane rotation, out-of-view, background clutter and low resolution. Our approach outperforms existing trackers on 10 attributes.

Figure 7 shows example success plots of four different attributes. Only the top 10 trackers in each plot are displayed for clarity. In case of out-of-plane rotations, MEEM achieves the best AUC score among the existing methods; our tracker provides a clear gain over MEEM. Among the existing methods, the two DCF based trackers DSST and SAMF provide the best results in case of scale variation. Both these trackers are designed to handle scale variations. Our approach achieves a significant gain over DSST. Note that the standard DCF trackers struggle in the cases of motion blur and fast motion due to the restricted search area. This is caused by the induced boundary effects in the detection samples of the standard DCF trackers. Our approach significantly improves the performance compared to the standard DCF based trackers in these cases. Figure 6 shows a qualitative comparison of our approach with existing methods on challenging example videos. Despite no explicit occlusion handling component, our tracker performs favorably in cases with occlusion.

### 5.4 OTB-2015 Dataset

We provide a comparison of our approach on the recently introduced OTB-2015 dataset. The dataset extends OTB-2013 and contains 100 videos. Table 2 shows the comparison with the top 10 methods, using mean overlap precision (OP) over all 100 videos. Among the existing methods, SAMF and MEEM provide the best results with mean OP of 64.7% and 63.4% respectively. Our tracker outperforms the best existing tracker by 8.2% in mean OP.

Figure 4(b) shows the success plot over all the 100 videos. Among the standard DCF trackers, SAMF provides the best AUC score, while MEEM is the strongest non-DCF competitor. Our tracker obtains the best AUC score, outperforming SAMF by a significant margin.

### 5.5 ALOV++ Dataset

We also perform experiments on the ALOV++ dataset [30], containing 314 videos with 89364 frames in total. The evaluation protocol employs survival curves based on F-score, where a higher F-score indicates better performance. The survival curve is constructed by plotting the sorted F-scores of all 314 videos. We refer to [30] for details.

Our approach is compared with the 19 trackers evaluated in [30]. We also add the top 5 methods from our OTB comparison. Figure 8 shows the survival curves and the average F-scores of the trackers. Among the existing methods, MEEM obtains the best mean F-score. Our approach obtains the best overall performance among the 24 compared trackers in terms of mean F-score.

### 5.6 VOT2014 Dataset

Finally, we present results on VOT2014 [23]. Our approach is compared with the 38 participating trackers in the challenge. We also add MEEM in the comparison. In VOT2014, the trackers are evaluated both in terms of accuracy and robustness. The accuracy score is based on the overlap with ground truth, while the robustness is determined by the failure rate. The trackers are restarted at each failure. The final rank is based on the accuracy and robustness in each video. We refer to [23] for details.

Table 3 shows the final ranking scores over all the videos in VOT2014. Among the existing methods, the DSST approach provides the best results. Our tracker achieves the top final rank of 8.26, outperforming DSST and SAMF.

| | Overlap | Failures | Acc. Rank | Rob. Rank | Final Rank |
|---|---|---|---|---|---|
| SRDCF | 0.63 | 15.90 | 6.43 | 10.08 | **8.26** |
| DSST | 0.64 | 16.90 | 5.99 | 11.17 | *8.58* |
| SAMF | 0.64 | 19.23 | 5.87 | 14.14 | 10.00 |

Table 3: Comparison on VOT2014. The best and second best final ranks are shown in bold and italics, respectively.

## 6 Conclusions

We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) to address the limitations of the standard DCF. The introduced spatial regularization component enables the correlation filter to be learned on larger image regions, leading to a more discriminative appearance model. By exploiting the sparsity of the regularization operation in the Fourier domain, we derive an efficient optimization strategy for learning the filter. The proposed learning procedure employs the Gauss-Seidel method to solve for the filter in the Fourier domain. We perform comprehensive experiments on four benchmark datasets. Our SRDCF outperforms existing trackers on all four datasets.

Acknowledgments: This work has been supported by SSF (CUAS) and VR (VIDI, EMC, ELLIIT, and CADICS).

## References

- [1] A. Adam, E. Rivlin, and I. Shimshoni. Robust fragments-based tracking using the integral histogram. In CVPR, 2006.
- [2] B. Babenko, M.-H. Yang, and S. Belongie. Visual tracking with online multiple instance learning. In CVPR, 2009.
- [3] C. Bao, Y. Wu, H. Ling, and H. Ji. Real time robust l1 tracker using accelerated proximal gradient approach. In CVPR, 2012.
- [4] V. N. Boddeti, T. Kanade, and B. V. K. V. Kumar. Correlation filters for object alignment. In CVPR, 2013.
- [5] D. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui. Visual object tracking using adaptive correlation filters. In CVPR, 2010.
- [6] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
- [7] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
- [8] M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg. Accurate scale estimation for robust visual tracking. In BMVC, 2014.
- [9] M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg. Coloring channel representations for visual tracking. In SCIA, 2015.
- [10] M. Danelljan, F. S. Khan, M. Felsberg, and J. van de Weijer. Adaptive color attributes for real-time visual tracking. In CVPR, 2014.
- [11] T. B. Dinh, N. Vo, and G. Medioni. Context tracker: Exploring supporters and distracters in unconstrained environments. In CVPR, 2011.
- [12] M. Felsberg. Enhanced distribution field tracking using channel representations. In ICCV Workshop, 2013.
- [13] H. K. Galoogahi, T. Sim, and S. Lucey. Multi-channel correlation filters. In ICCV, 2013.
- [14] H. K. Galoogahi, T. Sim, and S. Lucey. Correlation filters with limited boundaries. In CVPR, 2015.
- [15] J. Gao, H. Ling, W. Hu, and J. Xing. Transfer learning based visual tracking with gaussian process regression. In ECCV, 2014.
- [16] S. Hare, A. Saffari, and P. Torr. Struck: Structured output tracking with kernels. In ICCV, 2011.
- [17] S. He, Q. Yang, R. Lau, J. Wang, and M.-H. Yang. Visual tracking via locality sensitive histograms. In CVPR, 2013.
- [18] J. F. Henriques, J. Carreira, R. Caseiro, and J. Batista. Beyond hard negative mining: Efficient detector learning via block-circulant decomposition. In ICCV, 2013.
- [19] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista. Exploiting the circulant structure of tracking-by-detection with kernels. In ECCV, 2012.
- [20] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista. High-speed tracking with kernelized correlation filters. PAMI, 2015.
- [21] X. Jia, H. Lu, and M.-H. Yang. Visual tracking via adaptive structural local sparse appearance model. In CVPR, 2012.
- [22] Z. Kalal, J. Matas, and K. Mikolajczyk. P-n learning: Bootstrapping binary classifiers by structural constraints. In CVPR, 2010.
- [23] M. Kristan, R. Pflugfelder, A. Leonardis, J. Matas, and et al. The visual object tracking vot2014 challenge results. In ECCV Workshop, 2014.
- [24] Y. Li and J. Zhu. A scale adaptive kernel correlation filter tracker with feature integration. In ECCV Workshop, 2014.
- [25] OpenCV. The opencv state of the art vision challenge. http://code.opencv.org/projects/opencv/wiki/VisionChallenge. Accessed: 2015-09-17.
- [26] S. Oron, A. Bar-Hillel, D. Levi, and S. Avidan. Locally orderless tracking. In CVPR, 2012.
- [27] P. Perez, C. Hue, J. Vermaak, and M. Gangnet. Color-based probabilistic tracking. In ECCV, 2002.
- [28] D. Ross, J. Lim, R.-S. Lin, and M.-H. Yang. Incremental learning for robust visual tracking. IJCV, 77(1):125–141, 2008.
- [29] L. Sevilla-Lara and E. G. Learned-Miller. Distribution fields for tracking. In CVPR, 2012.
- [30] A. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah. Visual tracking: An experimental survey. PAMI, 36(7):1442–1468, 2014.
- [31] J. van de Weijer, C. Schmid, J. J. Verbeek, and D. Larlus. Learning color names for real-world applications. TIP, 18(7):1512–1524, 2009.
- [32] D. Wang, H. Lu, and M.-H. Yang. Least soft-threshold squares tracking. In CVPR, 2013.
- [33] Y. Wu, J. Lim, and M.-H. Yang. Online object tracking: A benchmark. In CVPR, 2013.
- [34] Y. Wu, J. Lim, and M.-H. Yang. Object tracking benchmark. PAMI, 2015.
- [35] J. Zhang, S. Ma, and S. Sclaroff. MEEM: robust tracking via multiple experts using entropy minimization. In ECCV, 2014.
- [36] K. Zhang, L. Zhang, and M. Yang. Real-time compressive tracking. In ECCV, 2012.
- [37] W. Zhong, H. Lu, and M.-H. Yang. Robust object tracking via sparsity-based collaborative model. In CVPR, 2012.