Visual Tracking Using Sparse Coding and Earth Mover's Distance

An efficient iterative Earth Mover's Distance (iEMD) algorithm for visual tracking is proposed in this paper. The Earth Mover's Distance (EMD) is used as the similarity measure to search for the optimal template candidate in the feature-spatial space over a video sequence. A local sparse representation is used as the appearance model for the iEMD tracker. The maximum-alignment-pooling method is used to construct a sparse coding histogram, which reduces the computational complexity of the EMD optimization. A template update algorithm based on the EMD is also presented. When the camera is mounted on a moving robot, e.g., a flying quadcopter, it can experience sudden and rapid motion leading to large inter-frame displacements. To ensure that the tracking algorithm converges, a gyro-aided extension of the iEMD tracker is presented, in which synchronized gyroscope measurements are used to compensate for the rotation of the camera. The iEMD algorithm's performance is evaluated on eight publicly available videos from the CVPR 2013 benchmark dataset and compared with seven state-of-the-art tracking algorithms in terms of relative percentage overlap. Experimental results show that the iEMD algorithm performs robustly in the presence of illumination variation and deformation. The robustness of the algorithm to large inter-frame displacements is also illustrated.


INTRODUCTION
Visual tracking is an important problem in the field of computer vision. Given a sequence of images, tracking is the procedure of inferring the motion of the target. There are a variety of applications for visual tracking. The information generated from these images by the tracking algorithm can be utilized by vehicle navigation, human-robot interaction, and motion-based recognition algorithms (Dani et al., 2013; Ravichandar and Dani, 2015; Chwa et al., 2016). Visual tracking algorithms also provide important information for visual simultaneous localization and mapping (SLAM), structure from motion (SfM), and video-based control (Dani et al., 2012; Yang et al., 2015; Davison et al., 2007).
Image-based tracking algorithms are categorized as point tracking, kernel tracking, or silhouette tracking (Yilmaz et al., 2006). Distinguishing features, such as color, shape, and region, are selected to identify objects for visual tracking. Building an object model that adapts to slowly changing appearance is challenging, due to illumination variations, object deformation, occlusion, motion blur, and background clutter. Supervised or unsupervised online learning algorithms are often used to robustly find and update the distinguishing features of the object, for example using variance ratios of the feature values' log likelihood (Collins et al., 2005), the online Ada-boost feature selection method (Grabner and Bischof, 2006), and incremental learning (Ross et al., 2008).
Approaches to visual tracking can generally be classified into two groups: generative methods and discriminative methods. In generative methods, the tracked object is modeled based on selected features, such as a color histogram, a sparse coding representation, or kernels. Then, a correspondence or similarity measure between the target and the candidate across frames is constructed. Similarity measures are derived through several methods, such as the Normalized Cross Correlation (NCC) (Bolme et al., 2010; Zhu et al., 2016), the Earth Mover's Distance (EMD) (Zhao et al., 2010; Oron et al., 2012; Karavasilis et al., 2011), the Bhattacharyya Coefficient (BC) (Comaniciu et al., 2003), and point-to-set distance metrics (Wang et al., 2015, 2016). The location of the candidate object in consecutive frames is estimated using a Kalman filter, a particle filter, or a gradient descent method. Discriminative methods regard tracking as a classification problem and build a classifier, or an ensemble of classifiers, to distinguish the object from the background. Representative classification-based tracking algorithms include the structured Support Vector Machine (SVM) (Hare et al., 2011) and Convolutional Neural Nets (Li et al., 2016), as well as ensemble-based algorithms such as ensemble tracking (Avidan, 2007), multiple instance learning (MIL) (Babenko et al., 2011), and the online boosting tracker (Grabner and Bischof, 2006).
In order to robustly track moving objects in challenging situations, many tracking frameworks have been proposed. Tracking algorithms with Bayesian filtering are developed to track moving objects; these algorithms can handle complete occlusion (Zivkovic et al., 2009). Non-adaptive methods usually model the object only from the first frame. Although less prone to errors from occlusion and drift, they struggle to track objects undergoing appearance variations. Adaptive methods, in contrast, are prone to drift because they rely on self-updates of an online learning method. To deal with this problem, combining adaptive methods with complementary tracking approaches leads to more stable results. For example, the parallel robust online simple tracking (PROST) framework combines three different trackers (Santner et al., 2010); the tracking-learning-detection (TLD) framework uses P-N experts to decide the location of the moving object based on the results from the Median-Flow tracker and detectors (Kalal et al., 2012); and an online adaptive hidden Markov model is used for multi-tracker fusion (Vojir et al., 2016).
The emphasis of this paper is on the similarity measure and target localization. The EMD is adopted as the similarity measure, and an efficient iterative EMD algorithm is proposed for visual tracking. The contributions of the paper are summarized as follows:
• The maximum-alignment-pooling method for local sparse coding is used to build a histogram-based appearance model. An iEMD tracking algorithm is developed based on this local sparse coding representation of the appearance model. Using videos from publicly available benchmark datasets, it is shown that the iEMD tracker performs well in terms of percentage overlap compared to state-of-the-art trackers in the literature.
• Gyro measurements are used to compensate for the pan, tilt, and roll of the camera. The iEMD visual tracking algorithm is then used to track the target after compensating for the movement of the camera. This ensures the convergence of the algorithm, yielding a more robust tracker that is better suited to real-world tracking tasks.
The paper is organized as follows. Related work on the computation of the EMD and its application to visual tracking is reviewed in Section 2. In Section 3, the iEMD algorithm for visual tracking is developed.
In Section 4, the target is modeled as a sparse coding histogram, and the maximum-alignment-pooling method is proposed to represent the local image patches. In Section 5, two extensions of the iEMD algorithm are discussed: the template update method and the use of gyroscope data for ego-motion compensation. In Section 6, the iEMD tracker is validated on eight publicly available datasets, and comparisons with seven state-of-the-art trackers are shown.
Experimental results using the gyro-aided iEMD algorithm are compared with tracking results without gyroscope information. Conclusions are given in Section 7.

RELATED WORK
In real-world tracking applications, variations in appearance are a common phenomenon caused by illumination changes, moderate pose changes, or partial occlusions. The Earth Mover's Distance (EMD), also known as the 1-Wasserstein distance (Baum et al., 2015; Guerriero et al., 2010), is a similarity measure that is robust in these situations (Rubner et al., 2000). However, the major problem with the EMD is its computational complexity. Several algorithms for the efficient computation of the EMD have been proposed.
For example, the EMD-L1 algorithm is used for histogram comparison (Ling and Okada, 2007), and EMDs are computed with thresholded ground distances (Pele and Werman, 2009). In the context of visual tracking, although the EMD has the merit of being robust to moderate appearance variations, the efficiency of its computation is still a problem. Since solving the EMD is a transportation problem, i.e., a linear programming problem (Rubner et al., 2000), the direct differential method cannot be used. There have been several efforts to employ the EMD for object tracking. The Differential Earth Mover's Distance (DEMD) algorithm (Zhao et al., 2010) was first proposed for visual tracking; it adopts sensitivity analysis to approximate the derivative of the EMD. However, the selection of the basic variables and the process of identifying and deleting the redundant constraints still affect the efficiency of the algorithm (Zhao et al., 2010). The DEMD algorithm combined with the Gaussian Mixture Model (GMM), which has fewer parameters for the EMD optimization, is proposed in (Karavasilis et al., 2011). The EMD as the similarity measure combined with a particle filter for visual tracking is proposed in (Oron et al., 2012).
Sparse coding has been successfully applied to visual tracking (Zhang et al., 2013). In sparse coding for visual tracking, the largest sum of the sparse coefficients or the smallest reconstruction error is used as the metric to find the target from the candidate templates using a particle filter (Mei and Ling, 2009; Jia et al., 2016). The sparse coding process is usually an L1-norm minimization problem, which makes the sparse representation and dictionary learning computationally expensive. To reduce the computational complexity, the sparse representation as the appearance model is combined with the Mean-shift (Liu et al., 2011) or Mean-transform method (Zhang and Hong Wong, 2014). After a small number of iterations of these methods, the maximum value of the Bhattacharyya coefficient, corresponding to the best candidate, is obtained.
The success of gradient descent based tracking algorithms depends on the assumption that the object motion is smooth and contains only small displacements (Yilmaz et al., 2006). In practice, however, this assumption is often violated by abrupt rotation and shaking of a camera mounted on a robot, such as a flying quadcopter. Efforts have been made to combine gyroscope data with tracking algorithms, such as the Kanade-Lucas-Tomasi (KLT) tracker or the MI tracker (Hwangbo et al., 2011; Ravichandar and Dani, 2014; Park et al., 2013). To robustly track a static object using a moving camera, gyroscope data are directly utilized to estimate the initial location of the static object. When both the camera and the tracked object are in motion, the gyroscope data are utilized to compensate for the rotation of the camera, because rotation has a greater impact on positional changes between video frames than translation. Then, the visual tracking algorithm is applied to track the moving object. The robustness of the tracking algorithm is improved by this compensation of the camera's ego-motion. Our method uses the same idea to make the EMD tracker more robust to such situations.

ITERATIVE EMD TRACKING ALGORITHM
In the context of visual tracking, first a feature space is chosen to characterize the object; then, the target model and the candidate model are built in the feature-spatial space. The probability density functions (histograms) representing the target model and the candidate model are (Comaniciu et al., 2003) the target model p = {p_u}_{u=1,...,N_T} and the candidate model q(y) = {q_v(y)}_{v=1,...,N_C}, where p_u is the weight of the uth bin of the target model p, assuming the center of the template target is at (0, 0); q_v is the weight of the vth bin of the candidate model q(y), assuming the center of the template candidate is at y; and N_T and N_C are the numbers of bins.
Based on the target model and the candidate model, the dissimilarity function is denoted as f(p, q(y)). The optimization problem for tracking is to estimate the optimal displacement ŷ that gives the smallest value of f(p, q(y)). Thus, the optimization problem is formulated as

ŷ = arg min_y f(p, q(y)). (1)

In (1), the center of the template target is assumed to be positioned at (0, 0), and the center of the template candidate is at y. The goal is to find the candidate model located at ŷ that gives the smallest value of the dissimilarity function f(p, q(y)). Differential tracking approaches are usually applied to solve this optimization problem, under the assumption that the displacement of the target between two consecutive frames is very small.
The optimization problem in (1) is solved using the iEMD algorithm as described in the following sub-sections. The iEMD algorithm iterates between finding the smallest EMD between the template target and the template candidate at the current position y_k by the transportation-simplex method (see Section 3.2 for details) and finding the next position y_{k+1} that decreases the EMD by a gradient method (see Section 3.3 for details).
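The alternation above can be sketched in a simplified 1-D setting. This is a toy illustration, not the paper's implementation: SciPy's 1-D Wasserstein distance stands in for the transportation-simplex solver, and one-pixel coordinate moves stand in for the gradient step; the function names are illustrative, and the signal values are assumed to lie in [0, 1].

```python
import numpy as np
from scipy.stats import wasserstein_distance

def patch_hist(patch, bins):
    # Intensity histogram of a 1-D patch (raw counts; the EMD routine normalizes).
    h, _ = np.histogram(patch, bins=bins)
    return h.astype(float)

def iemd_track_1d(signal, target_patch, y0, n_bins=8, max_iter=100):
    """Slide a window over a 1-D signal, descending on the EMD between the
    target histogram and the candidate histogram; stop when the EMD no
    longer decreases, as in the iEMD loop."""
    bins = np.linspace(0.0, 1.0 + 1e-9, n_bins + 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    w = len(target_patch)
    p = patch_hist(target_patch, bins)

    def emd_at(y):
        q = patch_hist(signal[y:y + w], bins)
        return wasserstein_distance(centers, centers, p, q)

    y = y0
    emd_pre = emd_at(y)
    for _ in range(max_iter):
        # One-pixel moves in both directions; keep the one that lowers the EMD most.
        moves = [m for m in (y - 1, y + 1) if 0 <= m <= len(signal) - w]
        y_new = min(moves, key=emd_at)
        if emd_at(y_new) >= emd_pre:
            break  # EMD no longer decreases: converged
        y, emd_pre = y_new, emd_at(y_new)
    return y
```

On a synthetic signal with a bright blob, the window walks from a nearby initial guess onto the blob, mirroring steps (1)-(2) of the iEMD iteration.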

EMD as a Similarity Measure
In this section, the Earth Mover's Distance (EMD) between the target model p and the candidate model q(y) is used as the similarity measure. Solving the EMD is a transportation problem, a linear programming problem, as shown in Fig. 1. Intuitively, given the target model and the candidate model, one is thought of as a set of factories and the other as a set of shops. Suppose that a given amount of goods produced by the factories must be delivered to the shops, each with a given limited capacity. The cost to ship a unit of goods from each factory to the different shops is not the same. The EMD is then the smallest overall cost of sending the weights (goods) from the target model to the candidate model. The EMD is defined as (Rubner et al., 2000)

D = min_{f_uv} Σ_{u=1}^{N_T} Σ_{v=1}^{N_C} f_uv(p, q(y)) d_uv, (2)

subject to

f_uv(p, q(y)) ≥ 0, u = 1, ..., N_T, v = 1, ..., N_C, (3)
Σ_{v=1}^{N_C} f_uv(p, q(y)) ≤ w_{T,u}, u = 1, ..., N_T, (4)
Σ_{u=1}^{N_T} f_uv(p, q(y)) ≤ w_{C,v}, v = 1, ..., N_C, (5)
Σ_{u=1}^{N_T} Σ_{v=1}^{N_C} f_uv(p, q(y)) = min(Σ_u w_{T,u}, Σ_v w_{C,v}), (6)

where D is the optimal value of this transportation problem, f_uv(p, q(y)) is the flow (weight) from the uth bin of p to the vth bin of q(y), d_uv is the ground distance (cost) between the uth and the vth bins, the subscript T denotes the object target and C the object candidate, w_{T,u} is the weight of the uth bin of p, and w_{C,v} is the weight of the vth bin of q(y).
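The transportation problem (2)-(6) can be checked numerically by posing it as a generic linear program. The sketch below assumes normalized histograms of equal total mass, so the inequality constraints (4)-(5) tighten to equalities; the function name is illustrative, and a general-purpose LP solver is used in place of the transportation-simplex method discussed next.

```python
import numpy as np
from scipy.optimize import linprog

def emd_lp(w_t, w_c, d):
    """EMD as a linear program. w_t, w_c: bin weights of the two histograms
    (equal total mass assumed). d: ground-distance matrix with d[u, v] the
    cost between bin u of p and bin v of q."""
    nt, nc = len(w_t), len(w_c)
    c = d.reshape(-1)                        # cost vector, flows flattened row-major
    a_eq = np.zeros((nt + nc, nt * nc))
    for u in range(nt):                      # row sums: sum_v f_uv = w_t[u]
        a_eq[u, u * nc:(u + 1) * nc] = 1.0
    for v in range(nc):                      # column sums: sum_u f_uv = w_c[v]
        a_eq[nt + v, v::nc] = 1.0
    b_eq = np.concatenate([w_t, w_c])
    # One constraint is redundant (see "EMD as a Function of Weights");
    # the solver's presolve handles the rank deficiency.
    res = linprog(c, A_eq=a_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun
```

For two 2-bin histograms placed one bin apart with an absolute-difference ground distance, all mass must move one bin, so the EMD equals 1.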

EMD as a Function of Weights
Writing the equation set (2)-(6) in matrix form gives

D = min_f d^T f, subject to H f = w, f ≥ 0, (7)

where f ∈ R^{N_T N_C} is the vector of flows, d ∈ R^{N_T N_C} is the vector of ground distances, w = [w_{T,1}, ..., w_{T,N_T}, w_{C,1}, ..., w_{C,N_C}]^T is the weight vector, and H ∈ R^{(N_T+N_C)×(N_T N_C)} is the constraint matrix, which consists of 0s and 1s.
In order to relate the EMD to the weight vector, the primal problem in (7) is restated in its dual form as (Dantzig and Thapa, 2006)

D = max_π w^T π, subject to H^T π ≤ d, (8)

where π ∈ R^{N_T+N_C} is the vector of variables to be optimized in the dual problem. By solving this dual problem in (8), the optimal value D can be calculated and directly represented as a linear function of the weights. However, for computational efficiency, the optimal value (the EMD) is first calculated from the primal problem in (7) using the transportation-simplex method, and then the EMD is represented as a function of the weights by a matrix transformation.
Using the transportation-simplex method (Rubner et al., 2000), the optimal solution to the EMD problem in (7) is calculated. The transportation-simplex method is a streamlined simplex algorithm that exploits the special structure of the transportation problem. In order to reduce the number of iterations of the transportation-simplex method, Russell's method is used to compute the initial basic feasible solution (Rubner et al., 2000; Ling and Okada, 2007). The DEMD algorithm (Zhao et al., 2010) uses the standard simplex method to compute the optimal solution to the linear optimization problem in (7). Compared with the standard simplex method, the transportation-simplex method greatly decreases the number of operations (Ling and Okada, 2007). Thus, the iEMD algorithm is more efficient, in terms of the number of operations needed to solve the EMD problem, than the DEMD algorithm of (Zhao et al., 2010).
The computation of the EMD is a transportation problem, which has exactly N_T + N_C equality constraints; each constraint is a linear combination of the other N_T + N_C − 1 constraints, so one of them can be considered redundant and discarded (Dantzig and Thapa, 2006). Based on the optimal solution to the linear programming problem, the flow vector is separated into basic and non-basic variables as f = [f_B^T, f_N^T]^T, and the ground distance vector d and the matrix H are partitioned accordingly as d = [d_B^T, d_N^T]^T and H = [H_B, H_N], with the basis matrix H_B ∈ R^{(N_T+N_C−1)×(N_T+N_C−1)}. In order to derive the EMD as a function of the weights of the candidate model, a matrix transformation is performed. First, the last row of the constraint matrix in (7), which is the redundant constraint, is deleted, and the matrices H_B, H_N, and the vector w are formed accordingly. The problem in (7) is then reformulated at the optimal solution as

D = d_B^T f_B + d_N^T f_N, (9)
H_B f_B + H_N f_N = w. (10)

Left-multiplying (10) with H_B^{-1} gives

f_B = H_B^{-1} (w − H_N f_N). (11)

Left-multiplying (11) by d_B^T and substituting the result into (9) gives

D = d_B^T H_B^{-1} w + (d_N^T − d_B^T H_B^{-1} H_N) f_N, (12)

where the non-basic variables satisfy f_N = 0 at the optimum, so the EMD is a linear function of the weight vector:

D = d_B^T H_B^{-1} w. (13)

Gradient Method to Find the Template Displacement
Based on equation (13), the gradient method is utilized to find the displacement y of the target candidate using

∂D/∂y = d_B^T H_B^{-1} ∂w/∂y, (14)

where only the candidate weights in w depend on y, and the template candidate is moved along the negative gradient direction. The optimal location ŷ of the template candidate q(y) is found by iteratively performing: (1) calculate the smallest EMD and reformulate it as (13); (2) search for the new location of the template candidate along the direction of (14). When the EMD no longer decreases, the iteration stops. By this method, the best match between the template target and the template candidate is found. The EMD plays three roles in this algorithm: (1) it provides a metric of the match between the template target and the template candidate; (2) through the linear optimization, it assigns larger weights to the best matches between the histogram bins and smaller or zero weights to unmatched bins; (3) the matched bins are used for finding the location of the template candidate, and the gradient vector of the EMD is calculated for searching for the optimal displacement.
The pseudo-code for the iEMD tracking algorithm is given in Algorithm 1.

TARGET MODELING BASED ON HISTOGRAMS OF SPARSE CODES
The histogram of sparse codes (HSC) has been widely used as a feature descriptor in many fields (Zhang et al., 2013). Given the image set of the first L image templates from a video, a set of K overlapped local image patches is sampled by a sliding window of size m × n from each template to build a dictionary Φ ∈ R^{(mn)×(LK)}. Each column of Φ is a basis vector, which is a vectorized local image patch extracted from the set of image templates. The basis vectors are overcomplete when mn < LK. Similarly, for a given image template target I, a set of overlapped local image patches E = {r | r ∈ R^{(mn)×1}} is sampled, and each patch is represented as a linear combination of the basis vectors of the dictionary Φ as

r = Φ a_r + n, (15)

where a_r ∈ R^{(LK)×1} is the coefficient vector, which is sparse, and n ∈ R^{(mn)×1} is the noise vector. The coefficient vector a_r is computed by solving the following L1-norm minimization problem (Zhang et al., 2013; Mairal et al., 2014)

min_{a_r} (1/2) ||r − Φ a_r||_2^2 + λ ||a_r||_1, (16)

where a_r collects the sparse coefficients of the local patch, a_ij corresponds to the jth patch of the ith image template of the dictionary, and λ is the Lagrange multiplier.
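The L1-norm problem in (16) can be solved with a simple proximal-gradient loop. The paper uses the software of Mairal et al. (2014) for this step, so the sketch below is only a minimal stand-in with the same objective (plain ISTA, iterative soft-thresholding); the function names are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(phi, r, lam=0.1, n_iter=500):
    """Solve min_a 0.5*||r - phi @ a||^2 + lam*||a||_1 by ISTA."""
    step = 1.0 / np.linalg.norm(phi, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    a = np.zeros(phi.shape[1])
    for _ in range(n_iter):
        grad = phi.T @ (phi @ a - r)
        a = soft_threshold(a - step * grad, step * lam)
    return a
```

With an orthonormal dictionary the solution is simply the soft-thresholded projection of the patch, which makes the routine easy to sanity-check.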
Once a solution to (16) is obtained, the maximum-alignment-pooling method is used to construct the sparse coding histogram. Combining the coefficients corresponding to the dictionary patches that have the same locations in the template using ā_j = Σ_{i=1}^{L} a_ij (Jia et al., 2016), a new vector ā_r = [ā_1, ..., ā_K]^T ∈ R^{K×1} is formed. The weight of the rth local image patch in the histogram of sparse codes is computed as p_{ru} = ||ā_r||_∞; the value p_{ru} corresponds to the uth image patch of ā_r. With J local image patches from the template target, the histogram is constructed as

p = {p_u}_{u=1,...,J}. (17)

In the spatial space, the Epanechnikov kernel is used to represent the template. The Epanechnikov kernel (Comaniciu et al., 2003) is an isotropic kernel with a convex profile that assigns smaller weights to pixels farther from the center. Given the target histogram p in (17), the isotropic kernel is applied to generate the histogram of the target weighted by the spatial locations. The weights w_{T,u} of the target histogram are computed using

w_{T,u} = γ p_u k(||c_r / h||^2), (18)

where k(·) is the Epanechnikov profile, c_r is the center of the rth image patch of the template target, h is the template size, and γ is the normalization constant. The candidate histogram q is built in the same way as p. An isotropic kernel is applied to the elements of q to generate the histogram of the candidate with spatial locations. The weights w_{C,v} of the candidate histogram are computed using

w_{C,v}(y) = γ q_v k(||(c_v − y) / h||^2), (19)

where c_v is the center of the vth image patch of the template candidate and y is the displacement of the template candidate. The ground distance d_uv for the EMD in (2) is defined by

d_uv = α ||u − v||_2 + (1 − α) ||c_u − c_v||_2, (20)

where α ∈ (0, 1) is the weighting coefficient, u ∈ R^{(mn)×1} and v ∈ R^{(mn)×1} are the vectors of the normalized pixel values of the image patches from the target and candidate templates, sampled in the same way as the image patches of the dictionary, and c_u, c_v are the corresponding centers of the image patches.
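The pooling and kernel-weighting steps can be sketched as follows. This is an illustrative sketch, not the paper's code: the coefficient layout (J patches × L templates × K locations), the function names, and the scalar kernel bandwidth are assumptions made for the example.

```python
import numpy as np

def epanechnikov_weight(center, h):
    # Epanechnikov profile k(x) = 1 - ||x||^2 for ||x|| <= 1, else 0.
    x2 = np.sum((np.asarray(center) / h) ** 2)
    return max(1.0 - x2, 0.0)

def sparse_code_histogram(coeffs, centers, h):
    """Spatially weighted histogram of sparse codes.
    coeffs:  (J, L, K) sparse coefficients, coeffs[r, i, j] = a_ij for patch r
    centers: (J, 2) patch centers relative to the template center
    h:       kernel bandwidth (template size)
    Max-alignment pooling: sum coefficients over the L templates for each of
    the K dictionary locations (a_bar_j), then take the infinity norm."""
    j_patches = coeffs.shape[0]
    p = np.empty(j_patches)
    for r in range(j_patches):
        a_bar = coeffs[r].sum(axis=0)      # a_bar_j = sum_i a_ij, length K
        p[r] = np.max(np.abs(a_bar))       # ||a_bar||_inf
    w = np.array([p[r] * epanechnikov_weight(centers[r], h)
                  for r in range(j_patches)])
    return w / w.sum()                     # gamma: normalize the weights
```

Patches nearer the template center receive larger weights, as the kernel in (18) intends.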

Template Update
In order to make the tracker robust to significant appearance variations over long video sequences, outdated templates in the dictionary should be replaced with recent ones. To adapt to appearance variations of the target while alleviating the drift problem, only the latest template in the dictionary is replaced, based on the weight ω_i computed in (21), where ω_i is the weight associated with the template, γ_0 is a constant, Δi is the time elapsed since the dictionary was last updated, measured in terms of the image index k, and D*_k is the EMD value corresponding to the template I_k.
If the weight of the current template, computed from (21), is smaller than the weight of the latest template in the dictionary, the latest template is replaced with the current one. To prevent errors and noise from corrupting the dictionary, the reconstructed template, rather than the raw one, is inserted into the dictionary. First, the following problem is solved to recompute the sparse code coefficients a_k:

min_{a_k} (1/2) ||i_k − [Φ_T, I_{mn×mn}] a_k||_2^2 + λ ||a_k||_1, (22)

where i_k is the vectorized tracked template, Φ_T ∈ R^{(mn)×K} is a dictionary whose columns are the vectorized template images of size m × n, I_{mn×mn} is the identity matrix, a_k ∈ R^{K+mn} is the vector of sparse coding coefficients, and λ is the Lagrange multiplier (cf. (Jia et al., 2016)). Then the reconstructed template is calculated as Φ_T a*_k, where a*_k ∈ R^K consists of the components of a_k corresponding to the dictionary. The reconstructed template replaces the latest template in the dictionary. The detailed steps of the update scheme are given in Algorithm 2.

Algorithm 2: Template update procedure.
Input: the tracked template I_k and the EMD value D*_k at frame k, the current dictionary, and the weight ω_{i−1} of the latest template in the dictionary.
Output: the updated dictionary Φ_i and weight ω_i.
1 Compute the weight ω_i of the current template using (21);
2 if ω_i < ω_{i−1} then
3 Calculate the reconstructed template via (22);
4 Replace the latest template in the dictionary with the reconstructed one;
5 end
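The comparison step of the update can be sketched as below. Note the caveats: the exact expression (21) for ω_i is not reproduced here, so `template_weight` is a HYPOTHETICAL form chosen only so that the weight grows with the EMD value D*_k and decays with the elapsed time Δi, and the raw template stands in for the reconstruction via (22); all names are illustrative.

```python
import numpy as np

def template_weight(emd, dt, gamma0=0.1):
    # HYPOTHETICAL stand-in for (21): small weight = recent, well-matching
    # template. The paper's exact expression differs.
    return np.exp(-gamma0 * dt) * emd

def maybe_update(dictionary, omega_prev, template, omega_curr):
    """Replace the latest dictionary template when the current template's
    weight is smaller, mirroring the comparison in Algorithm 2."""
    if omega_curr < omega_prev:
        dictionary[-1] = template      # only the latest template is replaced
        return dictionary, omega_curr
    return dictionary, omega_prev
```

A template tracked with a small EMD (a good match) therefore displaces a stale entry, while a poorly matching one leaves the dictionary untouched.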

Gyroscope Data Fusion for Rotation Compensation
The general idea of the gyro-aided iEMD tracking algorithm is to combine the image frames from the camera with the angular rates generated by the gyroscope for visual tracking. Synchronization of the camera and the gyroscope in time is required. The spatial relationship between the camera and the gyroscope must also be pre-calibrated. The angular rate generated by the gyroscope is then used to compensate for the ego-motion of the camera. After this compensation, the iEMD tracker is applied for tracking. In this section, the gyro-aided iEMD tracking algorithm is developed and illustrated.
When a camera is mounted on a moving robot, the motion of the camera can cause a large displacement of the target between two consecutive frames. If the displacement is larger than the convergence region, the tracking algorithm becomes susceptible to the large appearance changes and fails (Comaniciu et al., 2003; Hwangbo et al., 2011; Ravichandar and Dani, 2014). In order to improve the robustness of the tracking algorithm, the displacement caused by the camera rotation is estimated and compensated by fusing the data from the gyroscope, a commonly used sensor on flying robots. The rotation of the camera causes a larger displacement of the target than the translational movement at video frame rates; thus, the translation is neglected here.
The gyroscope provides the angular rates along three axes, which measure the pan, tilt, and roll over small time intervals Δt. In the case of pure rotation without translation, the angular rate ω is obtained along the three axes x, y, and z. Let q(k), q(k+1) ∈ H denote the quaternions of two frames k and k+1 separated by time Δt; the relationship between them is given as (cf. (Spong et al., 2006))

q(k+1) = q(k) + (Δt/2) Ω(ω) q(k), (23)

where Ω(ω) is the skew-symmetric matrix of ω,

Ω(ω) = [ 0, −ω_x, −ω_y, −ω_z ; ω_x, 0, ω_z, −ω_y ; ω_y, −ω_z, 0, ω_x ; ω_z, ω_y, −ω_x, 0 ]. (24)

After the quaternion q(k+1) = m + ai + bj + ck is normalized, the rotation matrix R_k^{k+1} is calculated as

R_k^{k+1} = [ 1−2(b²+c²), 2(ab−mc), 2(ac+mb) ; 2(ab+mc), 1−2(a²+c²), 2(bc−ma) ; 2(ac−mb), 2(bc+ma), 1−2(a²+b²) ]. (25)

Thus, the estimated homography matrix between the two templates is

H_gyro = K R_k^{k+1} K^{−1}, (26)

where K is the intrinsic camera calibration matrix, obtained by calibrating the camera. The homography is applied to the current template center p(k) = [x_c, y_c, 1]^T, where (x_c, y_c) is the center point of the template, to predict its location in the newest frame:

p̃(k+1) = H_gyro p(k), (27)
p(k+1) = p̃(k+1) / p̃_3(k+1), (28)

with H_0 = I_{3×3} for the first frame. The new location p(k+1) is then used as the initial guess for the object candidate, which improves the probability that the tracking algorithm finds the location of the object target in the new video frame.
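The prediction chain (23)-(28) can be sketched numerically as below. This is an illustrative sketch: the quaternion sign convention in Ω(ω) is one common choice and may differ from the cited reference, and the function names and calibration matrix are assumptions for the example.

```python
import numpy as np

def omega_matrix(w):
    # 4x4 skew-symmetric matrix Omega(omega) for a quaternion q = [m, a, b, c].
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def integrate_quaternion(q, w, dt):
    # First-order update (23), followed by normalization.
    q = q + 0.5 * dt * omega_matrix(w) @ q
    return q / np.linalg.norm(q)

def quat_to_rot(q):
    # Rotation matrix (25) from the normalized quaternion m + ai + bj + ck.
    m, a, b, c = q
    return np.array([
        [1 - 2*(b*b + c*c), 2*(a*b - m*c),     2*(a*c + m*b)],
        [2*(a*b + m*c),     1 - 2*(a*a + c*c), 2*(b*c - m*a)],
        [2*(a*c - m*b),     2*(b*c + m*a),     1 - 2*(a*a + b*b)]])

def gyro_homography(k_mat, q, w, dt):
    # Pure-rotation homography (26): H = K R K^{-1}.
    r = quat_to_rot(integrate_quaternion(q, w, dt))
    return k_mat @ r @ np.linalg.inv(k_mat)

def predict_center(h, center_xy):
    # Apply (27)-(28) to the template center in homogeneous coordinates.
    p = h @ np.array([center_xy[0], center_xy[1], 1.0])
    return p[:2] / p[2]
```

With zero angular rate the homography reduces to the identity and the template center is unchanged; a nonzero roll rate shifts an off-center point, which is exactly the initial-guess correction the tracker receives.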
The pseudo-code for gyro-aided iEMD algorithm is given in Algorithm 3.

EXPERIMENTS
In this section, the iEMD algorithm is validated on real datasets. The algorithm is implemented in MATLAB R2015b; the C code of (Rubner et al., 2000) is adopted for the EMD calculation, and the software of (Mairal et al., 2014) is used for sparse coding. The tracker is initialized with the ground-truth bounding box of the target in the first frame. The tracking algorithm then runs until the end of the sequence and generates a series of tracked bounding boxes. Tracking results in consecutive frames are compared with the ground-truth bounding boxes provided by the dataset. The relative overlap measure is used to evaluate the algorithm as (Wu et al., 2013)

overlap = |R_tr ∩ R_gt| / |R_tr ∪ R_gt| × 100%,

where R_tr is the tracking result, represented by the estimated image region occupied by the tracked object, and R_gt is the ground-truth bounding box; R_tr ∩ R_gt is the intersection and R_tr ∪ R_gt the union of the two regions. The relative overlap ranges from 0 to 100%.
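For axis-aligned bounding boxes in the common (x, y, width, height) convention, the relative overlap above reduces to a few lines; the function name and box convention are choices made for this sketch.

```python
def relative_overlap(box_a, box_b):
    """Percentage overlap |A ∩ B| / |A ∪ B| for axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection extents along each axis (clamped at zero if disjoint).
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return 100.0 * inter / union if union > 0 else 0.0
```

Identical boxes score 100%, disjoint boxes 0%, and partially overlapping boxes fall in between.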

Results for the iEMD Tracker with Sparse Coding Histograms
In this subsection, the performance of the iEMD tracker with sparse coding histograms and the template update method is evaluated on the eight sequences. In our approach, the object windows are resized to 32 × 32 pixels for all sequences except the Walking sequence, in which the object windows are resized to 64 × 32 pixels due to the smaller object size. The local patches in each object window are sampled with size 16 × 16 pixels and step size 8 in the Car4, Walking, and Car2 sequences. For the other sequences, the local patches in each object window are sampled with size 8 × 8 pixels and step size 4. To handle abrupt motions of the object, 4 additional particles are generated by moving the template in the area surrounding the initial object position. For each particle, the template is enlarged and shrunk by 2% to account for scale variations.
The performance of the proposed algorithm is compared with seven state-of-the-art tracking algorithms on eight video sequences. These state-of-the-art trackers are: ASLA (Jia et al., 2012), Frag (Adam et al., 2006), IVT (Ross et al., 2008), L1APG (Mei and Ling, 2011), LOT (Oron et al., 2012), MTT (Zhang et al., 2012), and STRUCK (Hare et al., 2011). The source codes of the trackers are downloaded from the corresponding web pages, and the default parameters are used. The average percentage overlap obtained by all the tracking algorithms on the eight video sequences is reported in Table 2. The iEMD tracker achieves the highest average overlap over all the sequences, and ranks among the top trackers on the individual sequences.
Table 2. The average overlap (in percentage) obtained by the tracking algorithms on eight datasets. For each sequence, the first, second, and third ranks are marked in red, green, and blue, respectively. The last row is the average value of the percentage overlap for each tracker over all sequences.
Representative tracking results obtained by the iEMD algorithm are shown in Fig. 2. In the Human8 and Bolt2 sequences, the targets undergo significant illumination variations and deformations, respectively. Only the LOT and iEMD trackers are able to track the target in all frames. Both use the EMD as the similarity measure, and their appearance models are based on local image patches, which makes them more robust to illumination changes and deformations (Oron et al., 2012; Rubner et al., 2000). In the Woman sequence, all trackers except iEMD and STRUCK start to drift away from the target at frame 124. In the Car2 and Car4 sequences, there are significant illumination changes when the targets pass underneath the trees and the overpasses. The LOT and Frag trackers start drifting from frame 72 in the Car2 sequence. In the Car4 sequence, the LOT tracker starts to lose the target from frame 15, and the Frag and L1APG trackers drift away when the car passes the overpass at frame 249. In the Walking2 sequence, the LOT, Frag, and ASLA trackers start tracking the wrong target at frame 246, due to the similar colors of the clothes of the two people.

Results for the Gyro-Aided iEMD Tracking Algorithm
The gyro-aided iEMD tracking algorithm is tested on a 100-frame sequence from the dataset provided by CMU (Hwangbo et al., 2011). The size of the template is changed by ±10%, and the template scale giving the smallest EMD is kept. The images are taken in front of a desk with motions such as shaking and rotation. The frames have a resolution of 640 × 480 at 30 FPS. The gyroscope is carefully aligned with the camera, and the tri-axial gyroscope values are sampled at 11 Hz in the range of ±200 deg/sec (Hwangbo et al., 2011). Using the time stamps of the camera and the gyroscope, the angular rate data are synchronized with the frames captured by the camera.
The comparison between the tracking results of the iEMD tracker with and without gyroscope information is illustrated in Fig. 4. The head of the eagle is chosen as the target, and the ground truth is manually labeled in each frame. The magenta box indicates the estimated image region without the gyroscope data, and the cyan box shows the tracking result of the gyro-aided iEMD tracker. Without the gyroscope data, the tracker loses the target after frame 25. With the gyro-aided iEMD tracking algorithm, however, the head of the eagle is successfully tracked. The performance of the iEMD tracker with and without gyroscope information on the CMU sequence is summarized in Table 3. The average overlap and the percentages of frames for which the overlap is greater than 0 and greater than 40% are reported. The gyroscope information provides a good initial position from which the iEMD tracker estimates the location of the target. Thus, the gyro-aided iEMD tracking algorithm is robust to rapid movements of the camera.
Table 3. Evaluation results on the CMU dataset using the iEMD tracker with and without the gyroscope information.

Discussion
As a cross-bin metric for histogram comparison, the EMD demonstrates its advantages in situations such as illumination variation, object deformation, and partial occlusion. The iEMD algorithm uses the transportation-simplex algorithm to calculate the EMD in the experiments; its practical running-time complexity is supercubic (between Ω(N³) and O(N⁴)) (Rubner et al., 2000), where N is the number of histogram bins. Other algorithms for calculating the EMD can be used to further shorten the running time (Ling and Okada, 2007; Pele and Werman, 2009). The experimental results, especially on the Human8 and Bolt2 sequences, show that the iEMD tracker is robust to appearance variations. The results on Walking2 show that the iEMD tracker can discriminate the target from surroundings with similar colors. The tracking results on the Woman and Subway sequences demonstrate robustness to partial occlusions. Since the local sparse representation is adopted as the appearance model, methods such as trivial templates or learning the dictionary from target and background images could be adopted to improve the performance of the iEMD tracker. As a gradient descent based dynamic model that provides good location prediction, the iEMD tracker could be further improved with more effective particle filters. The metrics used in sparse coding, such as the largest sum of the sparse coefficients or the smallest reconstruction error, could be combined with the EMD to make the tracker more discriminative.

CONCLUSION
This paper presents the iEMD and gyro-aided iEMD visual tracking algorithms. The local sparse representation is used as the appearance model for the iEMD tracker. The maximum-alignment-pooling method is used to construct a sparse coding histogram, which reduces the computational complexity of the EMD optimization. A template update algorithm based on the EMD is also presented. Experiments conducted on eight publicly available datasets show that the iEMD tracker is robust to illumination changes, deformations, and partial occlusions of the target. To validate the gyro-aided iEMD tracking algorithm, experimental results from the CMU dataset, which contains rapid camera motion, are presented. Without the gyroscope measurements, the iEMD tracker fails on the CMU dataset; with them, the iEMD algorithm is able to lock onto the target and track it successfully. These experimental results show that the proposed iEMD tracking algorithm is robust to appearance changes of the target as well as to the ego-motion of the camera.

Figure 1. EMD comparison of the two templates.

(Inner loop of Algorithm 1, the iEMD tracking algorithm:)
Calculate the derivative of the EMD with respect to the displacement y using (14);
Move the template candidate in I_{k+1} along the gradient vector by one pixel;
Compute EMD between the target model and the new candidate model;
if EMD_pre < EMD then break;
else EMD_pre = EMD;
end
end

Each image patch r can be encoded as a linear combination of a few basis vectors of the dictionary Φ:
r = Φ a_r + n.   (15)
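The gradient-descent loop above can be sketched as follows. The callables `build_candidate`, `emd_between`, and `emd_gradient` are placeholders standing in for the paper's candidate-model construction and equation (14); the one-pixel sign step is an interpretation of "move along the gradient vector by one pixel":

```python
import numpy as np

def iemd_step_loop(target_hist, build_candidate, emd_between, emd_gradient,
                   y0, n_iter=50):
    """Hedged sketch of the iEMD descent loop.

    build_candidate(y) -> candidate histogram at displacement y;
    emd_between(h1, h2) -> EMD value;
    emd_gradient(h1, h2, y) -> dEMD/dy (placeholder for (14)).
    """
    y = np.asarray(y0, dtype=float)
    emd_pre = emd_between(target_hist, build_candidate(y))
    for _ in range(n_iter):
        g = emd_gradient(target_hist, build_candidate(y), y)
        y = y - np.sign(g)  # move one pixel against the gradient
        emd_new = emd_between(target_hist, build_candidate(y))
        if emd_pre < emd_new:   # EMD increased: undo the move and stop
            y = y + np.sign(g)
            break
        emd_pre = emd_new
    return y, emd_pre
```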

Algorithm 3: Gyro-aided iEMD tracking algorithm
1 Set the maximum iteration numbers n_iter, n_scal;
2 Capture the image I_0;
3 if t = 0 then
4   Display the first image I_t;
5   Request the user to select the template to be tracked;
6   Construct the target model from the template;
7 end
8 while tracking do
9   Capture the image I_{t+1};
10  Obtain the angular rates from the gyroscope;
11  Integrate the angular rates to obtain the inter-frame rotation R_{t+1,t} using (25);
12  Compute the 2D homography H_gyro using (26);
13  Initialize the location of the template using (28);
14  Track the target by the iEMD tracking algorithm from Algorithm 1;
15 end
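Equations (25)–(28) are not reproduced in this excerpt, so the rotation-compensation steps can only be sketched under standard assumptions: first-order integration of the gyro rates via the Rodrigues formula, and the infinite homography H = K R K^{-1} that a pure camera rotation induces on the image (K is the camera intrinsic matrix, assumed known from calibration):

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Integrate a constant angular rate omega (rad/s) over dt using
    the Rodrigues rotation formula (first-order gyro integration)."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = omega / np.linalg.norm(omega)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])       # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def gyro_homography(K_cam, R):
    """Infinite homography induced on the image by a pure camera rotation R."""
    return K_cam @ R @ np.linalg.inv(K_cam)

def warp_point(H, pt):
    """Apply homography H to a pixel pt = (u, v)."""
    x = H @ np.array([pt[0], pt[1], 1.0])
    return x[:2] / x[2]
```

Warping the previous target location through `gyro_homography` gives the initial template position that Algorithm 3 hands to the iEMD iterations.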

Figure 2. The visual tracking results obtained by the eight tracking algorithms on the eight video sequences.

Figure 3. Success plots ((a)-(j)) for the eight tracking algorithms on the eight sequences.

Figure 4. Results of the iEMD tracker in the presence of rapid camera motion; the magenta boxes indicate the results of the iEMD tracker without the gyroscope information, and the cyan boxes indicate the results of the gyro-aided iEMD tracker.
Algorithm 1: iEMD tracking algorithm
1 Set the maximum iteration number n_iter;
2 Calculate the target model from the image I_0 using (18);
3 Get the new image frame I_{k+1};
4 Construct the candidate model from I_{k+1} using (19);
5 Compute EMD_pre between the target model and the candidate model;
6 for n = 0 to n_iter do
7   ...

The image patches r_1, · · · , r_J are sampled by the same sliding window of size m × n with a step size of one pixel. Each image patch r, which represents one fixed part of the target object, can be encoded as a linear combination of a few basis vectors of the dictionary Φ, as in (15).
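The sparse code a_r in (15) can be obtained with any sparse solver; the paper's exact solver is not reproduced in this excerpt. A minimal greedy orthogonal-matching-pursuit sketch, shown only to make the encoding step concrete:

```python
import numpy as np

def omp(Phi, r, k):
    """Greedy orthogonal matching pursuit: approximate r ≈ Phi @ a with
    at most k nonzero coefficients (one simple way to solve (15))."""
    residual = r.copy()
    support = []
    a = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Pick the dictionary atom most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the current support by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], r, rcond=None)
        a[:] = 0.0
        a[support] = coef
        residual = r - Phi @ a
    return a
```

Pooling the resulting sparse coefficients over the J patches is what the maximum-alignment-pooling step turns into the sparse coding histogram used by the EMD.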

Table 1. The main attributes of the video sequences (Wu et al., 2013). Target size: the initial target size in the first frame; IV: illumination variation; SV: scale variation; OCC: occlusion; DEF: deformation; MB: motion blur; FM: fast motion; BC: background clutters. Columns: Sequence, Frames, Image size, Target size, IV, SV, OCC, DEF, MB, FM, BC.

is used for sparse modeling. The platform is Microsoft Windows 7 Professional with an Intel(R) Core(TM) i5-4590 CPU. Eight publicly available datasets are chosen to validate the iEMD tracking algorithm. The main attributes of the video sequences are summarized in Table 1. The Car2, Walking, Human8, and Walking2 sequences are from the visual tracker benchmark (Wu et al., 2013) (CVPR 2013, http://www.visual-tracking.net). The length of the sequences varies between 128 and 913 frames, with one object being tracked in each frame.