
Tracking Multiple Objects with Particle Filtering

C. HUE, J-P. LE CADRE, Member, IEEE, IRISA/Université de Rennes 1, IRISA/CNRS, France; P. PÉREZ, Microsoft Research

We address the problem of multitarget tracking (MTT) encountered in many situations in signal or image processing. We consider stochastic dynamic systems detected by observation processes. The difficulty lies in the fact that the estimation of the states requires the assignment of the observations to the multiple targets. We propose an extension of the classical particle filter where the stochastic vector of assignment is estimated by a Gibbs sampler. This algorithm is used to estimate the trajectories of multiple targets from their noisy bearings, thus showing its ability to solve the data association problem. Moreover this algorithm is easily extended to multireceiver observations where the receivers can produce measurements of various nature with different frequencies.

Manuscript received December 21, 2000; revised January 24, 2002; released for publication February 25, 2002. IEEE Log No. T-AES/38/3/11413. Refereeing of this contribution was handled by P. K. Willett. Authors' addresses: C. Hue and J-P. Le Cadre, IRISA, Campus de Beaulieu, 35042 Rennes Cedex, France, E-mail: ({chue,lecadre}@irisa.fr); P. Pérez, Microsoft Research, 7 J.J. Thomson Avenue, Cambridge, CB3 0FB, UK, E-mail: (pperez@microsoft.com).

0018-9251/02/$17.00 © 2002 IEEE

I. INTRODUCTION

Multitarget tracking (MTT) deals with the state estimation of an unknown number of moving targets. Available measurements may arise both from the targets, if they are detected, and from clutter. Clutter is generally considered as a model describing false alarms. Its (spatio-temporal) statistical properties are quite different from those of the target, which makes the extraction of target tracks from clutter possible. To perform MTT the observer can rely on a huge amount of data, possibly collected on multiple receivers. Elementary measurements are receiver outputs, e.g., bearings, ranges, time delays, Dopplers, etc. The main difficulty, however, comes from the assignment of a given measurement to a target model. These assignments are generally unknown, as are the true target models. This is a neat departure from classical estimation problems. Thus, two distinct problems have to be solved jointly: the data association and the estimation. The simplest approach is probably the nearest neighbor approach. Because it uses only the observation closest to the predicted state, this algorithm is not robust enough in many situations. As long as the association is considered in a deterministic way, the possible associations must be exhaustively enumerated. This leads to an NP-hard problem because the number of possible associations increases exponentially with time, as in the multiple hypotheses tracker (MHT) [1]. To cope with this problem, pruning and gating eliminate the less likely hypotheses but can unfortunately eliminate good ones as well. In the joint probabilistic data association filter (JPDAF) [2], the association variables are considered as stochastic variables and one needs only to evaluate the validated association probabilities at each time step. However, the dependence assumption on the associations implies the exhaustive enumeration of all possible associations at the current time step.
When the association variables are instead supposed statistically independent, as in the probabilistic MHT (PMHT) [3, 4], the complexity is reduced. For instance in [3, 4], the algorithm is presented as an incomplete data problem solved by an EM algorithm. There is no measurement gating as in the JPDAF and all the associations are considered. The results are satisfactory when the measurement equation is linear and when the trajectories are deterministic. In [5] the algorithm is extended to the tracking of maneuvering targets with a hidden "model-switch" process controlled by a Markov probability structure. A comparison of the PMHT with the JPDAF in a practical two-target scenario is described in [6], focusing on the mean-square estimation errors and the percentage of lost tracks. Unfortunately, the above algorithms do not cope with nonlinear models and non-Gaussian noises.

JULY 2002 791

IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 38, NO. 3

Under such assumptions (stochastic state equation, nonlinear state or measurement equation, non-Gaussian noises), sequential Monte Carlo methods, also called particle filtering methods, are particularly appealing. They mainly consist of propagating, in a possibly nonlinear way, a weighted set of particles which approximates the probability density of the state conditioned on the observations, according to Monte Carlo integration principles. The weights of the particles are updated using Bayes's formula. Particle filtering can be applied under very general hypotheses, is able to cope with heavy clutter, and is very easy to implement. Such filters have been used in very different areas for Bayesian filtering, under different names: the bootstrap filter for target tracking in [7] and the Condensation algorithm in computer vision [8] are two examples among others. In the earliest studies, the algorithm was composed of only two steps: the particles were predicted according to the state equation during the prediction step; then their weights were calculated from the likelihood of the new observation combined with the former weights. A resampling step was soon added to dismiss the particles with lower weights and avoid the degeneracy of the particle set into a unique particle of high weight [7]. Many ways have been developed to accomplish this resampling, whose final goal is to concentrate particles in areas of high likelihood. The frequency of this resampling has also been studied. The use of kernel filters [9] has also been introduced to regularize the sum of Dirac densities associated with the particles when the dynamic noise of the state equation is too low [10]. Despite this long history of studies, in which the ability of the particle filter to track multiple posterior modes is claimed, the extension of the particle filter to multiple target tracking has received attention only progressively, in the last five years.
Such extensions were first claimed theoretically feasible in [11, 12], but the examples chosen only dealt with a single target. In computer vision, a probabilistic exclusion principle has been developed in [13] to track multiple objects, but the algorithm is very dependent on the observation model and is only applied to two objects. In the same context, a Bayesian multiple-blob tracker called BraMBLe [14] has recently been proposed. It deals with a varying number of objects which are depth-ordered thanks to a 3-D state space. Lately, in mobile robotics [15], a set of particle filters for each target, connected by a statistical data association, has been proposed. We propose here a general algorithm for MTT in the passive sonar context.

This work is organized as follows. In Section II, we describe the basic particle filter for a single target with two versions of the resampling step, which can be systematic or adaptive. The superiority of adaptive resampling over systematic resampling is clearly demonstrated in a passive sonar application in


Section III. Section IV, the central part of this work, deals with an extension of the basic filter to multiple objects. The new algorithm combines the two major steps of prediction and weighting of the classical particle filter with a Gibbs sampler computing an estimate of the vector of assignment probabilities. We introduce two different versions of this Gibbs sampler, which take the past information into account in different ways. An extension to multireceiver data in the context of multiple targets ends this section and highlights the versatility of our approach. Finally, Section V is devoted to an application to bearings-only MTT which enables us to compare the results obtained with these two versions. In each case, the data association problem is overcome and the first version is clearly superior.

As far as notational conventions are concerned, we always use the index i to refer to one among the M tracked objects. The index j designates one of the m_t observations obtained at instant t. The index n is reserved for the N particles, denoted by s. The index ? is used for indexing the iterations of the Gibbs sampler and r is used for the different receivers. Finally, probability densities are denoted by p if they are continuous and by P if they are discrete.

II. THE BASIC PARTICLE FILTER

For the sake of completeness, the basic particle filter is now briefly reviewed. The general sampling principle it relies on is used throughout the paper. We consider a dynamic system represented by the stochastic process (X_t) ∈ R^{n_x} whose temporal evolution is given by the state equation:

X_t = F_t(X_{t-1}, V_t).  (1)

We want to estimate the state vector (X_t) at discrete times with the help of observations of the system, which are realizations of the stochastic process (Y_t) ∈ R^{n_y} governed by the measurement equation:

Y_t = H_t(X_t, W_t).  (2)

The two processes (V_t) ∈ R^{n_v} and (W_t) ∈ R^{n_w} are only supposed to be independent white noises. Note that the functions F_t and H_t are not assumed linear. We will denote by Y_{0:t} the sequence of random variables (Y_0, ..., Y_t) and by y_{0:t} one realization of this sequence. Note that throughout the paper, the first subscript of any vector always refers to time. Our problem consists in computing at each time t the conditional density L_t = p(X_t | Y_0 = y_0, ..., Y_t = y_t) of the state X_t given all the observations accumulated up to t, and also in estimating any functional g(X_t) of the state via the expectation E(g(X_t) | Y_{0:t}). The recursive Bayesian filter, also called the optimal filter, solves this problem exactly in two steps at each time t.


Fig. 1. Basic particle filter without resampling.

Suppose we know L_{t-1}. The prediction step is done according to the following equation:

p(X_t = x_t | Y_{0:t-1} = y_{0:t-1}) = ∫_{R^{n_x}} p(X_t = x_t | X_{t-1} = x) L_{t-1}(x) dx.  (3)

Using (1), we can calculate p(X_t = x_t | X_{t-1} = x):

p(X_t = x_t | X_{t-1} = x) = ∫_{R^{n_v}} p(X_t = x_t | X_{t-1} = x, V_t = v) p(V_t = v | X_{t-1} = x) dv.  (4)

The observation y_t enables us to correct this prediction using Bayes's rule:

L_t(x_t) = [ p(Y_t = y_t | X_t = x_t) p(X_t = x_t | Y_{0:t-1} = y_{0:t-1}) ] / [ ∫_{R^{n_x}} p(Y_t = y_t | X_t = x) p(X_t = x | Y_{0:t-1} = y_{0:t-1}) dx ].  (6)
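Equations (3) and (6) can be checked numerically on a discretized one-dimensional example, where the integrals become sums over a grid; the scalar random-walk model and all parameter values below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hedged sketch: equations (3) and (6) evaluated exactly on a 1-D grid.
# The scalar state follows X_t = X_{t-1} + V_t (a special case of (1)) and
# Y_t = X_t + W_t, with Gaussian noises.

grid = np.linspace(-10.0, 10.0, 401)          # discretized state space
dx = grid[1] - grid[0]
sigma_v, sigma_w = 1.0, 0.5                   # dynamic / measurement noise std

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# transition kernel p(x_t | x_{t-1}) tabulated on the grid
K = gauss(grid[:, None], grid[None, :], sigma_v)

def predict(posterior):
    """Prediction step (3): integrate the transition kernel against L_{t-1}."""
    return K @ posterior * dx

def correct(prior, y):
    """Correction step (6): multiply by the likelihood and renormalize."""
    unnorm = gauss(y, grid, sigma_w) * prior
    return unnorm / (unnorm.sum() * dx)

# one filtering cycle starting from a diffuse prior
L = gauss(grid, 0.0, 3.0)
L = correct(predict(L), y=2.0)
estimate = (grid * L).sum() * dx              # posterior mean E(X_t | y_t)
```

The posterior mean lands between the prior mean (0) and the observation (2.0), weighted by the respective variances, as expected for this linear-Gaussian special case.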

Under the specific assumptions of Gaussian noises V_t and W_t and linear functions F_t and H_t, these equations lead to the Kalman filter's equations. Unfortunately this modeling is not appropriate in many problems in signal and image processing, and the integrals in (3) and (6) then have no closed form. The original particle filter, called the bootstrap filter [7], proposes to approximate the densities (L_t)_t by a finite weighted sum of N Dirac densities centered on elements of R^{n_x}, which are called particles. The application of the bootstrap filter requires that one knows how to do the following: sample from the initial prior marginal p(X_0); sample from p(V_t) for all t; compute p(Y_t = y_t | X_t = x_t) for all t through a known function l_t such that l_t(y; x) ∝ p(Y_t = y | X_t = x), where the missing normalization must not depend on x.

The first particle set S_0 is created by drawing N independent realizations from p(X_0) and assigning a uniform weight 1/N to each of them. Then, suppose we have at our disposal at time t-1 the weighted particle set S_{t-1} = (s_{t-1}^n, q_{t-1}^n)_{n=1,...,N} where Σ_{n=1}^N q_{t-1}^n = 1. The a posteriori marginal L_{t-1} is then estimated by the probability density L_{S_{t-1}} = Σ_{n=1}^N q_{t-1}^n δ_{s_{t-1}^n}. The prediction step consists of propagating each particle of S_{t-1} according to the evolution equation (1). The weight of each particle is updated during the correction step: up to a constant, (6) comes down to adjusting the weight of each prediction by multiplying it by the likelihood p(y_t | x_t). In the most general setting of sequential Monte Carlo methods [16], the displacement of particles is obtained by sampling from an appropriate density f which might depend on the data as well. The general algorithm is summarized in Fig. 1.

The density L_{S_t} is often multimodal, as several hypotheses about the position of the object can be made at one time. This is for instance the case when one object is tracked in the presence of significant clutter. Several hypotheses about the object position can then be kept if the set of particles splits into several subsets. This is where the great strength of this filter lies. In [17] for instance, the measurement vector Y_t consists of a set of detected features along line measurements. The assumed underlying generative model assigns each feature either to the target boundary, to its interior, or to the background. The likelihood function is built from this generative model and thus takes into account the clutter model. An extension of the algorithm in Fig. 1, called the hybrid bootstrap filter, has also been proposed to deal with significant clutter and spurious objects for target tracking [12] and guidance [18]. The weighted sum of Dirac laws is then approximated by a Gaussian mixture obtained by a clustering method [19]. The particle sets enable one to estimate any functional of X_t, in particular the first two moments with g(x) = x and g(x) = x², respectively. The mean can be used to estimate the position of one object, but it can be a bad estimator if the posterior is highly multimodal.
In such cases the ideal would be to calculate the mean only over the particles that contribute to the principal mode, but such an estimator has not been developed yet. In practice, the number of particles is finite and the major drawback of this algorithm is the degeneracy of the particle set: only a few particles keep high weights and the others have very small ones. The former carry the information, whereas the latter are mostly useless. Resampling is a good way to remedy this drawback because it eliminates the particles of smallest weight. Stochastic resampling consists of sampling N particles with replacement from the particle set, the probability of drawing s^n being q^n. The new particles have uniform weights equal to 1/N. A first solution, adopted in [7] for example, consists of applying the resampling step at each time period. The corresponding algorithm, the particle filter with systematic resampling, is described in Fig. 2.

Fig. 2. Basic particle filter with systematic resampling.

To measure the degeneracy of the algorithm, the effective sample size N_eff has been introduced in [20, 21]. We can estimate this quantity by N̂_eff = 1 / Σ_{n=1}^N (q_t^n)², which measures the number of meaningful particles. As advocated in [16], the resampling can then be done only if N̂_eff < N_threshold. This enables the particle set to better learn the process and to keep its memory during the intervals where no resampling occurs. The algorithm of the basic particle filter with adaptive resampling is described in Fig. 3.

Fig. 3. Basic particle filter with adaptive resampling.

Details can be found in [16, 20, 21]. Some convergence results of the empirical distributions to the posterior distribution on the path space have been proved when the number N of particles tends towards infinity [22, 23]. In the path space (R^{n_x})^{t+1}, each particle s_t^n at time t can be considered as a discrete path of length t+1. Compared with the particle filter presented in Fig. 1, particle filtering in the space of paths consists of augmenting the particle state space at each time step and representing each particle by the concatenation of the new position at time t and the set of previous positions between times 0 and t-1. In [22], the fluctuations on path space of the so-called interacting particle systems are studied. In the context of sequential Monte Carlo methods [23], which cover most of the particle filtering methods proposed in recent years, the convergence and a convergence rate of order 1/N for the average mean square error are proved. Under more restrictive hypotheses, almost sure convergence is proved as well [23].
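As a concrete illustration, here is a minimal sketch of the bootstrap filter of Fig. 1 combined with the adaptive resampling rule of Fig. 3, on a scalar random-walk model; the model, noise levels, and threshold are illustrative assumptions, not the paper's scenario.

```python
import numpy as np

# Hedged sketch of the basic particle filter with adaptive resampling for a
# scalar state X_t = X_{t-1} + V_t observed through Y_t = X_t + W_t.

rng = np.random.default_rng(0)
N = 2000                                  # number of particles
sigma_v, sigma_w = 0.3, 0.5               # state / measurement noise std
N_threshold = 0.5 * N                     # adaptive resampling threshold

# simulate a trajectory and its observations
T = 50
x_true = np.cumsum(rng.normal(0.0, sigma_v, T))
y_obs = x_true + rng.normal(0.0, sigma_w, T)

particles = rng.normal(0.0, 1.0, N)       # draws from the prior p(X_0)
weights = np.full(N, 1.0 / N)
estimates = []
for y in y_obs:
    # prediction: propagate each particle through the state equation (1)
    particles = particles + rng.normal(0.0, sigma_v, N)
    # correction: reweight by the likelihood p(y | x) and normalize
    weights *= np.exp(-0.5 * ((y - particles) / sigma_w) ** 2)
    weights /= weights.sum()
    # adaptive rule: resample only when the effective sample size drops
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < N_threshold:
        idx = rng.choice(N, size=N, p=weights)   # multinomial resampling
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)
    estimates.append(np.sum(weights * particles))

rmse = np.sqrt(np.mean((np.array(estimates) - x_true) ** 2))
```

With systematic resampling one would simply drop the `n_eff` test and resample at every time period, at the cost of a faster impoverishment of the particle set.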


Fig. 4. Single-target bearings-only experiment of Section III. (a) Target and observer trajectories. (b) Bearings simulated at each time period.

III. APPLICATION TO BEARINGS-ONLY PROBLEMS

To illustrate the previous algorithms, we first deal with classical bearings-only problems. The object is then a "point-object" in the x-y plane. In the context of a slowly maneuvering target, we have chosen a nearly constant-velocity model (see [24] for a review of the principal dynamical models used in this domain).

A. The Model

The state vector X_t represents the coordinates and the velocities in the x-y plane: X_t = (x_t, y_t, vx_t, vy_t). The discretized state equation associated with time period Δt is

X_{t+Δt} = [ I_{2×2}  Δt·I_{2×2} ; 0  I_{2×2} ] X_t + [ (Δt²/2)·I_{2×2} ; Δt·I_{2×2} ] V_t  (7)

where I_{2×2} is the identity matrix in dimension 2 and V_t is a Gaussian zero-mean vector of covariance matrix Σ_V = diag(σ_x², σ_y²).

Let X̂_t be the estimate of X_t computed by the particle filters with g(x) = x, i.e., X̂_t = Σ_{n=1}^N q_t^n s_t^n. The posterior covariance matrix of X̂_t is

P_t = Σ_{n=1}^N q_t^n (s_t^n - X̂_t)(s_t^n - X̂_t)^T  (8)

where T denotes transposition. The observations are available at discrete times according to

Y_t = arctan[ (y_t - y_t^obs) / (x_t - x_t^obs) ] + W_t  (9)

where W_t is a zero-mean Gaussian noise of variance σ_w², independent of V_t, and x^obs and y^obs are the Cartesian coordinates of the observer, which are known.

B. Results of Particle Filters with Systematic and Adaptive Resampling, Respectively

The initial positions of the target and of the observer are

X_0 = (200 m, 1500 m, 1.0 ms⁻¹, -0.5 ms⁻¹)^T,  X_0^obs = (200 m, -3000 m, 1.2 ms⁻¹, +0.5 ms⁻¹)^T.

The dynamic noise is a zero-mean Gaussian vector with σ_x = σ_y = 0.001 ms⁻². The observer follows a leg-by-leg trajectory. Its velocity vector is constant on each leg and modified at the following instants:

(vx_200^obs, vy_200^obs) = (-0.6, 0.3),  (vx_400^obs, vy_400^obs) = (2.0, 0.3),
(vx_600^obs, vy_600^obs) = (-0.6, 0.3),  (vx_800^obs, vy_800^obs) = (2.0, 0.3),
(vx_900^obs, vy_900^obs) = (-0.6, 0.3).

The bearings measurements are simulated with a Gaussian noise of standard deviation σ_w = 0.05 rad (about 3 deg) at every time period, i.e., every 6 s. The measurement set used and the trajectories of the observer and of the target are presented in Fig. 4.

First, we have studied the impact of adaptive or systematic resampling on the estimated posterior covariance defined by (8). We have used bootstrap filters, i.e., the importance function f is the prior law p(X_t | X_{t-1}). In both cases the filters have been initialized according to a Gaussian law whose mean vector is the true vector X_0 and whose covariance matrix is

X_cov = diag(20.0², 100², 0.05², 0.05²).
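A hedged sketch of simulating the dynamic model (7) and the bearing measurements (9) follows; for simplicity the observer is held fixed at the origin here, which is an assumption departing from the maneuvering-observer scenario above.

```python
import numpy as np

# Hedged sketch of the nearly constant-velocity model (7) and the
# bearings-only measurement (9). Observer fixed at the origin (assumption).

rng = np.random.default_rng(1)
dt = 6.0                                          # time period (s)
sigma_x = sigma_y = 0.001                         # dynamic noise std
sigma_w = 0.05                                    # bearing noise std (rad)

F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])     # transition matrix of (7)
G = np.vstack([dt ** 2 / 2 * np.eye(2), dt * np.eye(2)])  # noise gain of (7)

X = np.array([200.0, 1500.0, 1.0, -0.5])          # initial target state X_0
obs_pos = np.zeros(2)                             # observer position (fixed)

bearings = []
for _ in range(100):
    V = rng.normal(0.0, [sigma_x, sigma_y])       # zero-mean dynamic noise
    X = F @ X + G @ V                             # state equation (7)
    # measurement equation (9): noisy bearing of the target from the observer
    beta = np.arctan2(X[1] - obs_pos[1], X[0] - obs_pos[0])
    bearings.append(beta + rng.normal(0.0, sigma_w))
```

Feeding such a bearing sequence to the bootstrap filter of Section II reproduces the single-target setting of this experiment, up to the observer maneuvers.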


TABLE I
Computational Cost for One Iteration on 863 MHz Pentium III

Particle Number | Systematic Resampling | Resampling Procedure
1000            | 7.9 ms                | 4 ms
5000            | 41.9 ms               | 28 ms
10000           | 88.5 ms               | 64 ms

Fig. 5. Values of effective sample size N̂_eff obtained with 1000 particles and adaptive resampling.

For the filter with adaptive resampling, the threshold N_threshold is fixed to 0.9N. The average frequency of resampling obtained for such a threshold is 0.132. The values of N̂_eff are presented in Fig. 5 for one particular run. The results are plotted in Fig. 6. The observer trajectory, the target trajectory, and the estimates obtained with adaptive and systematic resampling are plotted with the 2σ confidence ellipses on position and the line of sight of the observer every hundred time periods, for 1000, 5000, and 10000 particles. The ellipses represent the regions containing 95% of the particles, assuming they are Gaussian distributed. At each time, it is interesting to check that the ellipses are oriented according to the line of sight. Moreover, with only 1000 particles, the advantage of using adaptive resampling is emphasized: resampling impoverishes the particle set and, when applied systematically, the particle set can become so reduced that the ellipse no longer contains the true trajectory. This is the case from instant 400 in Fig. 6(e), whereas with adaptive resampling in Fig. 6(f) the confidence ellipses still contain the true trajectory.

The estimates obtained with basic particle filters using adaptive or systematic resampling are optimal for an infinite particle number. In practice, the particle number is of course finite, and the estimate is determined by the realizations of the random processes used in the proposal. To further assess the accuracy of the two filters, we have performed 100 different runs of the particle filters with systematic resampling on the one hand and adaptive resampling on the other hand, for a given realization of the measurement process.1 From these different runs, we have computed the averaged estimate and the 2σ confidence ellipses on position containing 95% of the estimates. They are represented in Fig. 7

1. It is also interesting to compute the particle filter estimate for different realizations of the measurement process, as is often done to evaluate the performance of deterministic algorithms such as the Kalman filter or the extended Kalman filter. This is done in Section IIIE.

for N = 1000, 5000, 10000 particles and adaptive or systematic resampling. The averaged estimates are similar, but the confidence ellipses are globally twice as small with adaptive as with systematic resampling, whatever the particle number. This indicates that adaptive resampling significantly reduces the standard deviation of the particle filter estimates. As for the computational cost, Table I contains the cost of one time iteration with systematic resampling and the time spent in the resampling procedure according to the particle number, on an 863 MHz Pentium III.

C. Overestimated State Noise

It is to be noted that in the case of real data, the variance of the state noise is not always known precisely. Some simulations have therefore been performed with an overestimated state noise. Intuitively, the particles are thus predicted in a larger area but are also more prone to be eliminated in the resampling step. In practice this is a great strength of particle filtering. Fig. 8 shows the averaged estimates and the 2σ confidence ellipses obtained with adaptive resampling, with 1000 and 5000 particles. The estimation is less smooth than with a correct state noise. This jittering increases the bias and the standard deviation, and more particles are needed for results comparable with those obtained with a correct state noise: with 5000 particles the standard deviation between estimates is drastically lower than with 1000 particles.

D. Shifted Initialization

We have also performed particle filter runs with a shifted initialization: the initialization is done according to a Gaussian law whose mean vector X_mean and covariance matrix X_cov are

X_mean = X_0 + (50.0 m, 500 m, 0.0 ms⁻¹, -0.0 ms⁻¹)^T  (10)

X_cov = diag(40.0², 400.0², 0.05², 0.05²).
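The shifted initialization (10) amounts to drawing the initial particle cloud from a Gaussian law; the sketch below uses the values given above and is only illustrative.

```python
import numpy as np

# Hedged sketch: initial particle cloud for the shifted initialization (10),
# i.e. draws from N(X_mean, X_cov) with the values stated in the text.

rng = np.random.default_rng(2)
X0 = np.array([200.0, 1500.0, 1.0, -0.5])         # true initial state
shift = np.array([50.0, 500.0, 0.0, -0.0])        # offset of (10)
X_mean = X0 + shift
X_cov = np.diag([40.0 ** 2, 400.0 ** 2, 0.05 ** 2, 0.05 ** 2])

N = 5000
particles = rng.multivariate_normal(X_mean, X_cov, size=N)
weights = np.full(N, 1.0 / N)                     # uniform initial weights
empirical_mean = particles.mean(axis=0)
```

The empirical mean of the cloud is close to X_mean, i.e. deliberately biased away from the true state, which is what the filter then has to recover from.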


Fig. 6. Estimates for one particular run (dashed lines) and 2σ confidence ellipses obtained with bootstrap filter. Solid lines stand for target and observer trajectories. Dotted lines indicate line of sight of observer every hundred time periods. Top: 10000 particles; middle: 5000 particles; bottom: 1000 particles. Left column: systematic resampling; right column: adaptive resampling.

After around 200 time periods, the particles have recovered from their shifted initialization and provide satisfactory estimates as shown in

Fig. 9. The size of the ellipses is considerably reduced with 5000 particles compared with only 1000 particles.


Fig. 7. Averaged estimates (dashed lines) and 2σ confidence ellipses obtained with bootstrap filter. Top: 10000 particles; middle: 5000 particles; bottom: 1000 particles. Left column: systematic resampling; right column: adaptive resampling.

E. Comparison with a Modified Polar Coordinate Extended Kalman Filter We have finally compared particle filtering methods with a linearized filter, the modified


polar coordinate extended Kalman filter (MP-EKF). Let us denote by X_t^C, X_t^{C,Rel}, and X_t^{MP} the state vector in the fixed Cartesian system, in the Cartesian system relative to the observer, and in the modified polar system, respectively


Fig. 8. Averaged estimates (dashed lines) and 2σ confidence ellipses obtained with bootstrap filter with an overestimated state noise. (a) 1000 particles. (b) 5000 particles.

Fig. 9. Averaged estimates (dashed lines) and 2σ confidence ellipses obtained with bootstrap filter with shifted initialization. (a) 1000 particles. (b) 5000 particles.

X_t^C = (x_t, y_t, ẋ_t, ẏ_t),

X_t^{C,Rel} = (x_t - x_t^obs, y_t - y_t^obs, ẋ_t - ẋ_t^obs, ẏ_t - ẏ_t^obs),

X_t^{MP} = ( θ, 1/R, θ̇, Ṙ/R )^T
        = ( arctan(x_t^{C,Rel}/y_t^{C,Rel}),
            1/√[(x_t^{C,Rel})² + (y_t^{C,Rel})²],
            (ẋ_t^{C,Rel} y_t^{C,Rel} - x_t^{C,Rel} ẏ_t^{C,Rel}) / [(x_t^{C,Rel})² + (y_t^{C,Rel})²],
            (x_t^{C,Rel} ẋ_t^{C,Rel} + y_t^{C,Rel} ẏ_t^{C,Rel}) / [(x_t^{C,Rel})² + (y_t^{C,Rel})²] )^T.

Let f_C^{MP} and f_{MP}^C be the functions such that

X_t^{MP} = f_C^{MP}(X_t^{C,Rel})  and  X_t^{C,Rel} = f_{MP}^C(X_t^{MP}).  (11)
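Under the definitions above, the coordinate changes f_C^{MP} and f_{MP}^C can be sketched as follows; the round-trip check illustrates that they are mutual inverses, and the numeric state is an arbitrary example.

```python
import numpy as np

# Hedged sketch of the coordinate changes f_C^MP and f_MP^C of (11), with
# theta measured from the y-axis as theta = arctan(x/y).

def f_c_mp(x_rel):
    """Relative Cartesian (x, y, xdot, ydot) -> modified polar state."""
    x, y, xd, yd = x_rel
    r2 = x * x + y * y
    return np.array([np.arctan2(x, y),          # theta
                     1.0 / np.sqrt(r2),         # 1/R
                     (xd * y - x * yd) / r2,    # theta_dot
                     (x * xd + y * yd) / r2])   # R_dot / R

def f_mp_c(x_mp):
    """Modified polar state -> relative Cartesian (x, y, xdot, ydot)."""
    theta, inv_r, theta_d, rdot_over_r = x_mp
    r = 1.0 / inv_r
    rdot = r * rdot_over_r
    s, c = np.sin(theta), np.cos(theta)
    return np.array([r * s, r * c,
                     rdot * s + r * theta_d * c,
                     rdot * c - r * theta_d * s])

x_rel = np.array([1000.0, 2000.0, -3.0, 1.5])   # an arbitrary relative state
roundtrip = f_mp_c(f_c_mp(x_rel))
```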

It must be noted that these two functions are nonlinear. The dynamic model in the Cartesian system is defined by

X_{t+Δt}^C = F X_t^C + V_t  (12)

where

F = [ I_{2×2}  Δt·I_{2×2} ; 0  I_{2×2} ]

and V_t is a Gaussian noise of constant covariance matrix Σ_V. In the MP system it reads:

X_{t+Δt}^{MP} = f_C^{MP}( F f_{MP}^C(X_t^{MP}) - U_{t+Δt}^obs + V_t ).  (13)


Fig. 10. Mean estimates (dotted lines) obtained over 10 realizations of measurement process and 2σ confidence ellipses with shifted initialization. (a) Estimates obtained with MP-EKF. (b) Estimates obtained with particle filters with adaptive resampling and 5000 particles.

where U_{t+Δt}^obs = (x_{t+Δt}^obs - x_t^obs - Δt ẋ_t^obs, y_{t+Δt}^obs - y_t^obs - Δt ẏ_t^obs, ẋ_{t+Δt}^obs - ẋ_t^obs, ẏ_{t+Δt}^obs - ẏ_t^obs)^T. If the observer follows a leg-by-leg trajectory, the first two components of U_t^obs are always null. The last two components are null except if t is a maneuver time. In the Cartesian system, the measurement equation is defined by

Y_t = H X_t^C + W_t  (14)

where W_t is a Gaussian noise of variance σ_w². In the MP system, it reads

Y_t = Q X_t^{MP} + W_t  (15)

where Q = (1 0 0 0). Let us now describe the propagation and updating steps of the MP-EKF used. The predicted state is given by

X̂_{t+Δt}^{MP} = f_C^{MP}( F f_{MP}^C(X_t^{MP}) - U_{t+Δt}^obs ).  (16)

The predicted covariance matrix in the Cartesian system is

P̂_{t+Δt}^C = F P_t^C F^T + Σ_V  (17)

and in the MP system it is

P̂_{t+Δt}^{MP} = J_C^{MP} P̂_{t+Δt}^C (J_C^{MP})^T  (18)

where J_C^{MP} is the Jacobian matrix of f_C^{MP} taken around the predicted state. As the measurement equation is linear in the MP system, the updating step is the same as in the classical Kalman filter:

K = P̂_{t+Δt}^{MP} Q^T ( Q P̂_{t+Δt}^{MP} Q^T + σ_w² )^{-1}  (19)

X_{t+Δt}^{MP} = X̂_{t+Δt}^{MP} + K ( Y_{t+Δt} - Q X̂_{t+Δt}^{MP} )  (20)

P_{t+Δt}^{MP} = (I - KQ) P̂_{t+Δt}^{MP} (I - KQ)^T + K σ_w² K^T = P̂_{t+Δt}^{MP} - KQ P̂_{t+Δt}^{MP}  (21)

P_{t+Δt}^C = J_{MP}^C P_{t+Δt}^{MP} (J_{MP}^C)^T  (22)

where J_{MP}^C is the Jacobian matrix of f_{MP}^C taken around the estimated state. More details on the MP-EKF can be found in [25-27].

As the MP-EKF is a deterministic algorithm, we have computed the estimates obtained for several realizations of the measurement process. For a given realization, the MP-EKF has to be compared with the mean of the particle filter estimates over several runs. We have compared the performance in the case of a shifted initialization like (10) of the previous section, over 10 realizations of the measurement process. For each of them, 20 runs of the particle filter with adaptive resampling and 5000 particles were computed. The final bias and standard deviation are similar, but the particle filter recovers faster from the shifted initialization than the MP-EKF (see Fig. 10).

IV. MULTITARGET PARTICLE FILTERS

The purpose of the two preceding sections was to provide a general presentation of particle filtering. We can now turn to its extension to MTT.

A. MTT Problem and Its Classical Treatment

Let M be the number of targets to track. This number is assumed to be known and fixed for the moment (the case of a varying unknown number will be addressed in another work). The index i designates one among the M targets and is always used as first superscript. Multitarget tracking consists in estimating the state vector made by concatenating the state vectors of all targets. It is generally assumed that the targets are moving according to independent Markovian dynamics. At time t, X_t = (X_t^1, ..., X_t^M) follows the state equation (1), decomposed into M partial equations:

X_t^i = F_t^i(X_{t-1}^i, V_t^i),  ∀ i = 1, ..., M.  (23)
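A minimal sketch of the decomposed dynamics (23) follows, each target being propagated independently; the per-target model reuses the nearly constant-velocity form (7) and all numeric values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of (23): M targets, each propagated independently under its
# own nearly constant-velocity model with independent white noise.

rng = np.random.default_rng(3)
M, dt = 3, 6.0
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
G = np.vstack([dt ** 2 / 2 * np.eye(2), dt * np.eye(2)])

# X stacks the M per-target state vectors (x, y, vx, vy): shape (M, 4)
X = np.array([[0.0, 0.0, 1.0, 0.5],
              [500.0, -200.0, -0.4, 1.0],
              [-300.0, 800.0, 0.2, -0.6]])

def step(X, sigma=0.001):
    """One application of (23): independent dynamic noise per target."""
    V = rng.normal(0.0, sigma, size=(M, 2))
    return X @ F.T + V @ G.T

X = step(X)
```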


The noises (V_t^i) and (V_t^{i′}) are supposed only to be white, both temporally and spatially, and independent for i ≠ i′. The observation vector collected at time t is denoted by y_t = (y_t^1, ..., y_t^{m_t}). The index j is used as first superscript to refer to one of the m_t measurements. The vector y_t is composed of detection measurements and clutter measurements. The false alarms are assumed to be uniformly distributed in the observation area. Their number is assumed to arise from a Poisson density of parameter λV, where V is the volume of the observation area and λ the number of false alarms per unit volume. As we do not know the origin of each measurement, one has to introduce the vector K_t to describe the associations between the measurements and the targets. Each component K_t^j is a random variable that takes its values in {0, ..., M}. Thus, K_t^j = i indicates that y_t^j is associated with the ith target. In this case, y_t^j is a realization of the stochastic process:

Y_t^j = H_t^i(X_t^i, W_t^j)  if K_t^j = i.  (24)
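To make the observation model concrete, here is a hedged sketch of generating one scan of bearing measurements with missed detections and Poisson-distributed uniform false alarms; the detection probability and clutter parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch of the observation model around (24): each target may be
# detected (illustrative probability p_detect), false alarms are uniform
# over the observation area with a Poisson count of mean lambda * V.

rng = np.random.default_rng(4)
bearings_true = np.array([0.3, 1.1, 2.0])   # true bearings of M = 3 targets
sigma_w = 0.05                              # measurement noise std (rad)
p_detect = 0.9                              # per-target detection probability
lam, volume = 0.5, np.pi                    # clutter density and area volume

measurements, assoc = [], []                # assoc[j] plays the role of K_t^j
for i, beta in enumerate(bearings_true, start=1):
    if rng.random() < p_detect:             # target i produces a measurement
        measurements.append(beta + rng.normal(0.0, sigma_w))
        assoc.append(i)
n_false = rng.poisson(lam * volume)         # Poisson number of false alarms
for _ in range(n_false):
    measurements.append(rng.uniform(0.0, volume))  # uniform over the area
    assoc.append(0)                         # model 0 is reserved for clutter
m_t = len(measurements)
```

In the tracking problem the association vector `assoc` is of course unobserved; estimating it jointly with the target states is precisely the role of the Gibbs sampler introduced later in this section.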

exponentially with time. Some pruning solutions must be found to eliminate some of the associations. The JPDAF begins with a gating of the measurements. Only the measurements which are inside an ellipse around the predicted state are kept. The gating assumes that the measurements are distributed according to a Gaussian law centred on the predicted state. Then, the probabilities of each association Ktj = i are estimated. As the variables Ktj are assumed dependent by (A2), this computation requires the exhaustive enumeration of all the possible associations Ktl for l 6= j. The novelty in the PMHT algorithm [3—5] consists of replacing the assumption (A2) by (A3). A3. One target can produce zero or several measurements at one time. This assumption is often criticized because it does not match the physical reality. However, from a mathematical point of view it ensures the stochastic independence of the variables Ktj and it drastically reduces the complexity of the ?t vector estimation. The assumptions (A1) and (A3) are kept in the joint filters presented later. Let us present now the existing works solving MTT with particle filtering methods. B. Related Work: MTT with Particle Filters In the context of MTT, particles filters are appealing: as the association needs only to be considered at a given time iteration, the complexity of data association is reduced. First, two extensions of the bootstrap filter have been considered. In [11], a bootstrap-type algorithm is proposed in which the sample state space is a “(multitarget) state space.” However, nothing is said about the association problem that needs to be solved to evaluate the sample weights. It is in fact the ability of the particle filtering to deal with multimodality due to (high) clutter that is pointed out compared with deterministic algorithms like the nearest neighbor filter or the PDA filter. 
No examples with multiple targets are presented: the simulations only deal with a single target in clutter with a linear observation model. In [12], a hybrid bootstrap filter is presented where the particles evolve in a single-object state space. Each particle gives a hypothesis on the state of one object. Thus, the a posteriori law of the targets given the measurements is represented by a Gaussian mixture, each mode of which corresponds to one of the objects. However, as pointed out in [12], the likelihood evaluation is possible only if the "prior probabilities of all possible associations between" the measurements and the targets are available. Even if the likelihood could be evaluated, representing the a posteriori law by a mixture can lead to the loss


Again, the noises (W_t^j) and (W_t^{j'}) are supposed only to be white noises, independent for j ≠ j'. We assume that the functions H_t^i are such that they can be associated with functional forms l_t^i such that l_t^i(y; x) ∝ p(Y_t^j = y | K_t^j = i, X_t^i = x). We dedicate the model 0 to false alarms: if K_t^j = 0, the jth measurement is associated with the clutter, but we do not associate any kinematic model with false alarms. As the indexing of the measurements is arbitrary, all the measurements have the same a priori probability of being associated with a given model i. At time t, these association probabilities define the vector π_t = (π_t^0, π_t^1, ..., π_t^M) ∈ [0, 1]^{M+1}. Thus, for i = 1, ..., M, π_t^i = P(K_t^j = i) for all j = 1, ..., m_t is the discrete probability that any measurement is associated with the ith target. To solve the data association some assumptions are commonly made [28].

A1. One measurement can originate from one target or from the clutter.
A2. One target can produce zero or one measurement at one time.

The assumption (A1) expresses that the association is exclusive and exhaustive. Consequently, Σ_{i=0}^M π_t^i = 1. The assumption (A2) implies that m_t may differ from M and, above all, that the association variables K_t^j for j = 1, ..., m_t are dependent. Under these assumptions, the MHT algorithm [1] builds the association hypotheses recursively. One advantage of this algorithm is that the appearance of a new target is hypothesized at each time step. However, the complexity of the algorithm increases

HUE ET AL.: TRACKING MULTIPLE OBJECTS WITH PARTICLE FILTERING

of one of the targets during occlusions. The particles tracking an occluded target get very small weights and are therefore discarded during the resampling step. This fact has been pointed out in [15]. In image analysis, the condensation algorithm has been extended to the case of multiple objects as well: in [13], the case of two objects is considered. The hidden state is the concatenation of the two single-object states and of a binary variable indicating which object is closer to the camera. This latter variable solves the association during occlusion because the measurements are assigned to the foreground object. Moreover, a probabilistic exclusion principle is integrated into the measurement likelihood to penalize the hypotheses where the two objects overlap. In [14], the state is composed of an integer equal to the number of objects and of a concatenation of the individual states. A 3-D representation of the objects gives access to their depth ordering, thus solving the association issue during occlusions. Three other works combine particle filtering with the JPDA. In mobile robotics [15], a particle filter is used for each tracked object. The likelihood of the measurements is written as in a JPDAF. Thus, the assignment probabilities are evaluated according to the probabilities of each possible association. Given these assignment probabilities, the particle weights can be evaluated. The particle filters are then dependent through the evaluation of the assignment probabilities. The algorithms presented in [29, 30] are both applied to target tracking. The state space of each particle is the concatenation of the state spaces of the targets, and the likelihood of the measurements given a particle is derived according to the JPDAF. Independently of the latter works [14, 15, 29, 30], we have developed the joint filters, where the data association is approached in the same probabilistic spirit as the basic PMHT [3, 4].
First, to estimate the density L_t = p(X_t = (X_t^1, ..., X_t^M) | Y_{0:t} = y_{0:t}) with particle filtering methods we must choose the state space for the particles. We could use particles of the dimension of the X_t^i and distribute them among the objects (providing that the objects have the same dimension, which might not be the case for complex objects in image processing). Each of them would then be known to represent a given object. The weight of particle n is then:

q^n ∝ p(Y_t = (y_t^1, ..., y_t^{m_t}) | X_t^i = s_t^n).   (25)

produces at least one measurement (to be able to use the total probability theorem), which seems too restrictive. Moreover, as mentioned before, a unique particle filter with a single-target state space seems inappropriate, as the particles tracking an occluded object would quickly be discarded. We have considered using one particle filter per object, but without finding a consistent way to make them dependent. The stochastic association vector K_t introduced in Section IVA could also be considered as an additional particle component. However, as the ordering of the measurements is arbitrary, it would not be possible to devise a dynamic prior on it. Moreover, the state space would increase further, making the particle filter less effective. Finally, we have chosen to use particles whose dimension is the sum of those of the individual state spaces corresponding to each target, as in [13, 14]. Each of these concatenated vectors then gives jointly a representation of all targets. We call the filters associated with this representation the joint filters.

C. MTPF Algorithms

Let us first study the joint density p(X_t, Y_t, π_t, K_t) (implicitly conditioned on X_{t-1}), where Π_t is the stochastic variable associated with the probability vector π_t. We drop the subscript t for a few lines:

p(X, Y, π, K) = p(Y | K, π, X) p(K | π, X) p(π, X)   (26)
= p(Y | K, X) p(K | π) p(π) p(X),   (27)

where the last equality is based on the following independence assumptions:
1) Y and π are independent given K and X;
2) the association vector K is independent from the state vector X given π.
The factors in (27) write:
3)

p(Y | K, X) = Π_j p(Y^j | K^j, X), with p(Y^j | K^j, X) ∝ l^i(y^j | x^i) if i = K^j ∈ {1, ..., M}, and p(Y^j | K^j, X) = 1/V if K^j = 0.

As the observations cannot be supposed independent conditioned on only one of the M objects, the likelihood cannot be decomposed into the product of the likelihoods of each observation. One solution could be to use Bayes's rule and to compute p(X_t^i = s_t^n | Y_t = (y_t^1, ..., y_t^{m_t})), but this requires that every object


4) To simulate according to p(K | π) it is sufficient to generate N(K), where N^i(K) = #{j : K^j = i}. The vector (N^0(K), ..., N^M(K)) follows a multinomial law of size m_t and of parameters (π^0, ..., π^M).
5) p(π) = p(π^0) p(π^1, ..., π^M | π^0), where p(π^1, ..., π^M | π^0) is uniform on the hyperplane Σ_{i=1}^M π^i = 1 - π^0 of [0, 1 - π^0]^M. Moreover, π^0 is a constant that can be computed:

JULY 2002

IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 38, NO. 3

Fig. 11. Independence graph of the stochastic variables X, Y, K, π.

π^0 = P(K_t^j = 0)   (28)
= Σ_{l=0}^{m_t} P(K_t^j = 0 | N_t^0 = l) P(N_t^0 = l)   (29)
= Σ_{l=0}^{m_t} (l/m_t) · (λV)^l exp(-λV)/l!.   (30)
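As a quick numerical check of (28)-(30), the truncated Poisson sum is straightforward to evaluate. The function name below is ours, not the paper's:

```python
import math

def pi0_prior(m_t, lam, V):
    """pi_t^0 from eq. (30): sum over l of (l / m_t) * Poisson(l; lam * V),
    truncated at l = m_t since at most m_t measurements can be clutter."""
    mu = lam * V
    return sum((l / m_t) * mu ** l * math.exp(-mu) / math.factorial(l)
               for l in range(m_t + 1))
```

When m_t is large relative to λV, the truncation bites very little and the sum approaches the intuitive value E[N_t^0]/m_t = λV/m_t.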

Assuming that there are l measurements due to clutter among the m_t measurements collected at time t, the a priori probability that any measurement comes from the clutter is equal to l/m_t, hence the equality P(K_t^j = 0 | N_t^0 = l) = l/m_t used to derive (30) from (29).
6) p(X), i.e., p(X_t | X_{t-1}) conditional on X_{t-1}.
The independence structure of this distribution is illustrated in Fig. 11.
The initial particle set S_0 = (s_0^n, 1/N)_{n=1,...,N} is such that each component s_0^{n,i} for i = 1, ..., M is sampled from p(X_0^i) independently from the others. Assume we have obtained S_{t-1} = (s_{t-1}^n, q_{t-1}^n)_{n=1,...,N} with Σ_{n=1}^N q_{t-1}^n = 1. Each particle is a vector of dimension Σ_{i=1}^M n_x^i, where we denote by s_{t-1}^{n,i} the ith component of s_{t-1}^n and where n_x^i designates the dimension of object i. The prediction can be done according to the following equation:

s̃_t^n = (F_t^1(s_{t-1}^{n,1}, v_t^{n,1}), ..., F_t^M(s_{t-1}^{n,M}, v_t^{n,M}))   for n = 1, ..., N.   (31)

Examine now the computation of the likelihood of the observations conditioned on the nth particle. We can write, for all n = 1, ..., N:

p(Y_t = (y_t^1, ..., y_t^{m_t}) | X_t = s̃_t^n) = Π_{j=1}^{m_t} p(y_t^j | s̃_t^n)   (32)
∝ Π_{j=1}^{m_t} [π_t^0/V + Σ_{i=1}^M l_t^i(y_t^j; s̃_t^{n,i}) π_t^i].   (33)
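The mixture weight (33) can be computed per particle as below. This is a sketch under assumed names, taken in log form for numerical safety (a detail of ours, not of the paper):

```python
import math

def joint_log_weight(ys, particle, pis, pi0, V, lik_fns):
    """Unnormalized log-weight of one joint particle from eq. (33): product
    over measurements of [pi0/V + sum_i pi_i * l_i(y_j; x_i)]."""
    logw = 0.0
    for y in ys:
        mix = pi0 / V                       # clutter component
        for pi_i, l_i, x_i in zip(pis, lik_fns, particle):
            mix += pi_i * l_i(y, x_i)       # target-i component
        logw += math.log(mix)
    return logw
```

Note that no hard assignment is made here: every measurement contributes to every target through the mixture, weighted by the association probabilities π_t^i.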

It must be noted that (32) is true only under the assumption of conditional independence of the measurements, which we make. Moreover, the normalization factors between l_t^i and p(Y_t^j = y | K_t^j = i, X_t^i = x) must be the same for all i to write the last equality (33). We still need to estimate at each time step the association probabilities (π_t^i)_{i=1,...,M}, which can be seen as the stochastic coefficients of the M-component mixture. The simplest way would consist in estimating the π vector from the problem parameters, like the number of targets M, the number of obtained measurements m_t, and the clutter density λ. Assuming the π_t^i are equal for all i ≠ 0, we can empirically estimate π_t^i = (m_t - λV)/(M m_t). However, these estimates do not take into account the current measurements. In particular, if a target is not detected during a time period, we would like the associated π component to be smaller than the others. For that, we must incorporate information about the adequacy between the measurements and the estimated targets into the π estimation procedure. Two main ways have been found in the literature to estimate the parameters of such a mixture: the EM method (and its stochastic version, the SEM algorithm [31]) and the Data Augmentation method. The second one amounts in fact to a Gibbs sampler. In [3]-[5] the EM algorithm is extended and applied to multitarget tracking. This method implies that the vectors π_t and X_t are considered as parameters to be estimated. The maximization step can be easily conducted in the case of deterministic trajectories. In the case of nondeterministic trajectories a maximum a posteriori (MAP) estimation is required to complete the M step. Yet, the nonlinearity of the state and observation functions makes this step very difficult. Finally, the estimation is done iteratively in a batch approach, which we would like to avoid. For these reasons, we have not chosen an EM algorithm to estimate the association probabilities.
The Data Augmentation algorithm is quite different in its principle. The vectors X_t, K_t, and π_t are considered to be random variables with prior densities. Samples are then obtained iteratively from their joint posterior using a proper MCMC technique, namely the Gibbs sampler. This method has been studied in [32]-[36], for instance. It can be run sequentially at each time period. The Gibbs sampler is a special case of the Metropolis-Hastings algorithm in which the proposal densities are the conditional distributions and the acceptance probability is consequently always equal to one. The interested reader can refer to [37] for an introduction to Markov chain Monte Carlo simulation methods and also for a presentation of the EM algorithm. For Φ_t = (X_t, K_t, π_t), the method consists in generating a Markov chain that converges to the


stationary distribution p(Φ_t | Y_{0:t}), which cannot be sampled directly. For that, we must be able to get a partition Φ_t^1, ..., Φ_t^P of Φ_t, and to sample alternately from the conditional posterior distribution of each component of the partition. Let the index τ denote the iterations of the Gibbs sampler; the second subscript of the vectors refers to this iteration counter. Assume the τ+1 first elements of the Markov chain (Φ_{t,0}, ..., Φ_{t,τ}) have been drawn. We sample the P components of Φ_{t,τ+1} as follows:

Draw φ^1_{τ+1} from p(φ^1 | Y_{0:t}, φ^2_τ, ..., φ^P_τ)
Draw φ^2_{τ+1} from p(φ^2 | Y_{0:t}, φ^1_{τ+1}, φ^3_τ, ..., φ^P_τ)
...
Draw φ^P_{τ+1} from p(φ^P | Y_{0:t}, φ^1_{τ+1}, ..., φ^{P-1}_{τ+1}).

The proof of the convergence of the Markov chain (Φ_τ)_τ is outlined in the Appendix. We propose to use it in the particle filter extended to multiple targets. In our case, at a given instant t, we follow this approach with

φ^j = K_t^j for j = 1, ..., m_t
φ^{m_t+i} = π_t^i for i = 1, ..., M   (34)
φ^{m_t+M+i} = X_t^i for i = 1, ..., M.

The initialization of the Gibbs sampler consists of assigning uniform association probabilities, i.e., π^i_{t,0} = (1 - π_t^0)/M for all i = 1, ..., M, and taking X_{t,0} = Σ_{n=1}^N q^n_{t-1} s̃_t^n, i.e., the centroid of the predicted particle set. The K_t variables do not need initializing, because at the first iteration of the Gibbs sampler they are sampled conditioned on π^i_{t,0}, i = 1, ..., M, and X_{t,0}. Then, suppose that at instant t we have already simulated (Φ_{t,1}, ..., Φ_{t,τ}). The (τ+1)th iteration is handled as follows.

1) As the (K_t^j)_{j=1,...,m_t} are supposed to be independent, their individual conditional density reads

p(K_t^j | Y_{0:t}, X_t, (K_t^l)_{l≠j}, π_t) = p(K_t^j | Y_t^j, X_t, π_t).   (35)

The (K_t^j) are discrete variables, and we can write (using Bayes's rule, p(a | b, c) = p(b | a, c) p(a | c)/p(b | c)):

P(K_t^j = i | Y_t^j = y_t^j, X_t, π_t) = p(Y_t^j = y_t^j | K_t^j = i, X_t, π_t) P(K_t^j = i | X_t, π_t) / p(Y_t^j = y_t^j | X_t, π_t)   (36)
∝ π_t^i l_t^i(y_t^j; x_t^i)  if i = 1, ..., M,  and  ∝ π_t^0/V  if i = 0.   (37)

The realizations k^j_{t,τ+1} of the vector K_{t,τ+1} are then sampled according to the weights p^{j,0}_{t,τ+1} = π_t^0/V and p^{j,i}_{t,τ+1} = π^i_{t,τ} l_t^i(y_t^j; x^i_{t,τ}) for i = 1, ..., M.

2) Mixture proportion vector π^{1:M}_{t,τ+1} is drawn from the conditional density:

p(π_t^{1:M} | K_{t,τ+1}, X_{t,τ}, Y_{0:t}) = p(π_t^1, ..., π_t^M | K^1_{t,τ+1}, ..., K^{m_t}_{t,τ+1}, X_{t,τ}, Y_{0:t})   (38)
= p(π_t^1, ..., π_t^M | K^1_{t,τ+1}, ..., K^{m_t}_{t,τ+1})   (39)
∝ Dirichlet(π_t^{1:M} | {1 + N^i(K_{t,τ+1})}_{i=1,...,M})   (40)

where we denote by N^i(K) the number of k^j equal to i, and where Dirichlet(δ_1, ..., δ_M) denotes the Dirichlet distribution on the simplex {(π_t^1, ..., π_t^{M-1}, 1 - π_t^1 - ... - π_t^{M-1}) : π_t^1 + ... + π_t^{M-1} ≤ 1} with density proportional to (π_t^1)^{δ_1-1} ··· (π_t^{M-1})^{δ_{M-1}-1} (1 - π_t^1 - ... - π_t^{M-1})^{δ_M-1}. The vector π^{1:M}_{t,τ+1} is first drawn according to (40) and then normalized to restore the sum Σ_{i=1}^M π^i_{t,τ+1} to 1 - π_t^0.

3) X_{t,τ+1} has to be sampled according to the density

p(X_t | Y_{0:t}, K_{t,τ+1}, π_{t,τ+1}) = Π_{i=1}^M p(X_t^i | Y_{0:t}, K_{t,τ+1}, π_{t,τ+1}).   (41)

The values of K_{t,τ+1} can imply that one object is associated with zero or several measurements. Let us define M^i_{t,τ+1} = {j ∈ [1, ..., m_t] : K^j_{t,τ+1} = i}. Hence we decompose the preceding product in two products:

Π_{i : M^i_{t,τ+1} ≠ ∅} p(X_t^i | Y_{0:t-1}, y_t^{M^i_{t,τ+1}}, π_{t,τ+1}) × Π_{i : M^i_{t,τ+1} = ∅} p(X_t^i | Y_{0:t-1}, π_{t,τ+1})   (42)

where y_t^{M^i_{t,τ+1}} = {y_t^j, j ∈ M^i_{t,τ+1}}. The first product contains the targets that are associated with at least one measurement under the association K_{t,τ+1}; in this case, the associated measurements are denoted by y_t^{M^i_{t,τ+1}}. The second product contains the targets that are associated with no measurement under K_{t,τ+1}.

Let i be an integer in the first product. We propose two approaches to sample X_{t,τ+1}.

1) Without making any additional assumption we can write

p(X_t^i | Y_{0:t-1}, y_t^{M^i_{t,τ+1}}, π_{t,τ+1}) = p(y_t^{M^i_{t,τ+1}} | X_t^i) p(X_t^i | Y_{0:t-1}) / p(y_t^{M^i_{t,τ+1}} | Y_{0:t-1}).   (43)

We are not able to sample directly from this density
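Steps 1) and 2) of the iteration can be sketched in a few lines. This is a minimal illustration under assumptions of ours (the function names are hypothetical, the joint state x is held fixed, and the Dirichlet draw uses the standard construction from normalized Gamma variates); step 3), the X_t update, is deliberately omitted here:

```python
import random

def gibbs_sweep(ys, x, pis, pi0, V, lik_fns, rng):
    """One Gibbs iteration over (K_t, pi_t) for a fixed state hypothesis x.

    Step 1, eqs (35)-(37): each K_t^j is drawn independently with weights
    proportional to pi0/V (clutter) and pi_i * l_i(y_j; x_i) (target i).
    Step 2, eqs (38)-(40): pi^(1:M) is drawn from Dirichlet(1 + N^i(K))
    and rescaled so that it sums to 1 - pi0."""
    M = len(x)
    ks = []
    for y in ys:
        w = [pi0 / V] + [pis[i] * lik_fns[i](y, x[i]) for i in range(M)]
        u, c = rng.random() * sum(w), 0.0
        for k, wk in enumerate(w):          # inverse-CDF draw over {0,...,M}
            c += wk
            if u <= c:
                break
        ks.append(k)
    counts = [sum(1 for k in ks if k == i + 1) for i in range(M)]
    g = [rng.gammavariate(1.0 + c, 1.0) for c in counts]  # Dirichlet via Gammas
    tot = sum(g)
    return ks, [(1.0 - pi0) * gi / tot for gi in g]
```

Iterating this sweep (together with the X_t update of step 3) produces the Markov chain whose π components are averaged after burn-in, as in (45).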


Fig. 12. Particle filter for multiple objects with adaptive resampling.

for the same reasons as those exposed in Section II to justify the use of the particle filter (intractability of the integrals). A first solution consists in building the particle set Σ_{τ+1} = (ξ^n_{τ+1}, ρ^n_{τ+1})_{n=1,...,N} whose weights ρ^n_{τ+1} measure the likelihood of the observations assigned by K_{t,τ+1} to object X_t^i. More precisely, we let

ξ^n_{τ+1} = s̃_t^{n,i},
ρ^n_{τ+1} = p(y_t^{M^i_{t,τ+1}} | X_t^i = ξ^n_{τ+1}) q^n_{t-1} / Σ_{n=1}^N p(y_t^{M^i_{t,τ+1}} | X_t^i = ξ^n_{τ+1}) q^n_{t-1}.   (44)

As the predicted empirical law Σ_{n=1}^N q^n_{t-1} δ_{s̃_t^{n,i}} is "close" to the predicted law p(X_t^i | Y_{0:t-1}), we expect the empirical distribution Λ_{τ+1} = Σ_{n=1}^N ρ^n_{τ+1} δ_{ξ^n_{τ+1}} to be close to p(X_t^i | y_t^{M^i_{t,τ+1}}, Y_{0:t-1}). However, the weak convergence of Λ_{τ+1} to p(X_t^i | y_t^{M^i_{t,τ+1}}, Y_{0:t-1}) when N tends towards infinity remains to be proved. Not being able to sample from this last density, X^i_{t,τ+1} is drawn as a realization from Λ_{τ+1}.

2) The second solution assumes that the measurement equation enables us to sample from the density p(X_t = x | Y_t = y) and to forget the observations from the past. The density p(X_t^i | Y_{0:t-1}, y_t^{M^i_{t,τ+1}}, π_{t,τ+1}) is then reduced to p(X_t^i | y_t^{M^i_{t,τ+1}}). We do not assume that the observations y_t^{M^i_{t,τ+1}} are independent, but we use their centroid ȳ and replace p(X_t^i = x | y_t^{M^i_{t,τ+1}}, K_t^{j} = i for j ∈ M^i_{t,τ+1}) by p(X_t^i = x | ȳ).

As far as the complexity of these two solutions is concerned, it is to be noticed that the first one depends linearly on the total number of particles whereas the second is independent of it. On the other hand, the second solution requires the ability to sample from p(X_t | Y_t).

Now let i be an integer in the second product. As we do not have any measurement to correct the predicted particles, we draw a realization from the density Σ_{n=1}^N q^n_{t-1} δ_{s̃_t^n} for X^i_{t,τ+1}. After a finite number of iterations, we estimate the vector π_t by the average of its last realizations:

π̂_t^i = 1/(τ_end - τ_beg) Σ_{τ=τ_beg}^{τ_end} π^i_{t,τ}.   (45)
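The first solution for step 3, eq. (44), amounts to a weighted draw from the predicted particles of one target. A minimal sketch under assumed names (the zero-weight fallback is a numerical safeguard of ours, not part of the paper):

```python
import random

def draw_corrected_substate(ys_i, pred_subs, q, lik_fn, rng):
    """Solution 1 for step 3, eq. (44): reweight the predicted particles of
    one target by the likelihood of the measurements assigned to it by
    K_{t,tau+1}, then draw X^i_{t,tau+1} from the empirical law Lambda."""
    rho = []
    for xi, qn in zip(pred_subs, q):
        w = qn
        for y in ys_i:                      # ys_i = {y_j : K^j = i}
            w *= lik_fn(y, xi)
        rho.append(w)
    tot = sum(rho)
    if tot == 0.0:                          # all likelihoods underflowed
        rho, tot = list(q), sum(q)
    u, c = rng.random() * tot, 0.0          # inverse-CDF draw
    for xi, r in zip(pred_subs, rho):
        c += r
        if u <= c:
            return xi
    return pred_subs[-1]
```

As noted in the text, this draw costs O(N) per target, whereas the second (centroid-based) solution is independent of N but requires sampling from p(X_t | Y_t).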

Finally the weights can be computed according to (33) using the estimate π̂_t^i of π_t^i. By construction, π̂_t^i follows the law p(π_t^i | Y_{0:t}). Thus, the use of a Gibbs sampler makes it possible to take the current measurements into account. Consequently, the estimates measure in a way the a posteriori detection probability of each target. This improves the estimation of the targets, because the measurements contribute to the estimation proportionally to these probabilities π̂_t. Moreover, the a priori probability of detecting a target, usually denoted by P_d, is not needed in the MTPF. This probability is needed when the associations are considered in classical algorithms like the PMHT or the JPDAF. The resampling step is performed in an adaptive way when the estimated effective sample size N̂_eff is under the threshold N_threshold. Due to the estimation of the π_t vector needed for the computation of the particle likelihoods, the convergence of the MTPF could be affected; it could be interesting to evaluate the error on the estimate of X_t implied by the error made on the estimate of π_t. This is not addressed in this work. Figs. 12 and 13 summarize the whole algorithm. Before presenting the results of some simulations, let us detail the approach used in [29, 30], which is the closest to ours.

D. SIR-JPDA Algorithm

The initialization of the particle set is done in the same way as for the joint filters, and the prediction step as well, according to (31). The likelihood is then computed in a different way:

p(Y_t = (y_t^1, ..., y_t^{m_t}) | X_t = s̃_t^n) = Σ_{all associations} p(Y_t = (y_t^1, ..., y_t^{m_t}) | X_t = s̃_t^n, K_t^u) p(K_t^u)   (46)


Fig. 13. Particle filter algorithm for multiple objects with adaptive resampling.

= Σ_{all associations} Π_{j=1}^{m_t} p(y_t^j | s̃_t^n, K_t^u) p(K_t^u)   (47)
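The exhaustive enumeration in (46)-(47) can be made concrete for small problems. The sketch below is ours, not the implementation of [29, 30]: the hypothesis prior follows the structure of (48), with a combinatorial normalization that is a simplifying assumption of this illustration.

```python
import itertools
import math

def sir_jpda_likelihood(ys, particle, lik_fns, Pd, lam, V):
    """Particle likelihood (46)-(47) by exhaustive enumeration of the
    association hypotheses allowed by (A1)-(A2): every measurement comes
    from one target or from clutter, and a target produces at most one
    measurement."""
    M, mt = len(particle), len(ys)
    total = 0.0
    for K in itertools.product(range(M + 1), repeat=mt):
        hits = [k for k in K if k > 0]
        if len(hits) != len(set(hits)):
            continue                     # (A2): at most one measurement per target
        phi = mt - len(hits)             # number of false alarms
        prior = (math.factorial(phi) / math.factorial(mt)
                 * (lam * V) ** phi * math.exp(-lam * V) / math.factorial(phi)
                 * Pd ** len(hits) * (1.0 - Pd) ** (M - len(hits)))
        cond = 1.0
        for y, k in zip(ys, K):
            cond *= (1.0 / V) if k == 0 else lik_fns[k - 1](y, particle[k - 1])
        total += prior * cond
    return total
```

The cost is the point of the comparison with the MTPF: the loop visits O((M+1)^{m_t}) hypotheses per particle, which is exactly the exponential enumeration that the Gibbs-sampler approach avoids.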

where K_t^u denotes the uth association hypothesis at time t. The associations between the measurements and the targets are established under the assumptions (A1)-(A2) exposed in Section IVA. The a priori probability of this association is:

p(K_t^u) = (φ^u!/m_t!) p_F(φ^u) Π_{i=1}^M P_d^{D^u(i)} (1 - P_d)^{1-D^u(i)}   (48)

where φ^u is the number of false alarms in K_t^u, p_F(φ^u) the probability of having φ^u false alarms, P_d the probability for a target to be detected, and D^u(i) ∈ {1, 0} depending on whether target i is detected or not.

E. Extension to Multireceiver Data

A natural extension of the MTPF is to consider that observations can be collected by multiple receivers. Let R be their number. We can easily adapt the particle filter to this situation. We always consider that the M targets (their number is again fixed) obey the state equation (23). Some notation must be added to modify the measurement equations. The observation vector at time t is denoted by y_t = (y_{t,r_1}^1, ..., y_{t,r_{m_t}}^{m_t}), where r_j refers to the receiver which produced the jth measurement. This measurement is then a realization of the stochastic process:

Y_{t,r_j}^j = H_{t,r_j}^i(X_t^i, W_{t,r_j}^j)   if K_t^j = i.   (49)


We assume the independence of the observations collected by the different receivers. We denote by l_{t,r_j}^i(y; x) the known functions which are proportional to p(Y_{t,r_j}^j = y | K_t^j = i, X_t^i = x). The likelihood of the observations conditioned on the nth particle is readily obtained:

p(Y_t = (y_{t,r_1}^1, ..., y_{t,r_{m_t}}^{m_t}) | X_t = s̃_t^n) = Π_{j=1}^{m_t} p(y_{t,r_j}^j | s̃_t^n)   (50)


Fig. 14. Three-target bearings-only experiment of Section V. (a) Trajectories of the three targets and of the observer. (b) Simulated bearings at each time, with a detection hole for one target between times 600 and 700.

" # mt M Y ?0 X j i t / lt,rj (yt,rj ; stn,i )?ti : + V

j=1 i=1

(51)
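The multireceiver weight (51) differs from (33) only in that the likelihood function is indexed by the receiver of each measurement. A minimal sketch under assumed names:

```python
import math

def multireceiver_log_weight(scan, particle, pis, pi0, V, lik_table):
    """Multireceiver weight, eq. (51): same mixture as (33), but the
    likelihood l^i_{t,r_j} depends on the receiver r_j of each measurement.

    scan      : list of (r_j, y_j) pairs
    lik_table : lik_table[r][i](y, x_i), one likelihood per receiver/target
    """
    logw = 0.0
    for r, y in scan:
        mix = pi0 / V
        for i, x_i in enumerate(particle):
            mix += pis[i] * lik_table[r][i](y, x_i)
        logw += math.log(mix)
    return logw
```

Because each measurement carries its own receiver tag, scans of different receivers and of different periodicities can be mixed freely in the same update, which is the point made in the text.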

There is no strong limitation on the use of the particle filter for multireceiver MTT. The MRMTPF is obtained from the MTPF by replacing the likelihood functions l_t^i(y; x) by the functions l_{t,r_j}^i(y; x). Moreover, it can deal with measurements of varied periodicities.

V. APPLICATION TO BEARINGS-ONLY PROBLEMS

The following multitarget scenario has been considered to illustrate our algorithm. Three targets follow a near-constant-velocity model defined by (7) with σ_x = σ_y = 0.0005 m/s². The initial positions and velocities (x, y, vx, vy) of the targets and of the observer are:

X_0^1 = (200 m, 1500 m, 1 m/s, -0.5 m/s)
X_0^2 = (0 m, 0 m, 1 m/s, 1 m/s)
X_0^3 = (-200 m, -1500 m, 1 m/s, 0.5 m/s)
X_0^obs = (3500 m, -2500 m, -0.5 m/s, 0 m/s).

The observer follows a leg-by-leg trajectory. Its velocity vector is constant on each leg and modified as follows (in m/s):

(vx_30^obs, vy_30^obs) = (1.2, 0.3),  (vx_100^obs, vy_100^obs) = (-3.0, 0.3),
(vx_200^obs, vy_200^obs) = (1.2, 0.3),  (vx_500^obs, vy_500^obs) = (-3.0, 0.3),   (52)
(vx_600^obs, vy_600^obs) = (1.2, 0.3),  (vx_800^obs, vy_800^obs) = (-4.0, 0.3),
(vx_900^obs, vy_900^obs) = (1.2, 0.3).

The trajectories of the three targets and of the observer are plotted in Fig. 14(a). The targets produce one measurement at each time period according to (9) with σ_w = 0.02 rad, except during the time interval [600, 700], where the first object does not produce any measurement. The simulated bearings are plotted in Fig. 14(b). As soon as the difference between two bearings originating from two different targets is lower than the standard deviation of the observation noise, the two measurements cannot be distinguished, which makes the data association for this scenario very difficult. This difficulty is increased by the detection hole for the first object.

We compare the estimated trajectories when the assignment probabilities are estimated, respectively, by the two versions of the Gibbs sampler of Section IVC. In both cases, the particle filters have been initialized according to a Gaussian law whose mean vectors X_mean^i equal the true initial states X_0^i shifted in position by a few hundred meters (offsets of ±200 m or -100 m per component) and unshifted in velocity (53), and whose covariance matrices are

X_cov^i = diag(200², 200², 0.05², 0.05²)   for i = 1, 2, 3.   (54)
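The scenario above can be simulated in a few lines. This sketch is ours and deliberately simplified: the leg-by-leg observer maneuvers of (52) are omitted (the observer keeps its initial velocity), the function names are hypothetical, and every target is detected at every step.

```python
import math
import random

def simulate_bearings(x0_targets, x0_obs, T, sigma_a=0.0005, sigma_w=0.02, seed=0):
    """Sketch of the Section V scenario: targets follow a near-constant-
    velocity model and the moving observer collects one noisy bearing per
    target at every time step."""
    rng = random.Random(seed)
    def step(s, noisy):
        x, y, vx, vy = s
        ax = rng.gauss(0.0, sigma_a) if noisy else 0.0
        ay = rng.gauss(0.0, sigma_a) if noisy else 0.0
        return (x + vx + 0.5 * ax, y + vy + 0.5 * ay, vx + ax, vy + ay)
    targets, obs, scans = list(x0_targets), x0_obs, []
    for _ in range(T):
        targets = [step(s, True) for s in targets]
        obs = step(obs, False)
        scans.append([math.atan2(s[0] - obs[0], s[1] - obs[1])
                      + rng.gauss(0.0, sigma_w) for s in targets])
    return scans
```

Feeding these scans (shuffled and mixed with clutter) to the MTPF reproduces the kind of near-coincident bearings that makes the association in this scenario hard.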


Fig. 15. Averaged estimates (dotted lines) and 2σ confidence ellipses obtained with 1000 particles. (a) Estimation of π_t with the first version of the Gibbs sampler. (b) Estimation of π_t with the second version of the Gibbs sampler.

Fig. 16. Estimated components of the vector π̂_t obtained with the first version of the Gibbs sampler and 1000 particles. (a) π̂_t^1. (b) π̂_t^2. (c) π̂_t^3.

Fig. 17. Estimated components of the vector π̂_t obtained with the second version of the Gibbs sampler and 1000 particles. (a) π̂_t^1. (b) π̂_t^2. (c) π̂_t^3.

The burn-in period of the Gibbs sampler has been fixed to τ_beg = 100 and the total number of iterations to τ_end = 500. The averaged estimates with the associated 2σ confidence ellipses obtained with 1000 particles and adaptive resampling are presented in Figs. 15(a) and 15(b), using respectively the first and the second version of the Gibbs sampler to estimate the vector π_t. The two plots show that the data association is overcome by the two versions of the algorithm. There is no trajectory reversal and the estimates are quite satisfactory for both versions. The confidence ellipses increase when the simulated bearings are very close, i.e., especially between times 450 and


600. After this time period, their size stabilizes. The obtained estimates are very similar for both versions of the algorithm. However, the estimation of the π_t vector is much more satisfactory for the first version. Figs. 16 and 17 show the results of the estimation of the three components of π_t, respectively with the first and the second versions of the algorithm. Fig. 18 represents the average of each component π_t^i over successive intervals of 100 time steps and over the 20 trials. At time 100u, the average over the time interval [100u, 100(u+1)] and over the 20 trials is plotted for each component. When there is an ambiguity about the origin of the measurements (i.e., when the difference between the bearings is


Fig. 18. Average of the estimated components of the vector π_t over ten consecutive time intervals of length 100 and over 20 trials. (a) First version of the Gibbs sampler. (b) Second version. Averaged estimates marked with "+" for π^1, "o" for π^2, and "*" for π^3.

lower than the standard deviation of the observation noise), the first version makes π̂ vary on average around 1/3 for M = 3 objects, and it produces uniform estimates (1/3 for M = 3 objects) when the ambiguity disappears (before time 400 or after time 700, for instance). Second, the momentary measurement gap for the first object is correctly handled: the first component π̂_t^1 is instantaneously estimated at 0.26 on average from instant 600 to 700, and the second and third components at 0.37. The estimates obtained with the second version of the Gibbs sampler are much less satisfactory. First, the measurement gap for the first object is not detected by this version, as the π̂_t^1 estimate does not decrease during the time interval [600, 700]. Second, as shown in Fig. 18(b), the averaged estimates diverge from instant 100 from the expected value 1/3, whereas all the targets are detected at that time. The expected superiority of the first algorithm seems to hold in practice, with a better estimation of π_t. However, one can be surprised that the bad estimation of π_t obtained with the second algorithm has no impact on the estimation of the targets. On this subject, it must be noticed that during the first hundred instants, the vector π_t is correctly handled by both versions. Moreover, the particles recover very quickly from their shifted initialization. Thus, from time 100, as the dynamic noise on the targets is low enough, the predicted particles are all "good." Moreover, during the time interval [400, 700], the targets produce very close bearings. Consequently, the likelihood l_t^i(y_t^j; x_t^i) of a measurement given a target is almost independent of the target index: l_t^i(y_t^j; x_t^i) ≈ l̄ for all i, and then Σ_{i=0}^M l_t^i(y_t^j; x_t^i) π̂_t^i ≈ l̄ Σ_{i=0}^M π̂_t^i = l̄, whatever the π̂_t estimates. The estimation quality of π_t then has little impact on the likelihood of the particles during this time interval. As for the computational cost, it

takes almost 450 ms per iteration with N = 1000 particles, τ_beg = 100 and τ_end = 500, on an 863 MHz Pentium III.

VI. CONCLUSION

MTT has been investigated in the framework of particle filtering and Gibbs sampling. Target state vectors and association probabilities are estimated jointly, without enumeration, pruning, or gating, by means of particle sets representing the joint a posteriori law of the target states. Two versions, derived from a common formalism, have been considered. This framework is sufficiently versatile to handle a wide variety of situations, like MTT with multiple receivers, including nonlinear models.

APPENDIX. GIBBS SAMPLER

By construction, (Φ_τ)_{τ∈N} is a Markov chain. Let us prove that it admits the stationary distribution μ = p(Φ | Y_{0:t}). For that, let us prove that if Φ_τ is distributed according to μ then Φ_{τ+1} is also distributed according to μ. Assume Φ_τ = (φ_τ^1, ..., φ_τ^P) ~ μ. Then, for all (z^1, ..., z^P):

p(φ_{τ+1}^1 = z^1, φ_τ^2 = z^2, ..., φ_τ^P = z^P)
= p(φ_{τ+1}^1 = z^1 | φ_τ^2 = z^2, ..., φ_τ^P = z^P) p(φ_τ^2 = z^2, ..., φ_τ^P = z^P)
= p(φ^1 = z^1 | Y_{0:t}, φ^2 = z^2, ..., φ^P = z^P) p(φ^2 = z^2, ..., φ^P = z^P | Y_{0:t})
= p(φ^1 = z^1, ..., φ^P = z^P | Y_{0:t}).   (55)

The second equality is obtained from the first in (55) as, by construction, φ_{τ+1}^1 ~ p(φ^1 | φ_τ^{2:P}, Y_{0:t}). (φ_{τ+1}^1, φ_τ^2, ..., φ_τ^P) is then distributed according to the law p(φ^1, φ^2, ..., φ^P | Y_{0:t}). In the same way we can


show that (φ^1_{τ+1}, φ^2_{τ+1}, φ^3_τ, ..., φ^P_τ), ..., (φ^1_{τ+1}, ..., φ^P_{τ+1}) are distributed according to p(φ^1, ..., φ^P | Y_{0:t}). Moreover, provided that the conditional distributions p(φ^1 | Y_{0:t}, φ^2, ..., φ^P), ..., p(φ^P | Y_{0:t}, φ^1, ..., φ^{P-1}) are strictly positive, (Φ_τ) is irreducible. These two conditions imply the convergence, for μ-almost all Φ_0, of (Φ_τ)_{τ∈N} to μ.

REFERENCES

[1] Reid, D. (1979) An algorithm for tracking multiple targets. IEEE Transactions on Automatic Control, 24, 6 (Dec. 1979), 843—854.

[2] Fortmann, T. E., Bar-Shalom, Y., and Scheffe, M. (1983) Sonar tracking of multiple targets using joint probabilistic data association. IEEE Journal of Oceanic Engineering, 8 (July 1983), 173—184.

[3] Gauvrit, H., Le Cadre, J-P., and Jauffret, C. (1997) A formulation of multitarget tracking as an incomplete data problem. IEEE Transactions on Aerospace and Electronic Systems, 33, 4 (Oct. 1997), 1242—1257.

[4] Streit, R. L., and Luginbuhl, T. E. (1994) Maximum likelihood method for probabilistic multi-hypothesis tracking. In Proceedings of SPIE International Symposium, Signal and Data Processing of Small Targets 1994, 2235, Orlando, FL, Apr. 1994.

[5] Willett, P., Ruan, Y., and Streit, R. (1998) The PMHT for maneuvering targets. In Proceedings of the Conference on Signal and Data Processing of Small Targets, 3373 (SPIE Annual International Symposium on Aerosense), Orlando, FL, Apr. 1998.

[6] Rago, C., Willett, P., and Streit, R. (1995) A comparison of the JPDA and PMHT tracking algorithms. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, May 1995, 3571—3575.

[7] Gordon, N., Salmond, D., and Smith, A. (1993) Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings, Pt. F, Radar and Signal Processing, 140, 2 (Apr. 1993), 107—113.

[8] Isard, M., and Blake, A. (1998) CONDENSATION—Conditional density propagation for visual tracking. International Journal of Computer Vision, 29, 1 (1998), 5—28.

[9] Hürzeler, M., and Künsch, H. R. (1998) Monte Carlo approximations for general state space models. Journal of Computational and Graphical Statistics, 7 (1998), 175—193.

[10] Musso, C., and Oudjane, N. (1999) Particle methods for multimodal filtering. Presented at the 2nd International Conference on Information Fusion, IEEE, Silicon Valley, CA, July 6—8, 1999.

[11] Avitzour, D. (1995) Stochastic simulation Bayesian approach to multitarget tracking. IEE Proceedings, Radar, Sonar and Navigation, 142, 2 (Apr. 1995), 41—44.

[12] Gordon, N. (1997) A hybrid bootstrap filter for target tracking in clutter. IEEE Transactions on Aerospace and Electronic Systems, 33, 1 (1997), 353—358.

[13] MacCormick, J., and Blake, A. (1999) A probabilistic exclusion principle for tracking multiple objects. In Proceedings of the 7th International Conference on Computer Vision, Kerkyra, Greece, Sept. 20—27, 1999, 572—580.

[14] MacCormick, J., and Isard, M. (2001) BraMBLe: A Bayesian multiple-blob tracker. In Proceedings of the 8th International Conference on Computer Vision, 2, Vancouver, Canada, July 9—12, 2001, 34—41.

[15] Schulz, D., Burgard, W., Fox, D., and Cremers, A. B. (2001) Tracking multiple moving targets with a mobile robot using particle filters and statistical data association. In Proceedings of the IEEE International Conference on Robotics and Automation, Seoul, Korea, May 21—26, 2001, 1665—1670.

[16] Doucet, A. (1998) On sequential simulation-based methods for Bayesian filtering. Technical Report CUED/F-INFENG/TR 310, Signal Processing Group, Department of Engineering, University of Cambridge, 1998.

[17] MacCormick, J. (2000) Probabilistic Modelling and Stochastic Algorithms for Visual Localisation and Tracking. Ph.D. dissertation, University of Oxford, Jan. 2000.

[18] Doucet, A., De Freitas, N., and Gordon, N. (Eds.) (2001) Sequential Monte Carlo Methods in Practice. New York: Springer, 2001, 525—532.

[19] Salmond, D. J. (1990) Mixture reduction algorithms for target tracking in clutter. SPIE Signal and Data Processing of Small Targets, 1305 (1990), 434—445.

[20] Kong, A., Liu, J. S., and Wong, W. H. (1994) Sequential imputation method and Bayesian missing data problems. Journal of the American Statistical Association, 89 (1994), 278—288.

[21] Liu, J. S. (1996) Metropolized independent sampling with comparison to rejection sampling and importance sampling. Statistics and Computing, 6 (1996), 113—119.

[22] Del Moral, P., and Guionnet, A. (1999) Central limit theorem for nonlinear filtering and interacting particle systems. The Annals of Applied Probability, 9, 2 (1999), 275—297.

[23] Doucet, A. (2000) Convergence of sequential Monte Carlo methods. Technical Report CUED/F-INFENG/TR 381, Signal Processing Group, Department of Engineering, University of Cambridge, 2000.

[24] Li, X., and Jilkov, V. (2000) A survey of maneuvering target tracking: Dynamic models. Presented at the Conference on Signal and Data Processing of Small Targets, Orlando, FL, Apr. 2000.

[25] Hassab, J. C. (1989) Underwater Signal and Data Processing. Boca Raton, FL: CRC Press, 1989, 221—243.

[26] Peach, N. (1995) Bearings-only tracking using a set of range-parameterised extended Kalman filters. IEE Proceedings—Control Theory and Applications, 142, 1 (Jan. 1995).

[27] Grossman, W. (1994) Bearings-only tracking: A hybrid coordinate system approach. Journal of Guidance, Control, and Dynamics, 17, 3 (May—June 1994), 451—457.

[28] Bar-Shalom, Y., and Fortmann, T. E. (1988) Tracking and Data Association. New York: Academic Press, 1988.

[29] Karlsson, R., and Gustafsson, F. (2001) Monte Carlo data association for multiple target tracking. In Proceedings of the IEE Seminar Target Tracking: Algorithms and Applications (Oct. 2001), 13/1—13/5.

[30] Orton, M., and Marrs, A. (2001) A Bayesian approach to multi-target tracking and data fusion with out-of-sequence measurements. In Proceedings of the IEE Seminar Target Tracking: Algorithms and Applications (Oct. 2001), 15/1—15/5.

[31] Celeux, G., and Diebolt, J. (1992) A stochastic approximation type EM algorithm for the mixture problem. Stochastics and Stochastics Reports, 41 (1992), 119—134.

[32] Gelfand, A. E., and Smith, A. F. M. (1990) Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85 (1990), 398—409.

[33] Casella, G., and George, E. I. (1992) Explaining the Gibbs sampler. The American Statistician, 46, 3 (1992), 167—174.

[34] Smith, A., and Roberts, G. (1993) Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods. Journal of the Royal Statistical Society, Series B, 55, 1 (1993), 3—24.

[35] Diebolt, J., and Robert, C. P. (1994) Estimation of finite mixture distributions through Bayesian sampling. Journal of the Royal Statistical Society, Series B, 56 (1994), 363—375.

[36] Stephens, M. (1997) Bayesian methods for mixtures of normal distributions. Ph.D. dissertation, Magdalen College, Oxford, 1997.

[37] de Freitas, J. F. G. (1999) Bayesian methods for neural networks. Ph.D. dissertation, Trinity College, University of Cambridge, 1999.

Carine Hue was born in 1977. She received the M.Sc. degree in mathematics and computer science in 1999 from the University of Rennes, France. Since 1999, she has been pursuing the Ph.D. degree with IRISA, Rennes. She works on particle filtering methods for tracking in signal processing and image analysis.


Jean-Pierre Le Cadre (M’93) received the M.S. degree in mathematics in 1977, the “Doctorat de 3ème cycle” in 1982, and the “Doctorat d’État” in 1987, the latter two from INPG, Grenoble. From 1980 to 1989, he worked at the GERDSM (Groupe d’Etudes et de Recherche en Détection Sous-Marine), a laboratory of the DCN (Direction des Constructions Navales), mainly on array processing. Since 1989, he has been with IRISA/CNRS, where he is “Directeur de Recherche” at CNRS. His current interests include system analysis, detection, multitarget tracking, data association, and operations research. Dr. Le Cadre received (with O. Zugmeyer) the EURASIP Signal Processing Best Paper Award (1993).

Patrick Pérez was born in 1968. He graduated from École Centrale Paris, France, in 1990, and received the Ph.D. degree in signal processing and telecommunications from the University of Rennes, France, in 1993. After one year as an Inria post-doctoral fellow at Brown University, Providence, RI (Department of Applied Mathematics), he was appointed as a full-time Inria researcher. In 2000 he joined Microsoft Research, Cambridge, UK. His research interests include probabilistic models for image understanding, high-dimensional inverse problems in image analysis, analysis of motion, and tracking in image sequences.

812 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 38, NO. 3 JULY 2002
