

Extension of the ICP Algorithm to Non-Rigid Intensity-Based Registration of 3D Volumes

Jacques Feldmar, Grégoire Malandain, Jérôme Declerck and Nicholas Ayache
INRIA Sophia Antipolis, Projet EPIDAURE
2004 route des Lucioles, B.P. 93, 06902 Sophia Antipolis Cedex, France.
Email: Jacques.Feldmar@sophia.inria.fr
Tel: (33) 93-65-79-27, Fax: (33) 93-65-76-69

Abstract
We present in this paper a new registration and gain correction algorithm for 3D medical images. It is intensity based. The basic idea is to represent the images by 4D points $(x_j, y_j, z_j, i_j)$ and to define a global energy function based on this representation. For the minimization, we propose a technique which does not require computing the derivatives of this criterion with respect to the parameters. It can be understood as an extension of the Iterative Closest Point algorithm [4, 37] or as an application of the formalism proposed in [9]. Two parameters allow us to have a coarse-to-fine strategy both for resolution and deformation. Our technique has the advantage of minimizing a well-defined global criterion, of dealing with various classes of transformations (for example rigid, affine and volume spline), of being simple to implement and of being efficient in practice. Results on real brain and heart 3D images are presented to demonstrate the validity of our approach.

1. Introduction
Registration is a key problem in medical imaging. Indeed, the physician must often compare or fuse different images. The problem is as follows: given two 3D images, find the geometric transformation that best superposes them, with respect to some constraints. If the two images come from the same patient and from a rigid anatomical organ, then the problem is rigid registration. Otherwise it is non-rigid registration. Registration techniques can be classified into two classes: 1) techniques using additional artificial markers and 2) techniques without additional artificial markers. The fixation of markers can be very invasive or can impose unacceptable constraints. This paper is related to the second class of techniques.

Different methods have been proposed to try to solve the registration problem without additional markers. Usually, registration is based on a representation computed from the 3D images. This representation can be high level (graphs, crest points, crest lines), middle level (surfaces, contours) or low level. We do not review in this paper registration methods based on high and middle level representations. They are quite numerous; a complete review can be found in [2, 5, 33, 16] and we just quote the most recent papers. The registration method presented in this paper is related to intensity-based techniques. Most of them address 3D-3D rigid registration and try to maximize the correlation [36, 20] or the mutual information [28, 11, 34] between the two images. The basic idea is that a high or middle level representation can be difficult to compute, either because of the image acquisition modality or because the organs do not have well-defined contours in the images. Brain images are a good example of images that are difficult to segment. A lot of research has been done to try to solve this problem [21, 35, 24]. However, matching two MR brain images coming from two different patients has an important application. Indeed, we have access to a brain image which has been manually segmented (or labeled) (courtesy of Ron Kikinis, at the Brigham and Women's Hospital, Boston) and we use it as an anatomical atlas. Hence, non-rigid interpatient registration allows us to label automatically an MR image of a new patient into anatomical regions thanks to the knowledge of the voxel to voxel correspondence. Some interesting research has been done to compute this matching based on crest lines, surfaces or contours [29, 27, 16, 31] or based directly on the intensities in the images [3, 8, 18, 32, 12]. These techniques are either physics based or are based on local information, and they do not minimize a global criterion. The algorithm presented in this paper is different: we compute a global geometric transformation minimizing an explicit global criterion. Note that this idea of deforming an atlas towards an image can be criticized [23]. However, we believe that our method can be useful at least as a preprocessing step before a more sophisticated approach is applied.

One of the difficulties is to segment (for point-based techniques) or to correct the intensity (for intensity-based techniques) of the images to register. This is because of the shape of the brain but also because of the gain problem in the MR images. We present in this paper a new registration and gain correction algorithm which is intensity based. This algorithm has numerous applications. Indeed, it should allow us to perform registration when contour (or higher level feature) extraction is difficult. It is an extension of the ICP algorithm [4, 37, 25, 7, 6]. Our technique has the advantage of minimizing a well-defined global criterion, of dealing with various classes of transformations (for example rigid, affine and volume spline), of being simple to implement and of being efficient in practice.

In this paper, we first present the representation on which the algorithm is based and the corresponding minimized criterion (section 2). Then, we describe the minimization algorithm (section 3) and the computation of the representation (section 4). Finally, we present the first results obtained with brain data, which demonstrate the validity of our approach (sections 5 and 6). We conclude by presenting the future directions of this work.

2. A global correlation criterion

2.1 The classical criterion

The most straightforward idea for registering two images $i_1$ and $i_2$ with intensity functions $I_1(x, y, z)$ and $I_2(x, y, z)$ is probably to minimize the criterion:

$$C(f) = \sum_{M_i \in i_1} \big( I_2(f(M_i)) - I_1(M_i) \big)^2,$$

where $f$ is a 3D-3D geometric transformation. If it is necessary to correct the intensity to register the two images, one can minimize the following criterion:

$$C'(f, g) = \sum_{M_i \in i_1} \big( I_2(f(M_i)) - g(I_1(M_i), M_i) \big)^2,$$

where $f$ is a 3D-3D geometric transformation and $g$ is an intensity correction function. But these formulations have a drawback: they search for an exact superposition of the two images, although this might not always be possible with the considered class of transformations or deformations. We want to bring nearer the points with close intensity, but we want to keep a constrained deformation.

2.2 Our criterion

In our formulation, we consider 3D images as surfaces in a 4D space. Hence, an image corresponding to a function $i = I(x, y, z)$ is represented by a set of 4D points $(x_j, y_j, z_j, i_j)$. The first three coordinates are spatial coordinates and the fourth one is an intensity coordinate¹. We propose to minimize a global criterion measuring the correlation between the two images by deforming the scene image into the model image. In our formulation, it means that we deform the 4D surface. Then, we correct both the geometry and the intensity in the image². The minimized energy function is as follows:

$$E(f, g) = \sum_{(x_j, i_j) \in \mathrm{Image}_{\mathrm{scene}}} d\Big( \big(f(x_j), g(x_j, i_j)\big),\ CP_{4D}\big(f(x_j), g(x_j, i_j)\big) \Big)^2, \qquad (1)$$

where:

$x_j$ denotes the three spatial coordinates of point $M_j$, i.e. $(x_j, y_j, z_j)$.

$f$ is a function which associates a 3D point with a 3D point: it is the geometric transformation of the scene image. Note that it does not depend on the intensity.

$g$ is a function which associates a scalar value with a 4D point: it associates a new intensity to a point in the image depending on its position and its current intensity.

$CP_{4D}$ is the function which associates to a 4D point its closest point among the points describing the model image.

Finally, $d$ is a function such that, given two 4D points $M = (x, i)$ and $N = (x', i')$,

$$d(M, N) = \big( \lambda_1 (x - x')^2 + \lambda_2 (y - y')^2 + \lambda_3 (z - z')^2 + \lambda_4 (i - i')^2 \big)^{1/2},$$

where the $\lambda_i$ are coefficients normalizing each coordinate between 0 and 1.

Of course, a key point is to choose the definition domain of the function $E$: this constrains the functions $f$ and $g$. For example, if the two images come from the same anatomical object and if the voxel intensities correctly represent the associated tissues, then the searched function $f$ will be a rigid displacement and $g$ will be fixed to be the identity function. When the two anatomical regions do not come from the same patient, or when the anatomical region is deformable, the function $f$ can be an affine or spline function (see appendix A). When the intensity in the images is perturbed by a distortion, one can also search $g$ as an affine, polynomial or spline function, depending on the physical analysis of the perturbation.

¹ More details about the choice of these 4D points to represent the images will be given in section 4.1.
² Of course, we choose a very global function for intensity correction; otherwise registration would not make sense.
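To make the notation concrete, here is a minimal numerical sketch (our illustration, not the authors' implementation) of the 4D point representation and of the evaluation of the energy $E(f, g)$ of equation (1). It assumes numpy and scipy are available, uses one 4D point per voxel (the compact representation of section 4.1 is not used here), and chooses the $\lambda_i$ as inverse squared coordinate ranges so that each coordinate varies roughly between 0 and 1.

```python
# Sketch of the 4D representation and of E(f, g) from equation (1).
# Assumptions (not from the paper): one 4D point per voxel, lambda_i = 1/range_i^2.
import numpy as np
from scipy.spatial import cKDTree

def image_to_4d_points(volume, spacing=(1.0, 1.0, 1.0)):
    """Represent a 3D image I(x, y, z) by 4D points (x, y, z, i), one per voxel.
    The volume is indexed (z, y, x); spacing is given as (dx, dy, dz)."""
    zz, yy, xx = np.indices(volume.shape)
    coords = np.stack([xx.ravel() * spacing[0],
                       yy.ravel() * spacing[1],
                       zz.ravel() * spacing[2]], axis=1)
    return np.hstack([coords, volume.ravel().astype(float)[:, None]])   # shape (N, 4)

def normalization_weights(points):
    """The lambda_i coefficients normalizing each of the 4 coordinates."""
    extent = points.max(axis=0) - points.min(axis=0)
    extent[extent == 0] = 1.0
    return 1.0 / extent ** 2

def energy(scene_pts, model_pts, f, g):
    """E(f, g): sum over the scene points of the squared weighted 4D distance
    between (f(x_j), g(x_j, i_j)) and its closest model point CP_4D(...)."""
    sqrt_lam = np.sqrt(normalization_weights(model_pts))
    tree = cKDTree(model_pts * sqrt_lam)             # 4D kd-tree in normalized coordinates
    moved = np.hstack([f(scene_pts[:, :3]),          # geometric transformation f
                       g(scene_pts)[:, None]])       # intensity correction g(x_j, i_j)
    dists, _ = tree.query(moved * sqrt_lam)          # weighted distance to the closest model point
    return float(np.sum(dists ** 2))
```

For instance, for rigid registration with a single multiplicative gain, $f$ could be `lambda X: X @ R.T + t` and $g$ could be `lambda P: gain * P[:, 3]`.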

2.3 Smoothing

When $f$ and $g$ are deformation functions, it is often necessary to add a smoothing term to $E$. In these cases, the new energy function $E_{\mathrm{smooth}}$ is:

$$E_{\mathrm{smooth}}(f, g) = E(f, g) + \alpha \sum_{(x_j, i_j) \in \mathrm{Image}_{\mathrm{scene}}} \mathrm{Smoothness}(f, g, (x_j, i_j)),$$

where $\mathrm{Smoothness}(f, g, (x_j, i_j))$ is the sum of the norms of the second derivatives of $f$ and $g$ with respect to each of their coordinates ("bending energy"), respectively at the points $(x_j)$ and $(x_j, i_j)$.

2.4 Use of the gradient information

Even if the algorithm deals directly with intensities in the images, it can be desirable to enhance the importance of areas where the intensity varies a lot. These are the areas where the norm of the gradient is high. To obtain this result, we weight each term of the energy $E$ with the norm of the gradient at the corresponding point:

$$E(f, g) = \sum_{(x_j, i_j) \in \mathrm{Image}_{\mathrm{scene}}} \|\nabla I_{\mathrm{scene}}(x_j)\|\ d\Big( \big(f(x_j), g(x_j, i_j)\big),\ CP_{4D}\big(f(x_j), g(x_j, i_j)\big) \Big)^2. \qquad (2)$$

Note that it is possible to use the information of gradient direction. In this case, the points representing the images are no longer 4D points. They are 7D points: three spatial coordinates, one intensity coordinate and three gradient coordinates. Hence, the distance between two points is a compromise between the spatial distance, the difference of gradient norm and orientation, and the difference of intensity. The minimization of the corresponding energy tends to minimize this difference between the two sets of points describing the images. This way of using the gradient information is similar to the use of surface normals presented in [17] for rigid surface registration.
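As a small illustration of this weighting (a sketch under assumptions that are not in the paper: isotropic voxels and a central finite-difference gradient estimate), the per-point weights of equation (2) can be computed as follows.

```python
# Sketch: per-point weights for equation (2), i.e. the gradient norm of the scene
# image at each voxel. The gradient estimator (np.gradient, central differences,
# unit spacing) is our assumption; the paper does not specify one.
import numpy as np

def gradient_norm_weights(volume):
    gz, gy, gx = np.gradient(volume.astype(float))    # volume indexed (z, y, x)
    return np.sqrt(gx ** 2 + gy ** 2 + gz ** 2).ravel()

# In equation (2), the squared distance term of scene point j is simply
# multiplied by gradient_norm_weights(scene_volume)[j].
```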

3. Minimization technique

To minimize the energy function $E$ or $E_{\mathrm{smooth}}$, we developed a technique which does not require computing the derivatives of these functions with respect to the parameters. It can be understood as an extension of the Iterative Closest Point algorithm [4, 37] or as an application of the formalism proposed in [9].

3.1 The algorithm

The minimization algorithm is iterative. At each iteration $i$, we compute two new estimates $f_i$ and $g_i$ of $f$ and $g$ in two stages.

Stage 1: we build a set of pairs of 4D points $\mathrm{Match}_i$ by associating with each point $M_j$ in the scene image the point $N_j$ such that

$$N_j = CP_{4D}\big( (f_{i-1}(x_j), g_{i-1}(M_j)) \big),$$

where $x_j$ are the three spatial coordinates of $M_j$. $\mathrm{Match}_i$ is made of the pairs $(M_j, N_j)$.

Stage 2: we simply compute, in the least squares sense, the best transformations $f_i$ and $g_i$ corresponding to $\mathrm{Match}_i$. These are the $f_i$ and $g_i$ minimizing the criterion:

$$\sum_{(M_j, N_j) \in \mathrm{Match}_i} d\big( (f(x_j), g(M_j)),\ N_j \big)^2 + \alpha \sum_{(M_j, N_j) \in \mathrm{Match}_i} \mathrm{Smoothness}(f, g, M_j).$$

For the rigid, affine and spline transformation classes and for the smoothing terms which we are using, this criterion is quadratic and the least squares estimation turns out to be the resolution of a linear system.

The fact that this algorithm minimizes the defined energy, and its convergence, are easy to demonstrate³. Let us define the energy function $E'$:

$$E'(f, g, \mathrm{Match}) = \sum_{M_j \in \mathrm{Image}_{\mathrm{scene}}} d\big( (f(x_j), g(M_j)),\ \mathrm{Match}(M_j) \big)^2 + \alpha \sum_{M_j \in \mathrm{Image}_{\mathrm{scene}}} \mathrm{Smoothness}(f, g, M_j).$$

At stage 1 of our algorithm, the variables $f$ and $g$ are fixed and $E'$ is minimized with respect to $\mathrm{Match}$. Indeed, in this case, the function $\mathrm{Match}$ minimizing $E'$ is such that:

$$\mathrm{Match}(M_j) = CP_{4D}\big( (f(x_j), g(M_j)) \big).$$

At stage 2, the variable $\mathrm{Match}$ is fixed and $E'$ is minimized with respect to $f$ and $g$. Thus, at each stage, $E'$ decreases. Because $E'$ is positive, the convergence is guaranteed, even if one can converge towards a local minimum and even if the convergence can require an infinite time.

This minimization technique is very efficient. Contrary to classical minimization techniques, it is not "local" (the transformation parameters can vary a lot between two successive iterations) and it requires neither the computation of the derivative of $E$ with respect to the parameters nor the tuning of additional parameters. On the other hand, it assumes that each point in the scene image has a correspondent in the model image.

³ Of course, this does not guarantee that we are going to find the global minimum. We are minimizing a non-convex function and we can just prove the convergence towards a potentially local minimum.
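The following sketch (our illustration, not the authors' code) shows the shape of this two-stage minimization, specialized to an affine $f$ and an identity $g$; an affine map has zero bending energy, so the smoothing term disappears and stage 2 reduces to an ordinary linear least squares problem.

```python
# Sketch of the two-stage minimization of section 3.1 for an affine f and g = identity.
# Assumptions: scene_pts and model_pts are (N, 4) arrays of 4D points; the lambda_i
# normalization is the inverse coordinate range, as in the earlier sketch.
import numpy as np
from scipy.spatial import cKDTree

def icp4d_affine(scene_pts, model_pts, n_iter=20):
    extent = np.maximum(model_pts.max(axis=0) - model_pts.min(axis=0), 1e-9)
    s = 1.0 / extent                                  # sqrt(lambda_i) scaling per coordinate
    tree = cKDTree(model_pts * s)                     # 4D kd-tree over the model points
    A, t = np.eye(3), np.zeros(3)                     # current estimate f(x) = A x + t
    for _ in range(n_iter):
        # Stage 1: build Match_i with N_j = CP_4D((f(x_j), i_j)).
        moved = np.hstack([scene_pts[:, :3] @ A.T + t, scene_pts[:, 3:]])
        _, idx = tree.query(moved * s)
        targets = model_pts[idx, :3]                  # spatial part of the matched points
        # Stage 2: least squares fit of the affine map sending x_j onto its match
        # (the linear system mentioned in the paper).
        X = np.hstack([scene_pts[:, :3], np.ones((len(scene_pts), 1))])
        sol, *_ = np.linalg.lstsq(X, targets, rcond=None)   # sol has shape (4, 3)
        A, t = sol[:3].T, sol[3]
    return A, t
```

With $g$ fixed to the identity, the intensity term of $d$ does not depend on $f$, so fitting on the spatial part is the essential step; ignoring the per-coordinate weights in the fit is a simplification of this sketch.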

3.2 The occlusion problem

However, some points in one image do not have any correspondent in the other one because of potential occlusions. Another reason can come from the evolution of a pathology (a tumor for example). It is very important to deal explicitly with this occlusion problem to get an accurate transformation. One could do it thanks to a robust criterion (in the sense of statistics) as in [19, 10] for rigid surface registration. But stage 2 of the algorithm would then no longer be the resolution of a linear system and the algorithm would not be as efficient. We prefer the approach proposed in [37] for 3D-3D rigid surface registration. For each match $(M_j, N_j)$ in $\mathrm{Match}_i$, we decide whether it is plausible or not. This point is very important because if we accept erroneous matches, the solution will be biased, and if we reject correct matches, the solution will not be accurate. For each pair $(M_j, N_j)$ in $\mathrm{Match}_i$, we compute the 4D Euclidean distance:

$$\delta_j = d(M_j, N_j).$$

One can suppose that if the registration was correct, this variable $\delta_j$ would follow a $\chi^2$ law. Thus, we can compare the statistics of this variable $\delta_j$ with a $\chi^2$ with 4 degrees of freedom and decide that a pair $(M_j, N_j)$ is plausible by looking at a $\chi^2$ table with a confidence value of, say, 95% or 99%. Stage 2 of the minimization algorithm is simply modified so that the least squares criterion takes into account only the plausible matches in $\mathrm{Match}_i$⁴.

⁴ Just note that it would be possible to weight the criterion with a quantity inversely proportional to $\delta_j$ and our minimization algorithm would still converge.
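A minimal sketch of this plausibility test follows (our illustration; it assumes the squared normalized 4D residual is the quantity compared to the $\chi^2$ distribution with 4 degrees of freedom, and it uses scipy.stats to obtain the threshold instead of a printed table).

```python
# Sketch of the occlusion handling of section 3.2: keep only the matches whose
# squared normalized 4D residual is below a chi-square threshold (4 degrees of freedom).
# Assumption: residuals are scaled so that each coordinate has roughly unit variance
# when the registration is correct.
import numpy as np
from scipy.stats import chi2

def plausible_matches(residuals_4d, confidence=0.95):
    """residuals_4d: (N, 4) array of normalized coordinate differences M_j - N_j."""
    delta_sq = np.sum(residuals_4d ** 2, axis=1)      # squared 4D distances delta_j^2
    threshold = chi2.ppf(confidence, df=4)            # e.g. about 9.49 at 95% confidence
    return delta_sq <= threshold                      # boolean mask of plausible pairs
```

Stage 2 then solves the least squares problem only over the matches where this mask is true.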

4. The representation of the images

4.1 Computing the representation

A fundamental point for this registration algorithm is to choose the 4D points representing the images. One can associate with each voxel $V$ a point $M = (x, y, z, i)$, where $(x, y, z)$ are the spatial coordinates of the voxel's center and $i$ its intensity. Let us call $M$ the point representing the voxel $V$. One could use these representing points directly, but the minimization algorithm would be inefficient because of the data volume. To avoid this problem, we developed an algorithm to compute a more compact representation of the images. The basic idea is to split the image recursively into "quadrilaterals" until each "quadrilateral" contains only voxels whose representative points can be approximated by a 4D hyperplane with an error smaller than $\varepsilon$, where $\varepsilon$ is a parameter of the splitting algorithm⁵. The details of this algorithm can be found in [22]. As a result, the image is represented by a set of "quadrilaterals" of different sizes containing relatively homogeneous voxels. A 4D centroid and a 4D hyperplane are attached to each "quadrilateral".

The points $M_j$ used to describe the scene image in the minimization algorithm are simply the centroids attached to the "quadrilaterals" obtained by recursively splitting this image. To compute the function $CP_{4D}$ in an efficient way, the model image is also split into "quadrilaterals". A 4D kd-tree [26] is built from the centroids $B_j$ resulting from the recursive split. During minimization, given a 4D point $M$, $CP_{4D}(M)$ is computed as follows (see figure 1). Thanks to the kd-tree, we first find the centroid $B_j$ closest to $M$. This centroid corresponds to a "quadrilateral" $Q_j$. $CP_{4D}(M)$ is the closest point to $M$ on the hyperplane $H_j$ approximating the "quadrilateral" and lying in this "quadrilateral".

⁵ The error can be the maximum of the point-to-plane distances or the average of these distances.

Figure 1. The computation of $CP_{4D}$ is done in two stages. First, we compute the centroid $B_j$ closest to $M$, and then we project $M$ onto $H_j$.
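The closest-point computation just described can be sketched as follows (our illustration, not the authors' implementation): a 4D kd-tree over the cell centroids $B_j$ selects the closest cell, and the query point is then projected onto that cell's fitted hyperplane $H_j$. Clipping the projection so that it lies inside the "quadrilateral", as required above, is omitted for brevity.

```python
# Sketch of the CP_4D lookup of section 4.1 (without the in-cell clipping step).
# Assumptions: scipy's cKDTree as the 4D kd-tree; each cell is summarized by its
# 4D centroid B_j and the unit normal of its fitted hyperplane H_j.
import numpy as np
from scipy.spatial import cKDTree

class ClosestPoint4D:
    def __init__(self, centroids, normals):
        self.centroids = np.asarray(centroids, float)   # (K, 4) cell centroids B_j
        self.normals = np.asarray(normals, float)       # (K, 4) unit normals of H_j
        self.tree = cKDTree(self.centroids)

    def __call__(self, points):
        """Return CP_4D(M) for each 4D point M in `points` (an (N, 4) array)."""
        _, j = self.tree.query(points)                  # index of the closest centroid B_j
        b, n = self.centroids[j], self.normals[j]
        offset = np.einsum('ij,ij->i', points - b, n)   # signed distance from M to H_j
        return points - offset[:, None] * n             # orthogonal projection onto H_j
```

The hyperplane of each cell can be fitted, for instance, as the least squares plane through the cell's voxel points (centroid plus the singular vector of smallest singular value).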

4.2 A coarse-to-fine multi-resolution strategy

The parameter $\varepsilon$ of the image splitting algorithm allows us to control the quality of the approximation. The smaller $\varepsilon$, the better the approximation. However, when $\varepsilon$ is large, the number of points/hyperplanes describing the image is small. Thus, $\varepsilon$ allows us to control the resolution.

It is also important to control the "quantity" of accepted deformation. We compute first rigid displacements, then affine transformations and finally spline deformations. For this last class of transformations, we can also choose the number of control points and the parameter $\alpha$ controlling the importance of the bending term in the criterion definition in order to control the "quantity" of allowed deformation.

The strategy that we propose to try to avoid local minima during the minimization uses these two properties. At the beginning, we choose a low resolution and we compute rigid displacements. The biggest structures are then registered first. Then, progressively during the minimization process, we decrease $\varepsilon$ in order to enhance the quality of the approximation of the images and we allow more and more deformation, from affine transformations to spline deformations. Of course, this description of the strategy is very qualitative. In practice, the choice of the functions controlling these resolution and deformation parameters depends on the anatomical regions to register. But they have not been difficult to find in practice.
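The overall schedule can be sketched as a simple loop (a purely illustrative sketch: `register` stands for a hypothetical driver running the two-stage minimization of section 3 with the given transformation class, and the numerical values are not the ones used in the paper).

```python
# Illustrative coarse-to-fine schedule for section 4.2. The register(...) driver and
# all numerical values are hypothetical placeholders, not the paper's settings.
def coarse_to_fine(scene, model, register):
    f, g = None, None
    schedule = [
        ("rigid",  dict(eps=8.0)),                        # coarse resolution, large structures
        ("affine", dict(eps=4.0)),
        ("spline", dict(eps=2.0, alpha=5.0, n_ctrl=10)),  # allow deformation, strong smoothing
        ("spline", dict(eps=1.0, alpha=2.0, n_ctrl=15)),  # finest resolution, weaker smoothing
    ]
    for transfo_class, params in schedule:
        f, g = register(scene, model, transfo_class, init=(f, g), **params)
    return f, g
```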

5. Results on brain data

We present in this section two examples of application of our volume registration algorithm with 3D brain MR images.

5.1 Rigid registration with intensity correction

The top two images of figure 2 are two slices of the same index of two MR images of the same brain. One can observe that (1) geometric registration is necessary since the two slices do not correspond to each other (for example the eyes are visible in one image and are not visible in the other one) and (2) the left image is much brighter than the right one. The algorithm described in this paper allows us to compute at the same time a rigid displacement to superpose the two images and a multiplicative coefficient to correct the intensity. The bottom image of figure 2 shows the slice corresponding to the slice of the top left image after registration and resampling of the 3D image corresponding to the top right image. For the registration, approximately 50000 points are used to describe the images. The resolution does not vary during the iterations. The CPU time is 5 minutes. After registration, the average distance between matched 4D points is 0.8 mm and the average difference of intensity is 2.8 (the intensity in the images is between 0 and 255). One can observe that after registration the two slices look much more similar both from the geometric and intensity viewpoints.

5.2 Matching with an atlas

Figures 3, 4 and 5 show an example of spline registration of two MR images of two different brains. In fact, one of the two images has been manually segmented into anatomical regions (courtesy of Ron Kikinis, at the Brigham and Women's Hospital, Boston). It can be used as an anatomical atlas. Indeed, the matching allows us to label automatically the second image thanks to the knowledge of the voxel to voxel correspondence. Registration uses approximately 50000 points in each image. The computed deformation is a volume spline function (see appendix A) with $15 \times 15 \times 15$ control points. There is no intensity correction. The CPU time is 25 minutes. The resolution varies linearly from 10000 points to 50000 points during the deformation process and $\alpha$ (the parameter controlling the smoothing term of the criterion) varies linearly from 5 to 2.

The registration is not perfect, even if it is not bad. This is due to two reasons: we should use more 4D points to describe the images at the end of the process, and the spline deformation is not local enough ($15 \times 15 \times 15$ control points is the maximum that we can use because of memory limitations). We believe that these two problems should be fixed in the future thanks to more powerful workstations (we use a DEC Alpha). Another possibility for fixing this problem may be to use radial basis functions [1] or adaptive splines [31]. We just want to note that our deformation seems to be better than the one based on Talairach atlas techniques. Moreover, because our deformation is quite global, we can guarantee that the result of our non-rigid registration makes sense. Note also that before registration it was quite difficult to identify the corresponding structures in the two images, while it is quite easy after registration. If a more local registration was necessary, the output of our algorithm could be a good input for techniques like [32, 8], which may be more sensitive to their initialization but which are more local.

6. Results on heart data

A common examination for the detection of cardiac ischemia is the stress-rest comparison in myocardial perfusion studies provided by Nuclear Medicine. Experiments on a database of 40 patients with such data are presented in [15].

7. Conclusion

We have presented in this paper a new and efficient algorithm for registration and intensity correction of 3D images. This algorithm is an extension of the original ICP algorithm to volume registration and deals explicitly with the occlusion problem. The computed transformations are rigid, affine or spline. The experiments demonstrate the validity of our approach, even if a complete clinical validation is still necessary.

For future work, the technique presented in this paper has at least three very interesting extensions. The first problem is the use of another transformation class for non-rigid registration. It could be radial basis functions [1]. The learning-based deformations are also very interesting [13, 30]: it should be possible to perform some statistics (thanks to a database) on the positions of the control points to constrain the possible deformations. The second problem is the use of volume spline registration for motion tracking in sequences of 3D images (MR, SPECT or f-MRI). Indeed, time sequences of 3D images will probably become more and more common and we believe that the analysis of such images is a challenge. The third problem is "atlas building". The problem is as follows: given a set of images with known diagnoses, and assuming that one is able to perform non-rigid registration, build a data structure which will allow us, given an image not present in the database, to provide a predicted diagnosis. We would like to show that our spline registration improves the quality of the results with respect to the affine registration techniques used in most of the papers dealing with this problem. We hope that some first results for these three problems will be available at the time of the workshop.

A. 3D-3D volume spline deformations

We have chosen to implement the algorithm presented in this paper with spline deformations. It is the class of functions obtained by tensor product of B-spline basis functions:

$$f(x, y, z) = \left( \sum_{i,j,k} C^x_{ijk} B_i(x) B_j(y) B_k(z),\ \sum_{i,j,k} C^y_{ijk} B_i(x) B_j(y) B_k(z),\ \sum_{i,j,k} C^z_{ijk} B_i(x) B_j(y) B_k(z) \right),$$

where the $C_{ijk} = (C^x_{ijk}, C^y_{ijk}, C^z_{ijk})$ are the control points and the $B_i$ are 1-D B-spline functions with regularly distributed knots. In our formulation, the class of 3D-3D spline functions is described only by moving the control points.

In the definition of the criterion, a smoothing energy is added to the least squares term on the positions in order to control the regularity of the solution. This energy is expressed as a second order Tikhonov stabilizer. The criterion is the sum of the two energies; a multiplying weight $\alpha$ weights the importance of the smoothing energy with respect to the position energy. Because the criterion is quadratic in $C^x_{ijk}$, $C^y_{ijk}$ and $C^z_{ijk}$, the least squares minimization at stage 2 of the algorithm presented in this paper is the resolution of a linear system, which is quite efficient in practice.

We have chosen the 3D-3D spline functions for efficiency but also because they have interesting geometric properties: the 3D-3D spline functions and their derivatives are easy to compute thanks to the "de Casteljau" algorithm, the intrinsic rigidity properties of B-splines provide regular 3D-3D functions, and a data point has a local influence: to evaluate a spline function at a given point, only $(K + 1)^3$ control points are necessary, where $K$ is the spline order. For more details about spline functions, see [14].
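As an illustration of this tensor-product form (a sketch under assumptions not made in the paper: identical clamped knot vectors along the three axes and a control grid stored as a 4D array), the deformation can be evaluated as follows.

```python
# Sketch: evaluation of f(x, y, z) = sum_{ijk} C_{ijk} B_i(x) B_j(y) B_k(z).
# The Cox-de Boor recursion below and the single shared knot vector are our choices;
# only the tensor-product structure itself comes from the appendix.
import numpy as np

def bspline_basis(u, knots, degree):
    """Values at u of all len(knots)-degree-1 B-spline basis functions (Cox-de Boor)."""
    knots = np.asarray(knots, float)
    m = len(knots) - 1
    N = np.where((knots[:-1] <= u) & (u < knots[1:]), 1.0, 0.0)   # degree-0 indicators
    if u == knots[-1]:                                            # close the last span
        N[np.nonzero(knots[:-1] < knots[-1])[0][-1]] = 1.0
    for d in range(1, degree + 1):
        Nd = np.zeros(m - d)
        for i in range(m - d):
            left = right = 0.0
            if knots[i + d] > knots[i]:
                left = (u - knots[i]) / (knots[i + d] - knots[i]) * N[i]
            if knots[i + d + 1] > knots[i + 1]:
                right = (knots[i + d + 1] - u) / (knots[i + d + 1] - knots[i + 1]) * N[i + 1]
            Nd[i] = left + right
        N = Nd
    return N

def spline_deformation(point, ctrl, knots, degree=3):
    """ctrl has shape (n, n, n, 3): one 3D control point C_{ijk} per index (i, j, k),
    with n = len(knots) - degree - 1."""
    bx = bspline_basis(point[0], knots, degree)
    by = bspline_basis(point[1], knots, degree)
    bz = bspline_basis(point[2], knots, degree)
    return np.einsum('i,j,k,ijkc->c', bx, by, bz, ctrl)   # tensor-product contraction
```

Because the evaluation is linear in the control points, stacking one such row of basis products per data point yields the linear system solved at stage 2.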

References

[1] N. Arad and D. Reisfeld. Image warping using few anchor points and radial functions. Computer Graphics Forum, 14(1):35–46, March 1995.
[2] N. Ayache. Medical computer vision, virtual reality and robotics. Image and Vision Computing, 13(4):295–313, May 1995.
[3] R. Bajcsy and S. Kovacic. Multiresolution elastic matching. CVGIP, 46:1–21, 1989.
[4] P. Besl and N. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–256, February 1992.
[5] L. G. Brown. A survey of image registration techniques. ACM Computing Surveys, 24(4):325–375, December 1992.
[6] G. Champleboux, S. Lavallée, R. Szeliski, and L. Brunie. From accurate range imaging sensor calibration to accurate model-based 3-D object localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Urbana-Champaign, June 1992.
[7] Y. Chen and G. Medioni. Object modeling by registration of multiple range images. Image and Vision Computing, 10(3):145–155, 1992.
[8] G. Christensen, R. Rabbit, M. Miller, S. Joshi, U. Grenander, T. Coogan, and D. VanEssen. Topological properties of smooth anatomic maps. In Information Processing in Medical Imaging (IPMI '95), Brest, June 1995.
[9] L. Cohen. Use of auxiliary variables in computer vision problems. In Proceedings of the Fifth International Conference on Computer Vision (ICCV '95), Boston, June 1995.
[10] A. Colchester, J. Zhao, C. Henri, R. Evans, P. Roberts, N. Maitland, D. Hawkes, D. Hill, A. Strong, D. Thomas, M. Gleeson, and T. Cox. Craniotomy simulation and guidance using a stereo video based tracking system (VISLAN). In Visualization in Biomedical Computing, Rochester, Minnesota, October 1994.
[11] A. Collignon, F. Maes, D. Delaere, D. Vandermeulen, P. Suetens, and G. Marchal. Automated multi-modality image registration using information theory. In Information Processing in Medical Imaging (IPMI '95), Brest, June 1995.
[12] D. Collins, A. Evans, C. Holmes, and T. Peters. Automated 3D segmentation of neuro-anatomical structures. In Information Processing in Medical Imaging (IPMI '95), Brest, June 1995.
[13] T. Cootes, A. Hill, and C. Taylor. Rapid and accurate medical image interpretation using active shape models. In Information Processing in Medical Imaging (IPMI '95), Brest, June 1995.
[14] J. Declerck, G. Subsol, J. Thirion, and N. Ayache. Automatic retrieval of anatomical structures in 3D medical images. In First International Conference on Computer Vision, Virtual Reality and Robotics in Medicine (CVRMed '95), Nice, April 1995.
[15] J. Feldmar. Recalage rigide, non rigide et projectif d'images médicales tridimensionnelles. PhD thesis, Ecole Polytechnique–INRIA, December 1995.
[16] J. Feldmar and N. Ayache. Rigid, affine and locally affine registration of free-form surfaces. The International Journal of Computer Vision. Accepted for publication. Published in two parts in ECCV'94 (rigid and affine) and CVPR'94 (locally affine). Also INRIA Research Report 2220.
[17] J. Feldmar, N. Ayache, and F. Betting. 3D-2D projective registration of free-form curves and surfaces. The Journal of Computer Vision and Image Understanding. Published in three parts: CVRMed'95, ICCV'95 and IPMI'95. Also INRIA Research Report 2434.
[18] J. Gee, L. Lebriquer, C. Barrillot, and D. Haynor. Probabilistic matching of brain images. In Information Processing in Medical Imaging (IPMI '95), Brest, June 1995.
[19] W. Grimson, T. Lozano-Perez, W. Wells III, G. Ettinger, S. White, and R. Kikinis. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization. In IEEE Proceedings of Computer Vision and Pattern Recognition 1994 (CVPR'94), Seattle, USA, June 1994.
[20] F. Hemler, P. van den Elsen, T. Sumanaweera, S. Napel, J. Drace, and J. Adler. A quantitative comparison of residual errors for three different multimodality registration techniques. In Information Processing in Medical Imaging (IPMI '95), Brest, June 1995.
[21] G. Malandain. Filtrage, topologie et mise en correspondance d'images médicales multidimensionnelles. PhD thesis, Ecole Centrale de Paris, September 1992.
[22] G. Malandain. Représentation des images par des arbres binaires. Research report, INRIA, 1995. To appear.
[23] J. Mangin, V. Frouin, I. Bloch, B. Bendriem, and J. Lopez-Krahe. Fast nonsupervised 3D registration of PET and MR images of the brain. Journal of Cerebral Blood Flow and Metabolism, 14(5):749–762, 1994.
[24] J. Mangin, F. Tupin, V. Frouin, I. Bloch, R. Rougetet, J. Regis, and J. Lopez-Krahe. Deformable topological models for segmentation of 3D medical images. In Information Processing in Medical Imaging (IPMI '95), Brest, June 1995.
[25] C.-H. Menq, H.-T. Yau, and G.-Y. Lai. Automated precision measurement of surface profile in CAD-directed inspection. IEEE Transactions on Robotics and Automation, 8(2):268–278, 1992.
[26] F. P. Preparata and M. I. Shamos. Computational Geometry, an Introduction. Springer Verlag, 1985.
[27] S. Sandor and R. Leahy. Towards automated labelling of the cerebral cortex using a deformable atlas. In Information Processing in Medical Imaging (IPMI '95), Brest, June 1995.
[28] C. Studholme, D. Hill, and D. Hawkes. Multi-resolution voxel similarity measures for MR-PET registration. In Information Processing in Medical Imaging (IPMI '95), Brest, June 1995.
[29] G. Subsol. Construction automatique d'atlas anatomiques à partir d'images médicales tridimensionnelles. PhD thesis, Ecole Centrale, 1995.
[30] G. Szekely, A. Kelemen, C. Brechbühler, and G. Gerig. Segmentation of 3D objects from MRI volume data using constrained elastic deformations of flexible Fourier surface models. In First International Conference on Computer Vision, Virtual Reality and Robotics in Medicine (CVRMed '95), Nice, April 1995.
[31] R. Szeliski and S. Lavallée. Matching 3-D anatomical surfaces with non-rigid volumetric deformations. In Proceedings of the IEEE Workshop on Biomedical Image Analysis (WBIA'94), Seattle, Washington, June 1994. Also in AAAI 1994 Spring Symposium Series, Application of Computer Vision in Medical Image Processing, Stanford University, 1994.
[32] J.-P. Thirion. Fast non-rigid matching of 3D medical images. In Medical Robotics and Computer Aided Surgery (MRCAS'95), pages 47–54, Baltimore, November 1995.
[33] P. van den Elsen, E. Pol, and M. Viergever. Medical image matching: a review with classification. IEEE Engineering in Medicine and Biology, 12(4):26–39, 1993.
[34] P. Viola and W. M. Wells III. Alignment by maximisation of mutual information. In Proceedings of the Fifth International Conference on Computer Vision (ICCV '95), Boston, June 1995.
[35] W. Wells, E. Grimson, R. Kikinis, and F. Jolesz. Adaptive segmentation of MRI data. In First International Conference on Computer Vision, Virtual Reality and Robotics in Medicine (CVRMed '95), Nice, April 1995.
[36] R. Woods, S. Cherry, and J. Mazziotta. Rapid automated algorithm for aligning and reslicing PET images. Journal of Computer Assisted Tomography, 16(1):1–14, 1992.
[37] Z. Zhang. Iterative point matching for registration of free-form curves and surfaces. The International Journal of Computer Vision, 13(2):119–152, 1994. Also Research Report No. 1658, INRIA Sophia-Antipolis, 1992.

Figure 2. Top: two axial slices of the same index from two MR images (left A, right B) of the same brain before registration. We thank Pr. Ron Kikinis, Brigham and Women's Hospital (Boston), for these images. Bottom, left: the slice of image B, after registration, resampling and intensity correction, corresponding to the slice of the top-left image. The two left images can be compared pixel by pixel.

Figure 3. Top: two sagittal slices of the same index coming from two MR images of two different brains before registration (left C, right the atlas). We thank Pr. Ron Kikinis, Brigham and Women's Hospital (Boston), for these images. Bottom, left: the slice of the resampled atlas corresponding to the top-left image after non-rigid spline registration. The two left images can be compared voxel by voxel.

Figure 4. Top: two axial slices of the same index coming from the same images as the ones shown in figure 3 (left C, right the atlas). Bottom, left: the slice of the resampled atlas corresponding to the top-left image after non-rigid spline registration.

Figure 5. Top: two frontal slices of the same index coming from the same images as the ones shown in figures 3 and 4 (left C, right the atlas). Bottom, left: the slice of the resampled atlas corresponding to the top-left image after non-rigid spline registration.

