
4.4 Application in biomedical imaging

4.4.2 Results with the MRF model

The extraction results shown in Fig. 4.9 and Fig. 4.10 demonstrate the effectiveness of the proposed multi-layer MRF GOC model for this type of task. Computation times vary from ∼20 s to ∼1000 s for images of size N = 10⁴. The key factor is the number of layers, with the minimum time corresponding to ℓ = 2 and the maximum to ℓ = 6.

Figure 4.9: Extraction of cells from light microscope images using the multi-layer MRF GOC model.

Figure 4.10: Extraction of lipid drops from light microscope images using the multi-layer MRF GOC model.

IN THIS CHAPTER:

5.1 Introduction
5.2 Problem statement
5.3 Solution via a nonlinear system of equations
    5.3.1 Registration of 3D objects
5.4 Affine puzzle
    5.4.1 Realigning object parts
5.5 Solution via a linear system of equations
    5.5.1 Construction of covariant functions
    5.5.2 Linear estimation of affine parameters
    5.5.3 Choosing the integration domain
5.6 Discussion
5.7 Medical applications
    5.7.1 Fusion of hip prosthesis X-ray images
    5.7.2 Registration of pelvic and thoracic CT volumes
    5.7.3 Bone fracture reduction

5 Linear registration of 2D and 3D objects

We consider the estimation of linear transformations aligning a known 2D or 3D object and its distorted observation. The classical way to solve this registration problem is to find correspondences between the two images and then compute the transformation parameters from these landmarks. Unlike traditional approaches, our method works without landmark extraction and feature correspondences. Here we present how to find a linear transformation as the solution of either a polynomial or a linear system of equations without establishing correspondences.

The basic idea is to set up a system of nonlinear equations whose solution directly provides the parameters of the aligning transformation. Each equation is generated by integrating a nonlinear function over the object’s domains. Thus the number of equations is determined by the number of adopted nonlinear functions, yielding a flexible mechanism to generate sufficiently many equations. An alternative formulation of the method leads to a linear system of equations constructed by fitting Gaussian densities to the shapes, which preserve the effect of the unknown transformation.
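The nonlinear formulation can be illustrated with a small numerical sketch. Under an affine map y = Ax + t, the change of variables gives, for any integrable function ω, ∫ over the observation domain of ω(y) dy = |det A| · ∫ over the template domain of ω(Ax + t) dx, so each choice of ω contributes one equation in the six unknowns. The code below is only an illustrative sketch of this idea, not the implementation developed in this chapter: the test shape, grid step, monomial set, and the use of a generic least-squares solver are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import least_squares

h = 0.01                                    # grid step discretizing the integrals (assumed)
g = np.arange(-3.0, 3.0, h)
X, Y = np.meshgrid(g, g)
grid = np.stack([X.ravel(), Y.ravel()], axis=1)

def in_shape(p):
    """Asymmetric template: an ellipse with one quadrant removed (illustrative shape)."""
    return (p[:, 0]**2 + 4.0 * p[:, 1]**2 <= 1.0) & ~((p[:, 0] > 0) & (p[:, 1] > 0))

# Ground-truth affine map y = A x + t, used only to synthesize the observation.
A_true = np.array([[1.2, 0.3], [-0.2, 0.9]])
t_true = np.array([0.4, -0.1])

tmpl = grid[in_shape(grid)]                          # pixels of the template domain
back = (grid - t_true) @ np.linalg.inv(A_true).T     # pull observation pixels back
obs = grid[in_shape(back)]                           # pixels of the observation domain

# Adopted nonlinear functions omega: low-order monomials y1^p * y2^q.
powers = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2), (3, 0), (2, 1), (1, 2), (0, 3)]

def moments(pts):
    # Approximate the integral of each omega over a domain by a pixel sum.
    return np.array([(pts[:, 0]**p * pts[:, 1]**q).sum() * h * h for p, q in powers])

rhs = moments(obs)                                   # integrals over the observation domain

def residual(theta):
    A, t = theta[:4].reshape(2, 2), theta[4:]
    # Change of variables: int_obs omega(y) dy = |det A| * int_tmpl omega(A x + t) dx
    lhs = abs(np.linalg.det(A)) * moments(tmpl @ A.T + t)
    return (lhs - rhs) / (np.abs(rhs) + 1.0)         # normalized system of equations

sol = least_squares(residual, np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0]))
A_est, t_est = sol.x[:4].reshape(2, 2), sol.x[4:]
```

With these settings the recovered parameters should match A_true and t_true up to discretization error; note that no point correspondences are used anywhere, only integrals over the two binary domains.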

The advantages of the proposed solutions are that they are fast, easy to implement, have linear time complexity, work without landmark correspondences, and are independent of the magnitude of the transformation.

5.1 Introduction

Registration is a crucial step when images of an object taken from different views or with different sensors need to be compared or combined. In a general setting, one looks for a transformation which aligns two images such that one image (called the observation, or moving image) becomes similar to the second one (called the template, or model image). Due to the large number of possible transformations, there is huge variability in the object signature. In fact, each observation is an element of the orbit of the transformations applied to the template. Hence the problem is inherently ill-defined unless this variability is taken into account.

Several techniques have been proposed to address the affine registration problem. By thresholding the magnitude of the Fourier transform of the images, Zhang et al. [204] construct affine-invariant features, which are insensitive to noise, in order to establish point correspondence. Several Fourier-domain methods [125, 133] represent images in a coordinate system in which the affine transformation is reduced to an anisotropic scaling factor, which can be computed using cross-correlation methods. Govindu and Shekar [102] develop a framework that uses the statistical distribution of geometric properties of image contours to estimate the relevant transformation parameters. The main advantage of these methods is that they do not need point correspondences across views, and the images may also differ in overall level of illumination. A novel one-element voxel attribute, the distance-intensity (DI), is defined in [95]. This feature encodes spatial information at a global level: the distance of the voxel to its closest object boundary, together with the original intensity information.

Then the registration is obtained by exploiting mutual information as a similarity measure on the DI feature space. For matching 2D feature points, [117] reduces the general affine case to the orthogonal case by using the means and covariance matrices of the point sets; the rotation is then computed from the roots of a low-degree polynomial with complex coefficients.

Another direct approach [169] extends the given pattern to a set of affine covariant versions, each carrying slightly different information, and then extracts features for registration from each of them separately. In [159], the transformation is parameterized at different scales, using a decomposition of the deformation vector field over a sequence of nested (multiresolution) subspaces. An energy function describing the interactions between the images is then minimized under a set of constraints ensuring that the transformation maintains the topology in the deformed image. Manay et al. [145] explore an optimization framework for computing shape distance and shape matching from integral invariants, which are employed for robustness to high-frequency noise. Shape warping by the computation of an optimal reparameterization allows this method to account for large localized changes such as occlusions and configuration changes. In [116], a method for identifying silhouettes from a given set of Radon projections is presented. The authors study how the Radon transform changes when a given 2D function is subjected to rotation, scaling, translation, and reflection. Using these properties, the parameters of the aligning transformation are expressed in terms of the Radon transform. In [107], a computationally simple solution is proposed for the affine registration of gray-level images, avoiding both the correspondence problem and the need for optimization. The original problem is reformulated as an equivalent linear