
Our developed method tries to combine the advantages of the feature based and the correlation based methods. The algorithm is based on graph matching and uses the feature graphs generated by low-level image processing, but for each node (which contains a feature) it uses the image window around the feature to achieve a correlation based comparison.

Using the notations introduced in Section 2.6, the (sub)graph matching problem can formally be described as follows. Let $G_\alpha = \{V_\alpha(A), E_\alpha(A)\}$ and $G_\beta = \{V_\beta(A), E_\beta(A)\}$ be the graphs describing the images to match. The aim is to find the maximum common subgraph, described by a non-bijective mapping $m$ from a subgraph $G'_\alpha \subseteq G_\alpha$ to a subgraph $G'_\beta \subseteq G_\beta$.

Viewing different parts of the scene from different positions means that the graphs describing the scene are not the same: the number of vertices and edges differs between the two graphs. Therefore there are entities that have no corresponding pair. This situation can be handled by introducing dummy (null, 0) vertices, so that the mapping can be described as $m : (V_\alpha \cup \{0\}) \rightarrow (V_\beta \cup \{0\})$.

Due to the different imaging conditions, not only the structure (connections) of the graphs differs; the attributes attached to corresponding vertices and edges will also be slightly different. Therefore a cost must be defined that measures the dissimilarity of the respective entities. The comparison is based on this error value (cost): the closer the value (difference) is to zero, the more similar the entities are.

The correspondences between nodes can be described by a match matrix M that contains the pairings between the graphs, such that $M(a,i) = 1$ if node $a$ in $G_\alpha$ corresponds to node $i$ in $G_\beta$. Augmenting the matrix with an additional row and column as the container for null vertices, the complete set of correspondence relations can be described. The size of the matrix is $(N_\alpha + 1) \times (N_\beta + 1)$, where $N_l$, $l = \alpha, \beta$ denotes the number of vertices.
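In matrix terms, the augmented match matrix can be set up as in the following minimal NumPy sketch; the vertex counts and the particular pairings are illustrative values, not taken from the text:

```python
import numpy as np

# Example vertex counts for the two graphs (illustrative values).
n_alpha, n_beta = 4, 5

# One extra row and column act as containers for the dummy (null) vertex,
# giving the (N_alpha + 1) x (N_beta + 1) size described in the text.
M = np.zeros((n_alpha + 1, n_beta + 1))

M[0, 2] = 1        # node 0 of G_alpha corresponds to node 2 of G_beta
M[3, n_beta] = 1   # node 3 of G_alpha has no pair: matched to the null vertex
```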

2.7.1 Comparison of features

In order to reduce the computational cost of the algorithm, features are compared to each other only if they are close enough to each other (within a search window). Within that window, features of similar type are checked. The comparison method depends on the type of the feature.

• Point features have no additional internal parameters, therefore two points are always similar (the cost is zero).

• Line features are compared by length; the cost function is $\frac{l_a}{l_b} - 1$ if $l_a < l_b$, and $\frac{l_b}{l_a} - 1$ otherwise. This comparison assumes that corresponding line features have almost equal length, i.e. that the effect of different projections and occlusions does not distort most of the line features.

• Conic features are compared by subtype (ellipse, hyperbola, parabola) and by their similarity in parameter space. The cost function used for lines is applied to the axes: $\frac{l_a}{l_b} - 1$ and $\frac{L_a}{L_b} - 1$, where $l$ and $L$ denote the minor and major axis, respectively. The arc length is compared similarly, $\frac{s_a}{s_b} - 1$, where the arc length is measured in parameter space. As in the case of lines, it is supposed that the effect of disturbing influences is small.
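A sketch of these per-type costs in Python; applying the symmetric (conditional) form of the ratio to the conic parameters as well, and summing the conic terms without weights, are assumptions of this sketch:

```python
def point_cost(a, i):
    """Point features carry no internal parameters, so the cost is always zero."""
    return 0.0

def length_ratio_cost(la, lb):
    """Ratio-based length cost: 0 for equal lengths, tending to -1 as they diverge."""
    return la / lb - 1.0 if la < lb else lb / la - 1.0

def conic_cost(minor_a, major_a, arc_a, minor_b, major_b, arc_b):
    """Combined cost over minor axis, major axis and arc length (unweighted sum)."""
    return (length_ratio_cost(minor_a, minor_b)
            + length_ratio_cost(major_a, major_b)
            + length_ratio_cost(arc_a, arc_b))
```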

2.7.2 Correlation based comparison

The correlation based comparison is achieved by calculating the zero mean normalized cross correlation coefficient within a window around the features:

$$C_{cc}(a,i) = \frac{\sum_w \bigl(I_a(w) - \bar{I}_a(w)\bigr)\bigl(I_i(w) - \bar{I}_i(w)\bigr)}{\sqrt{\sum_w \bigl(I_a(w) - \bar{I}_a(w)\bigr)^2 \sum_w \bigl(I_i(w) - \bar{I}_i(w)\bigr)^2}}$$

which maps the similarity into the range $[-1, 1]$, where $\sum_w(\cdot)$ represents summation within the search window, $I_a(w)$, $I_i(w)$ denote the image windows around the image features $a$ and $i$, respectively, and $\bar{I}_a(w)$, $\bar{I}_i(w)$ are the average intensities within the windows around the image features.
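The coefficient can be computed with NumPy as in the following sketch; window extraction around the features is assumed to happen elsewhere, and returning 0 for a flat (zero-variance) window is a choice of this sketch:

```python
import numpy as np

def zncc(win_a, win_i):
    """Zero mean normalized cross correlation of two equally sized windows."""
    da = win_a - win_a.mean()   # I_a(w) minus its window average
    di = win_i - win_i.mean()   # I_i(w) minus its window average
    denom = np.sqrt((da * da).sum() * (di * di).sum())
    if denom == 0.0:            # flat window: coefficient undefined, treat as 0
        return 0.0
    return float((da * di).sum() / denom)
```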

It would be possible to deform the correlation window according to the affine transformation estimated during the matching, but recalculating the correlation values at every iteration of the matching (see the graph matching algorithm in Section 2.7.4) is a costly process, and in typical imaging cases the overlap of the non-deformed windows is large enough to produce acceptable results.

For lines, the correlation value is calculated as the average of the correlation values at sample points along the line.

The orientations of the candidate lines must be aligned (corresponding endpoints must be determined). This is achieved in two ways: first, by comparing the angles of the respective lines with the horizontal axis, computed from the difference of the endpoint and the start point; second, by computing the distances between endpoints, the corresponding endpoints being the ones closer to each other.
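The second criterion (endpoint distances) can be sketched as follows; the line representation as endpoint tuples and the decision rule comparing the two pairing totals are assumptions of this sketch:

```python
import math

def align_endpoints(line_a, line_b):
    """Orient line_b so that its endpoints correspond to those of line_a.

    Lines are ((x1, y1), (x2, y2)) tuples. The corresponding endpoints are
    taken to be the closer ones, by comparing the total distance of the
    straight pairing against the crossed pairing.
    """
    (a1, a2), (b1, b2) = line_a, line_b
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    # keep orientation if pairing start-to-start is cheaper than crossing
    if dist(a1, b1) + dist(a2, b2) <= dist(a1, b2) + dist(a2, b1):
        return line_b
    return (b2, b1)   # reverse line_b so endpoints correspond
```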

2.7.3 Comparison of transformations

The edges in the feature graph contain the relation between the features connected by the given edge. The comparison of the transformations between the different feature pairs $ab$ and $ij$ is based on the decomposition of the affine transformation $A = A_{ab} A_{ij}^{-1}$. Instead of checking the closeness of $A$ to the identity matrix, the transformation is decomposed into translation, scale, rotation and shear. As described in Section 2.6, the relations contain only rotation and scaling components (Euclidean transformation), therefore $A$ is also reduced to a Euclidean one. From this point of view, there is no need to use costly general transformation decomposition methods such as SVD; the transformation parameters can be determined directly. The scaling and stretch are supposed to be unity. The translational part is $t_u = A(1,3)$ and $t_v = A(2,3)$. The rotation angle can be obtained as $\alpha = \arctan\frac{A(2,1) - A(1,2)}{A(1,1) + A(2,2)}$.
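The direct decomposition above can be sketched as follows; using `atan2` instead of a plain arctangent is a quadrant-safe variant chosen for this sketch, and the nested-list matrix representation is an assumption:

```python
import math

def decompose_euclidean(A):
    """Extract translation and rotation angle from a 3x3 homogeneous transform.

    Scale and shear are assumed to be unity (Euclidean case), as in the text.
    A is a 3x3 nested list; A[r][c] corresponds to the 1-based A(r+1, c+1).
    """
    tu, tv = A[0][2], A[1][2]                     # translational part
    # rotation angle from the arctan formula, with atan2 for quadrant safety
    alpha = math.atan2(A[1][0] - A[0][1], A[0][0] + A[1][1])
    return tu, tv, alpha
```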

The transformation comparison is carried out only when the connected nodes are compatible with respect to feature types.

The direction of the transformation represented by the edge is also determined before the multiplication. The elements of the comparison cost functions are normalized to one, such that the angle is mapped into the interval $[-1, 1]$, and the position error is normalized with the size of the images.

2.7.4 Graph matching algorithm

Without a priori information, initially every feature is a candidate for pairing with every other feature with similar attributes.

Therefore it is not enough to initialize the match matrix with 1's and 0's only. The problem should be transformed from a discrete (binary) formulation to a continuous one. In this case the elements of the match matrix are represented by continuous real values during the minimization.

The cost function of the matching can be written in the following form:

$$E(M) = -\frac{1}{2}\sum_{a=1}^{N_\alpha}\sum_{i=1}^{N_\beta}\sum_{b=1}^{N_\alpha}\sum_{j=1}^{N_\beta} M(a,i)\,M(b,j)\sum_{r=1}^{R} C^{(2,r)}(a,i,b,j) + \sum_{a=1}^{N_\alpha}\sum_{i=1}^{N_\beta} M(a,i)\sum_{s=1}^{S} C^{(2,s)}(a,i) \quad (2.3)$$

$$+ \sum_{a=1}^{N_\alpha}\mu_a\Biggl(\sum_{i=1}^{N_\beta+1} M(a,i) - 1\Biggr) + \sum_{i=1}^{N_\beta}\nu_i\Biggl(\sum_{a=1}^{N_\alpha+1} M(a,i) - 1\Biggr) + \frac{1}{\kappa}\sum_{a=1}^{N_\alpha+1}\sum_{i=1}^{N_\beta+1} M(a,i)\log M(a,i) \quad (2.4)$$

where $a, b$ and $i, j$ are indices of the vertices of $G_\alpha$ and $G_\beta$, respectively. The dissimilarity of the entities is measured by the $C(\cdot)$ cost functions; the elements of the match matrix are scaled such that $0 \le M(a,i) \le 1$. The first and second terms compare the edges and vertices, respectively. The last three terms are the constraints on the elements of the match matrix; $\mu_a$ and $\nu_i$ are Lagrange multipliers.

The minimization is achieved by applying a deterministic annealing method as proposed in [38]. This is a "clocked" algorithm: during the calculation of the approximate mapping between features, the elements of the matching matrix are held fixed; conversely, while updating the elements of the matching matrix, the transformation elements are treated as constants.

The parameter $\kappa$ is related to the inverse of the "temperature"; increasing its value pushes the elements of the match matrix towards binary values. The steps of the algorithm are as follows.

1. Initialize $\kappa$, for example to 0.001, and $M(a,i) = \frac{1}{1 + N_\alpha N_\beta}$.

2. Update: $M^{(l)}(a,i) = \exp\bigl(\kappa\bigl(B^{(l)}(a,i) - \mu_a - \nu_i\bigr) - 1\bigr)$, where $l$ denotes the level of iteration and

$$B^{(l)}(a,i) = \sum_{b=1}^{N_\alpha}\sum_{j=1}^{N_\beta} M^{(l-1)}(b,j)\sum_{r=1}^{R} C^{(2,r)}(a,i,b,j) - \sum_{s=1}^{S} C^{(2,s)}(a,i)$$

3. Normalize the rows and columns of M to 1 iteratively. This replaces the calculation of the Lagrange multipliers [38].

4. Increase $\kappa$, such that $\kappa = \kappa(1 + \epsilon)$.

5. Check convergence. If further processing is required, go to step 2.

6. Post-processing: create a binary valued matrix from the match matrix using the Winner-Takes-All (WTA) algorithm. Iteratively search for the maximum element in M, set that element to 1 and all of the remaining elements in its row and column to 0. If the maximum is found in the extra row (column), set the elements to 0 only in the corresponding column (row).

At the output, the matched features can be determined by searching for the 1's in the matrix.
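The annealing loop above can be sketched with NumPy as follows. This is a sketch under stated assumptions, not the author's implementation: the function and parameter names, the fixed $\kappa$ schedule and stopping bound, the fixed number of normalization sweeps in step 3, and a simplified WTA that searches only the real (non-dummy) part of the matrix are all illustrative choices.

```python
import numpy as np

def soft_assign(C_node, C_edge, kappa=1e-3, kappa_max=10.0, eps=0.05,
                n_norm=30):
    """Deterministic-annealing graph matching, following steps 1-6.

    C_node[a, i]       -- summed vertex costs  (sum over s of C^(2,s))
    C_edge[a, i, b, j] -- summed edge costs    (sum over r of C^(2,r))
    """
    Na, Nb = C_node.shape
    # step 1: uniform initialization, with the extra dummy row and column
    M = np.full((Na + 1, Nb + 1), 1.0 / (1.0 + Na * Nb))
    while kappa < kappa_max:                    # annealing loop (steps 2-5)
        # step 2: benefit B and exponential update of the real entries
        B = np.einsum('bj,aibj->ai', M[:Na, :Nb], C_edge) - C_node
        M[:Na, :Nb] = np.exp(kappa * B - 1.0)
        # step 3: alternate row/column normalization (Sinkhorn-style)
        for _ in range(n_norm):
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
        kappa *= 1.0 + eps                      # step 4
    return winner_takes_all(M)                  # step 6

def winner_takes_all(M):
    """Binarize the match matrix (simplified: dummy row/column never win)."""
    M = M.copy()
    out = np.zeros_like(M, dtype=int)
    Na, Nb = M.shape[0] - 1, M.shape[1] - 1
    while M[:Na, :Nb].max() > 0:
        a, i = np.unravel_index(np.argmax(M[:Na, :Nb]), (Na, Nb))
        out[a, i] = 1
        M[a, :] = 0     # clear the winner's row ...
        M[:, i] = 0     # ... and column
    return out
```

With zero edge costs and a node-cost matrix favoring the diagonal, the loop converges to the identity pairing, and the matched features are read off as the 1's of the returned matrix.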