
4.3 Spatial movement

4.3.3 Conditions of regularizability

59 4. Regularization of the inverse positioning problem

that are all orthogonal to the vector $p(\theta)-q(\theta)$, so they lie in the plane orthogonal to $p(\theta)-q(\theta)$. Since the image space of $J_V^e$ is spanned by the vectors (4.92)–(4.94), and these vectors lie in a common plane, the rank of the matrix is at most two.

Spherical manipulators have another unique property:

Proposition 12. The joint axes of a manipulator intersect at a common point if and only if there is a point on the last segment for which $J_V^e = 0$.

Proof. If the joint axes of the manipulator intersect at some point $q(\theta)$ in the joint configuration $\theta\in\mathbb{R}^3$, while the end effector position is $p(\theta)$, then the linear velocity generators are given by (4.92)–(4.94). Since $q(\theta)$ lies on the last joint axis as well, it is on the last segment, so by choosing $p(\theta)=q(\theta)$ the generators in (4.92)–(4.94) are all zero, hence $J_V^e = 0$.

Conversely, suppose that $J_V^e = 0$ for a point on the last segment in the joint configuration $\theta\in\mathbb{R}^3$, and let this point be $s(\theta)$. Then $J_V^e = 0$ implies

$$v_1^e(\theta) = \omega_1^e(\theta)\times(s(\theta)-q_1(\theta)) = 0 \tag{4.95}$$
$$v_2^e(\theta) = \omega_2^e(\theta)\times(s(\theta)-q_2(\theta)) = 0 \tag{4.96}$$
$$v_3^e(\theta) = \omega_3^e(\theta)\times(s(\theta)-q_3(\theta)) = 0. \tag{4.97}$$

So for every joint $i\in\{1,2,3\}$, either $\omega_i^e(\theta)$ is parallel to $s(\theta)-q_i(\theta)$ or $s(\theta)=q_i(\theta)$. If $s(\theta)=q_i(\theta)$, then $s(\theta)$ is on the $i$th joint axis. If $\omega_i^e(\theta)$ is parallel to $s(\theta)-q_i(\theta)$, then $s(\theta)$ is also on the $i$th joint axis, since $q_i(\theta)$ is on the $i$th joint axis. So $s(\theta)$ is a point on all three joint axes, thus the joint axes intersect at a common point.
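Both directions of this argument, together with the rank bound of the preceding proposition, can be checked numerically. The sketch below is purely illustrative: it assumes a hypothetical common point $q$ on all three joint axes and generic axis directions $\omega_i$, builds the linear velocity generators $v_i = \omega_i\times(p-q)$, and confirms that they are orthogonal to $p-q$ (so the rank is at most two) and that they vanish when the end effector point is placed at $q$ itself.

```python
import numpy as np

rng = np.random.default_rng(0)

q = rng.standard_normal(3)                           # assumed common point of the joint axes
omegas = [rng.standard_normal(3) for _ in range(3)]  # generic joint axis directions
p = rng.standard_normal(3)                           # generic end effector position

# Linear velocity generators v_i = omega_i x (p - q_i); here q_i = q for all i.
JV = np.column_stack([np.cross(w, p - q) for w in omegas])

# Every column is orthogonal to p - q, so rank J_V^e <= 2.
assert np.allclose((p - q) @ JV, 0.0)
assert np.linalg.matrix_rank(JV) <= 2

# Choosing the end effector point at q gives J_V^e = 0.
JV_at_q = np.column_stack([np.cross(w, q - q) for w in omegas])
assert np.allclose(JV_at_q, 0.0)
```

The check only exercises the geometry of the cross product, not any particular manipulator model.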


it can be written as

$$J_{\mathrm{reg}} = \pi_V\,\mathrm{Ad}_{(I,-\gamma r)}\,J^e, \tag{4.99}$$

where $\pi_V$ is the projector matrix

$$\pi_V = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix}. \tag{4.100}$$

Since the rank of $J_{\mathrm{reg}}$ cannot be greater than the minimal rank of the matrices on the right-hand side of (4.99), and $\operatorname{rank}\pi_V = 3$, while $\operatorname{rank}\mathrm{Ad}_{(I,-\gamma r)} = 6$ since $\mathrm{Ad}$ is an automorphism (hence invertible), and $\operatorname{rank} J^e \le 3$, the rank of $J_{\mathrm{reg}}$ is

$$\operatorname{rank} J_{\mathrm{reg}} \le \min\Big\{\underbrace{\operatorname{rank}\pi_V}_{=3},\ \underbrace{\operatorname{rank}\mathrm{Ad}_{(I,-\gamma r)}}_{=6},\ \operatorname{rank} J^e\Big\} = \operatorname{rank} J^e. \tag{4.101}$$

So $\operatorname{rank} J_{\mathrm{reg}} = 3$ implies that $\operatorname{rank} J^e = 3$.

Recall that by Proposition 5 the ranks of the end effector Jacobian, the spatial manipulator Jacobian and the body manipulator Jacobian are identical. This implies that if the task Jacobian is regularizable, then $\operatorname{rank} J^s = 3$ and $\operatorname{rank} J^b = 3$ as well.

Since $J_{\mathrm{reg}} = \pi_V\,\mathrm{Ad}_{(I,-\gamma r)}\,J^e$, and $(I,-\gamma r)$ is a translational generator, the regularized Jacobian has the columns

$$J_{\mathrm{reg}} = \begin{pmatrix} v_1^e + \gamma\,\omega_1^e\times r & v_2^e + \gamma\,\omega_2^e\times r & v_3^e + \gamma\,\omega_3^e\times r \end{pmatrix} = \begin{pmatrix} v_1^e & v_2^e & v_3^e \end{pmatrix} + \begin{pmatrix} \gamma\,\omega_1^e\times r & \gamma\,\omega_2^e\times r & \gamma\,\omega_3^e\times r \end{pmatrix} \tag{4.102}$$

and by introducing the notation

$$\gamma\,J_\Omega^e\times r = \begin{pmatrix} \gamma\,\omega_1^e\times r & \gamma\,\omega_2^e\times r & \gamma\,\omega_3^e\times r \end{pmatrix}, \tag{4.103}$$

$J_{\mathrm{reg}}$ takes the short form

$$J_{\mathrm{reg}} = J_V^e + \gamma\,J_\Omega^e\times r. \tag{4.104}$$

Even if the task Jacobian is regularizable, it is important to know how to find the regularization vector $r$. The following theorem restricts the search space for the regularization vector by stating that it must be a linear combination of the linear velocity generators, i.e. it must lie in the image space of $J_V^e$.
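The factorization (4.99) and the column form (4.104) can be cross-checked numerically. In the sketch below the blocks of $J^e$ are random stand-ins rather than a real manipulator Jacobian, and $\mathrm{Ad}_{(I,-\gamma r)}$ is assumed to act on a twist $(v,\omega)$ as $(v-\gamma\,r\times\omega,\ \omega)$, which reproduces the columns $v_i^e+\gamma\,\omega_i^e\times r$.

```python
import numpy as np

def hat(v):
    """Skew-symmetric matrix: hat(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(1)
JV = rng.standard_normal((3, 3))   # stand-in for J_V^e
JO = rng.standard_normal((3, 3))   # stand-in for J_Omega^e
Je = np.vstack([JV, JO])           # end effector Jacobian, stacked as in (4.107)
r = rng.standard_normal(3)
gamma = 0.7

piV = np.hstack([np.eye(3), np.zeros((3, 3))])   # projector (4.100)
Ad = np.block([[np.eye(3), -gamma * hat(r)],     # Ad_(I, -gamma r) on twists (v, omega)
               [np.zeros((3, 3)), np.eye(3)]])

Jreg = piV @ Ad @ Je                             # factorized form (4.99)
Jreg_cols = JV + gamma * np.column_stack(
    [np.cross(JO[:, i], r) for i in range(3)])   # column form (4.104)

assert np.allclose(Jreg, Jreg_cols)
assert np.linalg.matrix_rank(Jreg) <= np.linalg.matrix_rank(Je)  # rank bound (4.101)
```

The agreement of the two expressions relies only on $-\gamma\,r\times\omega = \gamma\,\omega\times r$.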

Theorem 7. If $r\in\mathbb{R}^3$ is a regularization vector, then $r\in\operatorname{Ran} J_V^e$.


Proof. Since the regularized Jacobian is

$$J_{\mathrm{reg}} = J_V^e + \gamma\,J_\Omega^e\times r, \tag{4.105}$$

its range space satisfies

$$\operatorname{Ran} J_{\mathrm{reg}} \subseteq \operatorname{Ran} J_V^e \cup \operatorname{Ran}\big(J_\Omega^e\times r\big), \tag{4.106}$$

where $\cup$ denotes the subspace union. Suppose indirectly that $r\notin\operatorname{Ran} J_V^e$. Note that this implies $r\ne 0$, since the zero vector is an element of every subspace. Since every column of $J_\Omega^e\times r$ is orthogonal to $r$, we have $r\notin\operatorname{Ran}\big(J_\Omega^e\times r\big)$; because of (4.106) it follows that $r\notin\operatorname{Ran} J_{\mathrm{reg}}$. This implies that $J_{\mathrm{reg}}$ is not full rank, which contradicts the assumption that $r$ is a regularization vector.

Theorem 6 showed that a necessary condition for regularizability is that $J^e$ is full rank. This implies that the kernels of the Jacobians $J_V^e$ and $J_\Omega^e$ have only one common element, the zero vector:

Lemma 2. If $\operatorname{rank} J^e = 3$, then there exists no nonzero $n\in\mathbb{R}^3$ such that $n\in\operatorname{Ker} J_V^e$ and $n\in\operatorname{Ker} J_\Omega^e$.

Proof. Suppose indirectly that $n\ne 0$, $n\in\operatorname{Ker} J_V^e$ and $n\in\operatorname{Ker} J_\Omega^e$. Since the end effector Jacobian is

$$J^e = \begin{pmatrix} J_V^e \\ J_\Omega^e \end{pmatrix}, \tag{4.107}$$

$n$ is in the kernel of $J^e$, since

$$J^e n = \begin{pmatrix} J_V^e \\ J_\Omega^e \end{pmatrix} n = \begin{pmatrix} J_V^e n \\ J_\Omega^e n \end{pmatrix} = 0, \tag{4.108}$$

which implies that $J^e$ is not full rank, which is a contradiction.

Lemma 3. Let $Q\ne 0$ be a $3\times 3$ symmetric real matrix with zero diagonal elements, let $S$ be a subspace of $\mathbb{R}^3$ with $\dim S\le 2$, and let $\lambda\in\mathbb{R}^3$. Then the solution set of the quadratic equation $\langle Q\lambda,\lambda\rangle = 0$ restricted to $S$ is one of the following:

1. A line passing through the origin.

2. The origin.

3. Two lines passing through the origin.


Proof. Let $P_S$ be the orthogonal projection onto the subspace $S$; then $\dim S\le 2$ implies $\operatorname{rank} P_S\le 2$. The variable $\lambda$ restricted to $S$ is $P_S\lambda$, so the equation $\langle Q\lambda,\lambda\rangle = 0$ restricted to $S$ can be written as

$$\langle Q P_S\lambda,\ P_S\lambda\rangle = \langle P_S Q P_S\lambda,\ \lambda\rangle = 0. \tag{4.109}$$

Introducing $Q_S = P_S Q P_S$, the quadratic equation restricted to $S$ becomes

$$\langle Q_S\lambda,\ \lambda\rangle = 0. \tag{4.110}$$

Since $Q\ne 0$ is symmetric with zero diagonal, its rank is at least two, while $\operatorname{rank} P_S\le 2$, so by the rank–nullity theorem and the rank bound for matrix products, $1\le\operatorname{rank} Q_S\le 2$. Moreover, $Q_S$ is also a symmetric real matrix, so its eigenvalues are real and it is orthogonally diagonalizable, i.e. it can be written in the form

$$Q_S = U \begin{pmatrix} \kappa_1 & 0 & 0 \\ 0 & \kappa_2 & 0 \\ 0 & 0 & 0 \end{pmatrix} U^\top \tag{4.111}$$

with $U U^\top = I$ and $\kappa_1,\kappa_2\in\mathbb{R}$. Substituting this form into the quadratic equation results in

$$\lambda^\top U \begin{pmatrix} \kappa_1 & 0 & 0 \\ 0 & \kappa_2 & 0 \\ 0 & 0 & 0 \end{pmatrix} U^\top \lambda = 0. \tag{4.112}$$

Introducing the rotated variables $\tilde\lambda = (\tilde\lambda_1,\tilde\lambda_2,\tilde\lambda_3)$ defined as $\tilde\lambda = U^\top\lambda$, the quadratic equation becomes

$$\begin{pmatrix} \tilde\lambda_1 & \tilde\lambda_2 & \tilde\lambda_3 \end{pmatrix} \begin{pmatrix} \kappa_1 & 0 & 0 \\ 0 & \kappa_2 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \tilde\lambda_1 \\ \tilde\lambda_2 \\ \tilde\lambda_3 \end{pmatrix} = \kappa_1\tilde\lambda_1^2 + \kappa_2\tilde\lambda_2^2 = 0. \tag{4.113}$$

Note that after the transformation $U$ the subspace $S$ is spanned by the $\tilde\lambda_1$ and $\tilde\lambda_2$ directions, while the $\tilde\lambda_3$ direction is orthogonal to $S$.

Suppose that $\kappa_2 = 0$. In this case $\kappa_1\ne 0$, because $\kappa_1 = \kappa_2 = 0$ would imply $Q_S = 0$, which is a contradiction. The quadratic equation (4.113) reduces to $\kappa_1\tilde\lambda_1^2 = 0$, and the solution is $\tilde\lambda_1 = 0$, $\tilde\lambda_2\in\mathbb{R}$, which is a line passing through the origin.

Suppose that $\kappa_1 = 0$. In this case $\kappa_2\ne 0$, because $\kappa_1 = \kappa_2 = 0$ would imply $Q_S = 0$, which is a contradiction. The quadratic equation (4.113) reduces to $\kappa_2\tilde\lambda_2^2 = 0$, and the solution is $\tilde\lambda_1\in\mathbb{R}$, $\tilde\lambda_2 = 0$, which is a line passing through the origin.

Suppose that $\kappa_1\ne 0$ and $\kappa_2\ne 0$ with $\kappa_1\kappa_2 > 0$, i.e. they have the same sign. Then the only real solution of the quadratic equation (4.113) is the point $\tilde\lambda_1 = \tilde\lambda_2 = 0$, which is the origin.


Suppose that $\kappa_1\ne 0$ and $\kappa_2\ne 0$ with $\kappa_1\kappa_2 < 0$, i.e. they have different signs. Then the real solutions of the quadratic equation (4.113) are

$$\tilde\lambda_1 = \pm\sqrt{-\kappa_2/\kappa_1}\;\tilde\lambda_2,$$

which is two lines passing through the origin.
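The case analysis above can be mirrored numerically: project a symmetric zero-diagonal $Q$ onto a two-dimensional subspace $S$, read off the two potentially nonzero eigenvalues $\kappa_1,\kappa_2$ of $Q_S$, and classify the solution set by their signs. The sketch below uses random data and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Symmetric 3x3 matrix with zero diagonal, as in Lemma 3.
a, b, c = rng.standard_normal(3)
Q = np.array([[0.0, a, b],
              [a, 0.0, c],
              [b, c, 0.0]])

# Orthogonal projector onto a random two-dimensional subspace S.
B = np.linalg.qr(rng.standard_normal((3, 2)))[0]   # orthonormal basis of S
PS = B @ B.T
QS = PS @ Q @ PS                                   # restriction, as in (4.109)-(4.110)

# Q_S has one structurally zero eigenvalue (the direction orthogonal to S);
# kappa_1 and kappa_2 are the two remaining ones.
vals = np.linalg.eigvalsh(QS)
k1, k2 = vals[np.argsort(np.abs(vals))][1:]

if k1 * k2 < 0:
    solution = "two lines through the origin"
elif k1 * k2 > 0:
    solution = "the origin"
else:
    solution = "one line through the origin"

assert 1 <= np.linalg.matrix_rank(QS) <= 2         # rank bound from the proof
```

With generic random data the degenerate one-line case ($\kappa_1\kappa_2 = 0$ exactly) essentially never occurs; it would require a specially constructed $Q$ and $S$.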

Theorem 8. Suppose that for a spatial manipulator whose joint axes do not meet at one point, either $Q\ne 0$ or $\dim\operatorname{Ker} J_V^e = 2$. Then $J_V^e$ is regularizable if and only if $J^e$ is full rank.

Proof. If $J_V^e$ is regularizable, then $J^e$ is full rank by Theorem 6, so only the other direction needs to be proved. Suppose that $J^e$ is full rank. The regularized Jacobian is

$$J_{\mathrm{reg}} = J_V^e + \gamma\,(J_\Omega^e\times r), \tag{4.114}$$

which is regular if and only if

$$J_{\mathrm{reg}}\lambda = 0 \tag{4.115}$$

implies $\lambda = 0$. It will be proved that there exist $r\in\mathbb{R}^3$ and $\gamma\in\mathbb{R}$ such that (4.115) holds if and only if $\lambda = 0$. Note that $\lambda = 0$ is a trivial solution; throughout the proof all nonzero $\lambda$ are analyzed, and it is shown that if $\lambda\ne 0$, then for suitably chosen $r\in\mathbb{R}^3$ and $\gamma\in\mathbb{R}$ equation (4.115) does not hold.

Substituting (4.114) into the condition (4.115), it can be rephrased as follows: $J_{\mathrm{reg}}$ is regular if and only if

$$J_V^e\lambda + \gamma\,(J_\Omega^e\lambda)\times r = 0 \tag{4.116}$$

implies $\lambda = 0$. This condition can be further rearranged to

$$J_V^e\lambda = -\gamma\,(J_\Omega^e\lambda)\times r. \tag{4.117}$$

Because of the properties of the vector product, this equation holds if and only if

$$\langle J_V^e\lambda,\ r\rangle = 0 \tag{4.118}$$
$$\langle J_V^e\lambda,\ J_\Omega^e\lambda\rangle = 0 \tag{4.119}$$

and for every $\tilde\lambda$ that is a solution of (4.118) and (4.119),

$$\gamma = -\operatorname{sign}\big\langle J_V^e\tilde\lambda,\ (J_\Omega^e\tilde\lambda)\times r\big\rangle\,\frac{\big\|J_V^e\tilde\lambda\big\|}{\big\|(J_\Omega^e\tilde\lambda)\times r\big\|} \tag{4.120}$$

if $(J_\Omega^e\tilde\lambda)\times r\ne 0$.
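The step from (4.115) to (4.116) uses the identity $(J_\Omega^e\times r)\lambda = (J_\Omega^e\lambda)\times r$: combining the columns $\omega_i^e\times r$ with weights $\lambda_i$ is the same as first forming $J_\Omega^e\lambda$ and then taking the cross product with $r$. A quick numerical sanity check with random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(2)
JO = rng.standard_normal((3, 3))   # stand-in for J_Omega^e, columns are omega_i
r = rng.standard_normal(3)
lam = rng.standard_normal(3)

# Column-wise cross-product matrix (J_Omega^e x r), as in (4.103) with gamma = 1.
JOxr = np.column_stack([np.cross(JO[:, i], r) for i in range(3)])

# Bilinearity of the cross product gives (J_Omega^e x r) lam == (J_Omega^e lam) x r.
assert np.allclose(JOxr @ lam, np.cross(JO @ lam, r))
```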


Let $\lambda_V\in\operatorname{Ker} J_V^e$. In this case $J_V^e\lambda_V = 0$, so ensuring $(J_\Omega^e\lambda_V)\times r\ne 0$ guarantees that (4.117) does not hold. Since the joint axes of the manipulator do not intersect at the same point, $\dim\operatorname{Ker} J_V^e\le 2$ by Proposition 12. Moreover, by Lemma 2, $J_\Omega^e\lambda_V\ne 0$ for any nonzero $\lambda_V\in\operatorname{Ker} J_V^e$. Since $\operatorname{Ker} J_V^e$ is a subspace of dimension at most two, the set $\{J_\Omega^e\lambda_V : \lambda_V\in\operatorname{Ker} J_V^e\}$ is a plane, a line or a point containing the origin (in the last case it is the origin itself). If $r$ is chosen perpendicular to $\{J_\Omega^e\lambda_V : \lambda_V\in\operatorname{Ker} J_V^e\}$, then $(J_\Omega^e\lambda_V)\times r\ne 0$, and (4.120) yields $\gamma = 0$ (by substituting $\tilde\lambda := \lambda_V$). So if $r$ is perpendicular to $\{J_\Omega^e\lambda_V : \lambda_V\in\operatorname{Ker} J_V^e\}$, then choosing $\gamma\ne 0$ guarantees that (4.117) does not hold for any nonzero $\lambda_V\in\operatorname{Ker} J_V^e$.

Let $S = (\operatorname{Ker} J_V^e)^\perp$ be the orthogonal complement of $\operatorname{Ker} J_V^e$, i.e. $S$ is a subspace of $\mathbb{R}^3$ with $S\oplus\operatorname{Ker} J_V^e = \mathbb{R}^3$, and every vector from $S$ is orthogonal to every vector from $\operatorname{Ker} J_V^e$. Thus any vector from $\mathbb{R}^3$ can be written as $\lambda = c_V\lambda_V + c_S\lambda_S$, with $\lambda_V\in\operatorname{Ker} J_V^e$, $\lambda_S\in S$ for some $c_V, c_S\in\mathbb{R}$. For every nonzero $\lambda_S\in S$ we have $J_V^e\lambda_S\ne 0$, so $(J_\Omega^e\lambda_S)\times r = 0$ already ensures that (4.117) does not hold; hence for $\lambda_S\in S$ the condition $(J_\Omega^e\lambda_S)\times r\ne 0$ need not be enforced. Note that even if (4.118) and (4.119) hold for some $\lambda_S\in S$ but $(J_\Omega^e\lambda_S)\times r = 0$, this means that $J_\Omega^e\lambda_S$ and $r$ are parallel, and thus (4.117) does not hold.

Suppose that $\dim\operatorname{Ker} J_V^e = 2$. This yields $\dim\operatorname{Ran} J_V^e = 1$ and $\dim S = 1$, thus the set $\{J_V^e\lambda_S : \lambda_S\in S\}$ is a one-dimensional subspace (a line passing through the origin) in the image space of $J_V^e$. However, $r$ is also in the image space of $J_V^e$ because of Theorem 7, and since the image space is one-dimensional, $\langle J_V^e\lambda_S, r\rangle\ne 0$ for any nonzero $\lambda_S\in S$ and nonzero $r\in\mathbb{R}^3$, thus (4.118) cannot hold for any nonzero $\lambda_S\in S$. Since $J_V^e(c_V\lambda_V + c_S\lambda_S) = c_S J_V^e\lambda_S$, this concludes the proof if $\dim\operatorname{Ker} J_V^e = 2$.

If $\dim\operatorname{Ker} J_V^e < 2$, then $Q\ne 0$ because of the condition of the theorem.

Notice that, because of the definition of the matrix transpose, (4.119) can be rearranged to

$$\big\langle\lambda,\ (J_V^e)^\top J_\Omega^e\,\lambda\big\rangle = 0. \tag{4.121}$$

Because in a homogeneous quadratic form the matrix can be replaced by its symmetric part, (4.121) can be written as

$$\Big\langle\lambda,\ \tfrac{1}{2}\big((J_V^e)^\top J_\Omega^e + (J_\Omega^e)^\top J_V^e\big)\lambda\Big\rangle = 0, \tag{4.122}$$

and because of the definition of $Q$, this equation simplifies to the quadratic form

$$\langle\lambda,\ Q\lambda\rangle = 0. \tag{4.123}$$
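Both steps can be verified numerically: replacing the matrix of the quadratic form by its symmetric part leaves the form unchanged, and assuming the generator structure of (4.92)–(4.94), i.e. $v_i^e = \omega_i^e\times(p-q_i)$, the diagonal entries $\langle v_i^e,\omega_i^e\rangle$ of $Q$ vanish, which is what makes Lemma 3 applicable. A sketch with random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(4)
p = rng.standard_normal(3)
omegas = rng.standard_normal((3, 3))   # columns: joint axis directions omega_i
qs = rng.standard_normal((3, 3))       # columns: points q_i on the joint axes

JO = omegas
JV = np.column_stack([np.cross(JO[:, i], p - qs[:, i]) for i in range(3)])

# Symmetric part of (J_V^e)^T J_Omega^e, as in (4.122).
Q = 0.5 * (JV.T @ JO + JO.T @ JV)

lam = rng.standard_normal(3)
# The symmetrized form agrees with the original quadratic form (4.121).
assert np.isclose(lam @ (JV.T @ JO) @ lam, lam @ Q @ lam)
# Q has zero diagonal, since <omega_i x (p - q_i), omega_i> = 0.
assert np.allclose(np.diag(Q), 0.0)
```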

Suppose that $\dim\operatorname{Ker} J_V^e = 1$. Then the condition (4.118) becomes

$$\langle J_V^e(c_V\lambda_V + c_S\lambda_S),\ r\rangle = \langle c_S J_V^e\lambda_S,\ r\rangle = 0. \tag{4.124}$$


Since $\dim\operatorname{Ker} J_V^e = 1$ implies $\dim\operatorname{Ran} J_V^e = 2$ and $\dim S = 2$, the set $\{J_V^e\lambda_S : \lambda_S\in S\}\subseteq\operatorname{Ran} J_V^e$ is a two-dimensional subspace, while $r\in\operatorname{Ran} J_V^e$ spans a one-dimensional subspace (since it is a vector in the image space of $J_V^e$), so the set of $\tilde\lambda_S$ for which (4.124) holds is a one-dimensional subspace of $S$, orthogonal to the vector $(J_V^e)^\top r$. Since this set is a one-dimensional subspace, it is a line passing through the origin; identify $\tilde\lambda_S$ with the direction vector of this line. Thus the condition (4.123) needs to be examined only on the two-dimensional subspace spanned by the vectors $\tilde\lambda_S$ and $\lambda_V$; denote this subspace by $\tilde S$. Then by Lemma 3 the solution of the quadratic equation (4.123) restricted to $\tilde S$ is either two lines, one line, or a point, all containing the origin. If there is a nontrivial solution (two lines or one line), let $\tilde\lambda$ be the set of the unit direction vectors of the lines. Then if $\gamma$ is chosen such that (4.120) does not hold, (4.117) does not hold either, and this concludes the proof if $\dim\operatorname{Ker} J_V^e = 1$.

Suppose that $\dim\operatorname{Ker} J_V^e = 0$. In this case $\dim S = 3$, and $r$ can be chosen arbitrarily. Let $r\in\mathbb{R}^3$ be fixed. Then the condition (4.118) is only satisfied on the set $\{\tilde\lambda_S : \tilde\lambda_S\in S,\ \tilde\lambda_S\perp (J_V^e)^\top r\}$, which is a two-dimensional subspace of $\mathbb{R}^3$; denote it by $\tilde S$. Then by Lemma 3 the solution of (4.123) restricted to $\tilde S$ is either two lines, a line or a point containing the origin. If there is a nontrivial solution (two lines or one line), let $\tilde\lambda$ be the set of the unit direction vectors of the lines. Then if $\gamma$ is chosen such that (4.120) does not hold, (4.117) does not hold either, and this concludes the proof if $\dim\operatorname{Ker} J_V^e = 0$.

4.3.4 The determinant and the singular values of the