
The steps of the presented analysis procedure are significantly less computationally demanding than the past methods published in the literature considering the same queueing system.

In this section we compare our procedure with three prior methods: the method of [3] (transformed to continuous time), its improved version published in [45], and the procedure of [48]. Note that the latter two procedures are far less general than [3] or the proposed one.

They can handle only preemptive resume service, they do not analyze the sojourn time at all, and [48] is only able to provide the moments of the number of customers.

Since all involved procedures are exact, only the scalability is investigated, that is, the analysis time as a function of the number of phases.

For this purpose let us define the MMAP matrices as

D_0(K) =

and matrix D_H(K) is defined similarly. The diagonal entries denoted by • are determined uniquely such that the row sums of D_0(K) + D_L(K) + D_H(K) are zeroes.

The service times are characterized by order-2 PH distributions with parameters σ_H = [0.16667  0.83333], having service rates µ_L = 2.8 and µ_H = 2. The utilization depends on K; it varies between 0.6 and 0.75.

Figure 34 depicts the analysis time required to obtain the first 10 moments of the number of low priority customers in the system in the preemptive case, as a function of K.² (This is the only performance measure that is supported by all the procedures.) It is clearly visible that the presented method is at least an order of magnitude faster than the prior ones, and it is able to solve systems with a large number of phases. No numerical problems were encountered even with the largest model. Additionally, as opposed to [45] and [48], the presented procedure can provide sojourn time related performance measures, and it is able to handle the case of non-preemptive service as well.

7.4 departure process analysis of priority queues

This section describes a method to obtain the multi-class lag-1 joint moments of the inter-departure times of priority queues. From these moments it is possible to create a MMAP (based on the results of Section 3.2.3) in order to approximate the departure process in a Markovian way.

The approach to derive the multi-class joint moments of the inter-departure times is similar to the one presented in Section 5.3.3 for the single-class case. For simplicity, we discuss only the

² In our MATLAB implementation the NARE problems are solved by the ADDA procedure [86] and the Sylvester equations are solved by the lyap function of MATLAB, which is based on the Hessenberg–Schur algorithm [33].


Figure 34.: Comparison of the execution times of various procedures (execution time [s] as a function of the size of the MMAP, K; curves: presented method, method of [48], method of [45], method of [3])

two-class case with preemptive resume priority, but the procedure itself can be extended to handle more general systems as well.

The main idea is that the stochastic behavior of two consecutive departure intervals is independent of the number of customers in the queue when there are at least two customers. The reason is that the system cannot become idle during the two consecutive departure intervals in this case. As a consequence, we have to distinguish just six cases, as follows:

• 0, 0: the last departure left the system empty,

• 1, 0: at the last departure one high and zero low priority customers are left in the system,

• 1, 1+: at the last departure one high and at least one low priority customers are left in the system,

• 2+, 0+: at the last departure at least two high priority customers are left in the system,

• 0, 1: at the last departure zero high and one low priority customers are left in the system,

• 0, 2+: at the last departure zero high and at least two low priority customers are left in the system.

(Note that there were only three cases to distinguish in the single-class MAP/MAP/1 system.) The analysis method presented in Sections 7.1 and 7.2 provides only per-class performance measures and does not allow the analysis of the joint behavior of the priority classes. Hence, a different approach is needed to obtain the probabilities of the six cases listed above.

The approach we are going to use is based on [3], where the analysis of the discrete time DMAP/PH/1 priority queue is presented. In contrast to [3], here we consider the continuous time model and extend the results of that paper in several ways. We provide new closed formulas and more efficient algorithms than the existing ones.

7.4.1 The MMAP[K]/PH[K]/1 preemptive priority queue as a QBD process

It is possible to define a three-dimensional CTMC to model the queue length behavior. One dimension keeps track of the length of the high priority queue, the second one the length of the low priority queue, and the third dimension describes the phase of the arrival MMAP together with the phases of the low and high priority service PH distributions.


With proper numbering of the states the structure of the generator of this Markov chain is

Q =
| L_0   F                |
| B     L     F          |
|       B     L     F    |
|             ⋱     ⋱   ⋱ | ,

where the blocks of the generator are infinite matrices corresponding to the same number of high priority customers but different numbers of low priority customers and different phases of the arrival and service processes. The blocks of Q are defined as

F = diag⟨E_H⟩ ,   (280)

With these definitions matrices E_L (and E_H) contain the transition rates accompanied by low (and high) priority arrivals, while J_1^{(L)} (and J_1^{(H)}) are the ones accompanied by low (and high) priority services, respectively.

Since the generator is a QBD (with an infinite number of phases), the solution is matrix-geometric (see Section 4.1.3), thus we have

y_k = y_0 R^k ,   k ≥ 0,   (285)

where y_k is the vector of the steady state probabilities of the states with k high priority customers. This vector can be partitioned according to the number of low priority customers, y_k = {y_{k,j}, j ≥ 0}, where y_{k,j} denotes the vector of steady state probabilities for the states with k high and j low priority customers. Furthermore, we denote the marginal steady state probability vectors of the classes as

y_i^{(H)} = ∑_{j=0}^{∞} y_{i,j} ,    y_i^{(L)} = ∑_{j=0}^{∞} y_{j,i} .

Due to the definition of the blocks of the generator, both matrices R and G exhibit an upper-block-Toeplitz structure, since the number of low priority arrivals and the number of customers in the system are independent given the phase of the arrival and service processes (shown in [3]). Thus we have:

G =
| G_0^{(L)}  G_1^{(L)}  G_2^{(L)}  ⋯ |
|            G_0^{(L)}  G_1^{(L)}  ⋯ |
|                       G_0^{(L)}  ⋯ |
|                                  ⋱ | ,

and matrix R has the same upper-block-Toeplitz structure built from the blocks R_i^{(L)}.

As in every homogeneous QBD, these matrices are the minimal non-negative solutions to the matrix quadratic equations (see (123) and (128))

0 = B + L G + F G² ,   (287)
0 = F + R L + R² B .   (288)

Applying the definitions of matrices B, L and F, and exploiting the upper-block-Toeplitz structure of matrices R and G, we can derive relationships for matrices R_i^{(L)} and G_i^{(L)}, i ≥ 0 (see [3]).

The equations for matrices G_i^{(L)} are as follows:

for i = 0 :   0 = J_1^{(H)} + (E_0 + J_0^{(H)}) G_0^{(L)} + E_H (G_0^{(L)})² ,   (289)
for i > 0 :   0 = E_L G_{i−1}^{(L)} + (E_0 + J_0^{(H)}) G_i^{(L)} + E_H ∑_{k=0}^{i} G_k^{(L)} G_{i−k}^{(L)} .   (290)

Matrices G_i^{(L)} have important probabilistic interpretations. Entry (a, b) of matrix G_i^{(L)} is the conditional probability that, starting from level n with the background process being in phase a, (1) the first visit to level n−1 occurs in phase b, and (2) i low priority customers arrive during the first passage time.

The following expressions can be obtained for matrices R_i^{(L)}:

for i = 0 :   0 = E_H + R_0^{(L)} (E_0 + J_0^{(H)}) + (R_0^{(L)})² J_1^{(H)} ,   (291)
for i > 0 :   0 = R_{i−1}^{(L)} E_L + R_i^{(L)} (E_0 + J_0^{(H)}) + ∑_{k=0}^{i} R_k^{(L)} R_{i−k}^{(L)} J_1^{(H)} .   (292)

Interestingly, summing up equations (290) from i = 1 to ∞ and adding (289) leads to a matrix quadratic equation for Ĝ^{(L)} = ∑_{i=0}^{∞} G_i^{(L)}, since

0 = J_1^{(H)} + (E_0 + E_L + J_0^{(H)}) Ĝ^{(L)} + E_H (Ĝ^{(L)})² .   (293)

Similarly, the sum of matrices R_i^{(L)}, denoted by R̂^{(L)} = ∑_{i=0}^{∞} R_i^{(L)}, can be obtained as the minimal non-negative solution of a matrix quadratic equation as well, as

0 = E_H + R̂^{(L)} (E_0 + E_L + J_0^{(H)}) + (R̂^{(L)})² J_1^{(H)} .   (294)

The generating function of matrix series G_i^{(L)}, defined by G^{(L)}(z) = ∑_{k=0}^{∞} z^k G_k^{(L)}, will be used several times in the sequel. From (290) and (289) we have that G^{(L)}(z) is the solution of the following matrix-quadratic equation:

0 = J_1^{(H)} + (E_0 + z E_L + J_0^{(H)}) G^{(L)}(z) + E_H G^{(L)}(z)² .   (295)
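Equation (293) has exactly the same form as (289), so the same kind of solver applies to Ĝ^{(L)}; equation (294) is the transposed variant with the unknown on the left. A hedged Python/NumPy sketch for the latter form (again plain functional iteration, only to make the structure explicit; the thesis's implementation relies on dedicated NARE solvers, cf. footnote 2):

    import numpy as np

    def solve_quadratic_R(A0, A1, A2, tol=1e-12, max_iter=100000):
        # Minimal non-negative solution of 0 = A0 + X A1 + X^2 A2,
        # the form of (294) with A0 = E_H, A1 = E_0 + E_L + J_0^{(H)}, A2 = J_1^{(H)}.
        X = np.zeros_like(A0)
        A1inv = np.linalg.inv(-A1)
        for _ in range(max_iter):
            X_new = (A0 + X @ X @ A2) @ A1inv
            if np.max(np.abs(X_new - X)) < tol:
                break
            X = X_new
        return X_new

    # Rhat = solve_quadratic_R(EH, E0 + EL + J0H, J1H)   # equation (294)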


Finally, based on (285) and the block structure of matrix R, the steady state probability vector of the number of high and low priority customers in the system can be expressed as

y_{i,j} = ∑_{k=0}^{j} y_{i−1,k} R_{j−k}^{(L)} ,   i ≥ 1, j ≥ 0.   (296)
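Evaluating (296) amounts to a block convolution per level. A small illustrative Python sketch (yprev and R are hypothetical lists holding the row vectors y_{i−1,k} and the blocks R_k^{(L)}, stored as NumPy arrays, for k = 0, …, J):

    def next_level(yprev, R):
        # One application of (296): return y_{i,j} = sum_{k<=j} y_{i-1,k} R_{j-k}^{(L)}, j = 0..J.
        J = len(yprev) - 1
        return [sum(yprev[k] @ R[j - k] for k in range(j + 1)) for j in range(J + 1)]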

To completely characterize the joint distribution of the number of high and low priority customers, it remains to derive the stationary probability vector at level 0, y_0 = {y_{0,j}, j ≥ 0}.

7.4.2 Analysis of level zero

Relations for vector y_0 can be derived from the boundary equations yQ = 0 as

y_0 L_0 + y_0 R B = 0 ,    y_{0,0} 𝟙 = 1 − λ^{(H)}/µ^{(H)} − λ^{(L)}/µ^{(L)} .   (297)

Due to the structure of matrices L_0, B and R, (297) is equivalent to the solution of an M/G/1-type CTMC (see [90]). However, by using the relation RB = FG (see e.g. [57], page 144) it is possible to re-formulate (297) and the corresponding M/G/1-type CTMC in a more appropriate form. The equations for y_0 by using G instead of R are then

y_0 L_0 + y_0 F G = 0 ,    y_{0,0} 𝟙 = 1 − λ^{(H)}/µ^{(H)} − λ^{(L)}/µ^{(L)} ,   (298)

and the generator of the related M/G/1-type CTMC becomes

Q_0 = L_0 + F G .   (299)

The solution of M/G/1-type Markov chains is based on their invariant matrix GH_0, whose entry in position (i, j) is the probability that, starting in phase i at level n, the first phase visited at level n−1 is phase j [57]. (Hence, this matrix plays the same role as the matrix G of QBDs.)

In this particular M/G/1-type system matrix GH_0 is the minimal non-negative solution to the matrix polynomial equation

0 = J_1^{(L)} + (E_0 + J_0^{(L)}) GH_0 + E_L GH_0² + E_H ∑_{i=0}^{∞} G_i^{(L)} GH_0^i · GH_0 .   (300)

The stationary probability vectors y_{0,i} can be calculated recursively using the Ramaswami formula [70]. Tailoring it to this particular system gives

y_{0,i} =    (301)

where matrices T_i are defined by

T_i =    (302)


and vector y_{0,0} is the solution of the linear system

y_{0,0}

(Observe that the matrix in the parenthesis is the generator of the background process restricted to level (0, 0).)

7.4.3 The joint moments of the departure process

As shown in Section 5.3.3, the departure process of a MAP/MAP/1 queue is an infinite state MAP. Similarly, the departure process of the MMAP[K]/PH[K]/1 priority queue is an infinite MMAP as well. This MMAP generates a high priority (low priority) arrival when a high priority (low priority) departure occurs in the system. The computation of the joint moments of the departure process, η_{i,j}^{(L)}, η_{i,j}^{(H)}, is rather difficult based on this infinite MMAP representation. Instead of using this representation directly, we construct a finite MMAP (considering the 6 cases listed at the beginning of Section 7.4), such that the joint distribution of the first two arrivals of this MMAP – starting from the appropriate initial distribution – is identical with the one of two consecutive stationary departures of the MMAP[K]/PH[K]/1 priority queue.

The blocks of the matrices of this finite MMAP are constructed from the block matrices of the QBD describing the queue length process ((280)–(283)), such that the transitions between the 6 listed cases are taken into consideration. The initial probability distribution is computed according to the stationary distribution of the queue length process just after a departure, considering the 6 listed cases.

The matrices of the resulting MMAP representation are as follows:

H_0 =

where the diagonal blocks of H_0 are:

M_1 = E_0 ,

Matrices H_0, H_L and H_H can be interpreted as follows. The first set of states corresponds to case 0, 0. From this state, a high priority arrival moves the Markov chain to state 1, 0, and a low priority arrival to state 0, 1. No service events can occur in 0, 0. From state 1, 0 (second set of states) there are transitions to 1, 1+ due to a low priority arrival (E_L), to 2+, 0+ due to a high priority arrival (E_H), and to 0, 0 due to a high priority service (J_1^{(H)}). The latter is accompanied by a high priority departure event, hence the corresponding matrix block is located in H_H. The rest of the blocks can be interpreted similarly.

The steady-state distribution of the MMAP[K]/PH[K]/1 queue just after a departure can be calculated from the stationary distribution as

x_{i,j} =    (307)

The components of vector v are computed based on (307) and (296) as

x_{0,0} = ( y_{1,0} J_1^{(H)} + y_{0,1} J_1^{(L)} ) / ( λ^{(L)} + λ^{(H)} ) ,


x_{0,1} = ( y_{1,1} J_1^{(H)} + y_{0,2} J_1^{(L)} ) / ( λ^{(L)} + λ^{(H)} )
        = ( (y_{0,0} R_1^{(L)} + y_{0,1} R_0^{(L)}) J_1^{(H)} + y_{0,2} J_1^{(L)} ) / ( λ^{(L)} + λ^{(H)} ) ,   (312)

x_{0,2+} = ( ∑_{j=3}^{∞} y_{0,j} J_1^{(L)} + ∑_{j=2}^{∞} y_{1,j} J_1^{(H)} ) / ( λ^{(L)} + λ^{(H)} )
         = ( (y_0^{(H)} − y_{0,0} − y_{0,1} − y_{0,2}) J_1^{(L)} + (∑_{j=0}^{∞} y_{1,j} − y_{0,0} R_0^{(L)} − y_{0,0} R_1^{(L)} − y_{0,1} R_0^{(L)}) J_1^{(H)} ) / ( λ^{(L)} + λ^{(H)} )
         = ( (y_0^{(H)} − y_{0,0} − y_{0,1} − y_{0,2}) J_1^{(L)} ) / ( λ^{(L)} + λ^{(H)} )
           + ( (∑_{j=0}^{∞} ∑_{k=0}^{j} y_{0,k} R_{j−k}^{(L)} − y_{0,0} R_0^{(L)} − y_{0,0} R_1^{(L)} − y_{0,1} R_0^{(L)}) J_1^{(H)} ) / ( λ^{(L)} + λ^{(H)} )
         = ( (y_0^{(H)} − y_{0,0} − y_{0,1} − y_{0,2}) J_1^{(L)} + (y_0^{(H)} R̂^{(L)} − y_{0,0} R_0^{(L)} − y_{0,0} R_1^{(L)} − y_{0,1} R_0^{(L)}) J_1^{(H)} ) / ( λ^{(L)} + λ^{(H)} ) ,   (313)

where vector y_0^{(H)} denotes the probability vector that there are no high priority customers in the system, thus y_0^{(H)} = ∑_{i=0}^{∞} y_{0,i}.

Having computed the vector v and the matrices H_0, H_H and H_L, the joint moments of the departure process are obtained according to (64), hence

η_{i,j}^{(c)} = i! j! v (−H_0)^{−i−1} H_c (−H_0)^{−j} 𝟙 ,   i, j ≥ 0,  c ∈ {H, L}.   (314)

7.4.4 An efficient, truncation-free procedure

Section 7.4.2 describes how to calculate the steady state joint distribution of the number of customers, based on which Section 7.4.3 derives the joint moments of the inter-departure times. These results, however, are not straightforward to implement in an efficient way. The potential numerical pitfalls are:

• To obtain matrix GH_0 the matrix polynomial equation (300) has to be solved, but this matrix equation has infinitely many terms, and relies on infinitely many elements of the matrix series G_i^{(L)}.

• To obtain vectors y_{0,j}, j ≥ 0, matrices T_i are required (see (301)), which are defined by infinite summations (302).

• To obtain vectors y_{i,j}, i > 0, j ≥ 0, a convolution-like formula needs to be evaluated, which gets slower and slower with increasing i (see (296)).

• To obtain vector v, vectors like x_{2+,0+} need to be computed, which involves infinite summations of vectors y_{0,j}.

The naive numerical implementation of such a procedure evaluates the infinite summations by truncation. In order to avoid the loss of accuracy, the truncation threshold must be high, depending on the parameters (in particular, the load) of the system, which makes the procedure slow.


Fortunately, it is possible to develop a numerically efficient procedure that addresses all the critical steps listed above without using any truncation. The efficient solution of (300) is addressed in Section 7.4.4.1, while the accurate calculation of the vectors y_{0,i} and further related quantities is discussed in Section 7.4.4.2.

7.4.4.1 Two fundamental matrices and their relations

There are two matrices that play key roles in the efficient analysis of the system. One of them is GH_0, which is the solution of (300), and the other one is matrix Z, defined by

Z = ∑_{i=0}^{∞} G_i^{(L)} GH_0^i .   (315)
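For orientation, a naive truncation-based evaluation of (315) would look like the following Python/NumPy sketch (G_list is a hypothetical finite list of the blocks G_0^{(L)}, …, G_N^{(L)}); it is shown only to make the definition concrete, since the procedure developed below computes Z without any such truncation.

    import numpy as np

    def Z_truncated(G_list, GH0):
        # Naive evaluation of (315): Z ~ sum_i G_i^{(L)} GH0^i, truncated after len(G_list) terms.
        Z = np.zeros_like(GH0)
        P = np.eye(GH0.shape[0])
        for Gi in G_list:
            Z += Gi @ P
            P = P @ GH0
        return Z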

Theorem 33. If the algebraic multiplicities of the eigenvalues of matrices GH_0 and Z are one, GH_0 Z = Z GH_0 holds.

To prove this theorem we first need the following Lemma:

Lemma 7. If the algebraic multiplicities of the eigenvalues of matrix G^{(L)}(z) and matrix E_0 + z E_L + E_H G^{(L)}(z) are one, the eigenvectors of G^{(L)}(z) and E_0 + z E_L + E_H G^{(L)}(z) are the same.

Proof of the Lemma. The proof uses the same techniques as in [47].

Let ν_i and u_i be the eigenvalue and the corresponding right eigenvector of G^{(L)}(z) (for simplicity we assume that G^{(L)}(z) has distinct eigenvalues). As G^{(L)}(z) satisfies the matrix-quadratic equation of (295), ν_i satisfies

det[ J_1^{(H)} + (E_0 + z E_L + J_0^{(H)}) ν_i + E_H ν_i² ] = 0 ,   (316)

and the associated right eigenvector u_i is the solution of

[ J_1^{(H)} + (E_0 + z E_L + J_0^{(H)}) ν_i + E_H ν_i² ] · u_i = 0 .   (317)

Note that both ν_i and the vectors u_i are functions of z.

By substituting (284) into (316) and by some basic manipulations we get

det[ ( M_1 ⊕ M_2 ) ⊗ I ] = 0 ,   where  M_1 = ν_i D_0 + ν_i z D_L + ν_i² D_H  and  M_2 = s_H σ_H + ν_i S_H ,   (318)

from which it follows that M_1 ⊕ M_2 has a zero eigenvalue. Let δ_j and σ_k denote the eigenvalues of M_1 and M_2, respectively. Since the eigenvalues of M_1 ⊕ M_2 are δ_j + σ_k, to have a zero eigenvalue there must exist j_0 and k_0 such that δ_{j_0} = −σ_{k_0}. The eigenvector of M_1 belonging to δ_{j_0} is denoted by θ_{j_0}, the one of M_2 belonging to σ_{k_0} is denoted by ψ_{k_0}. Let us introduce φ_i = θ_{j_0} ⊗ ψ_{k_0} ⊗ 𝟙.

Now we show that φ_i is an eigenvector of G^{(L)}(z) associated with ν_i, thus it satisfies (317):

[ J_1^{(H)} + (E_0 + z E_L + J_0^{(H)}) ν_i + E_H ν_i² ] φ_i
  = [ I ⊗ M_2 ⊗ I + M_1 ⊗ I ⊗ I ] · ( θ_{j_0} ⊗ ψ_{k_0} ⊗ 𝟙 )
  = θ_{j_0} ⊗ ( σ_{k_0} ψ_{k_0} ) ⊗ 𝟙 + ( δ_{j_0} θ_{j_0} ) ⊗ ψ_{k_0} ⊗ 𝟙 = 0 .   (319)

since the service process of the low priority class is stopped during the busy period of the high priority class.

The proof will be similar to the one of Lemma 7, using the same techniques as in [47] again.

Let λ_k and v_k be the eigenvalue and the corresponding right eigenvector of GH_0 (for simplicity we assume that GH_0 has distinct eigenvalues). As GH_0 satisfies the matrix equation of (300), λ_k satisfies

det[ J_1^{(L)} + (E_0 + J_0^{(L)}) λ_k + E_L λ_k² + E_H ∑_{i=0}^{∞} G_i^{(L)} λ_k^{i+1} ] = 0 ,   (320)

and the associated right eigenvector v_k is the solution of

[ J_1^{(L)} + (E_0 + J_0^{(L)}) λ_k + E_L λ_k² + E_H ∑_{i=0}^{∞} G_i^{(L)} λ_k^{i+1} ] · v_k = 0 .   (321)

By substituting (284) into (321) and by some basic manipulation we get a Kronecker-sum form analogous to (318), from which it follows that N_1 ⊕ N_2 has a zero eigenvalue. Let α_j and β_h denote the eigenvalues of N_1 and N_2, respectively. Since the eigenvalues of N_1 ⊕ N_2 are α_j + β_h, to have a zero eigenvalue there must exist j_0 and h_0 such that α_{j_0} = −β_{h_0}; the corresponding eigenvectors of N_1 and N_2 are denoted by ζ_{j_0} and ξ_{h_0}, respectively, and we introduce µ_k = ζ_{j_0} ⊗ ξ_{h_0}. Next, we show that µ_k is an eigenvector of Z. Observe that µ_k is an eigenvector of Z if and only if it is an eigenvector of G^{(L)}(z)|_{z=λ_k}, since


As Lemma 7 states that the eigenvectors of matrix G^{(L)}(z) and matrix E_0 + z E_L + E_H G^{(L)}(z) are the same, it is enough to prove that µ_k is an eigenvector of (E_0 + z E_L + E_H G^{(L)}(z))|_{z=λ_k}:

( E_0 + λ_k E_L + E_H G^{(L)}(λ_k) ) · µ_k
  = [ D_0 ⊗ I ⊗ I + λ_k D_L ⊗ I ⊗ I + ∑_{i=0}^{∞} λ_k^i ( D_H ⊗ I ⊗ I ) ( G̃_i^{(L)} ⊗ I ) ] · ( ζ_{j_0} ⊗ ξ_{h_0} )
  = [ N_1 ⊗ I ] · ( ζ_{j_0} ⊗ ξ_{h_0} ) = α_{j_0} ( ζ_{j_0} ⊗ ξ_{h_0} ) .   (325)

As GH_0 and Z have the same eigenvectors, the same matrix diagonalizes them, consequently they commute.

Note that the theorem can be generalized to the case when the eigenvalues are not distinct, but it requires the detailed discussion of the combination of the multiplicities of the eigenvalues of M_1 and M_2 (N_1 and N_2), which we neglect here.

Based on this commutative property, the next theorem enables the efficient computation of GH_0 and Z.

Theorem 34. Matrices GH_0 and Z satisfy the following coupled matrix-quadratic equations:

0 = J_1^{(H)} + (E_0 + J_0^{(H)}) Z + E_L GH_0 Z + E_H Z² ,
0 = J_1^{(L)} + (E_0 + J_0^{(L)}) GH_0 + E_H Z GH_0 + E_L GH_0² .   (326)

Proof. To obtain equations for Z, we multiply (290) by GH_0^i from the right, sum it from i = 1 to ∞, and add (289) to it. By using (315) we get

0 = J_1^{(H)} + E_L Z GH_0 + (E_0 + J_0^{(H)}) Z + E_H ∑_{i=0}^{∞} ∑_{k=0}^{i} G_k^{(L)} G_{i−k}^{(L)} GH_0^i .   (327)

The last term becomes E_H Z² by swapping the sums and exploiting that GH_0 and Z commute, providing the first matrix quadratic equation. The second matrix quadratic equation can be obtained from (300), when the definition of Z is applied.

Interestingly, the two matrix equations show perfect symmetry. While the solution of coupled Sylvester equations has an extensive literature (e.g. [27, 60, 89] are recent methods), there are no methods available for coupled matrix quadratic equations (to the best of our knowledge). A very simple method is given by Algorithm 6. (We are sure that more efficient solution methods can be developed as well, but even this simple method performs very well in our numerical examples.)

Note that Algorithm 6 needs only the successive solution of matrix quadratic equations, which is much more efficient than the direct application of the results of Section 7.4.2 both in time and in space requirements, since the infinite series of G_i^{(L)} matrices and the solution of (300) are not needed.

Although the results above have been derived in an algebraic way, matrices GH_0 and Z have important probabilistic interpretations. Let us denote by (n_H, n_L) the set of states in which there are n_H high and n_L low priority customers in the queue. Then, entry (a, b) of matrix GH_0 is the conditional probability that, starting from state a in (0, 1), the first visit to (0, 0) occurs in state b.


Algorithm 6 Solving the coupled matrix quadratic equations of (326)

procedure GH_0, Z = SolveCoupled(E_0, E_H, E_L, J_0^{(H)}, J_1^{(H)}, J_0^{(L)}, J_1^{(L)})
    k ← 0,  GH_0^{(0)} ← I
    repeat
        Solve 0 = J_1^{(H)} + (E_0 + J_0^{(H)} + E_L GH_0^{(k)}) Z^{(k+1)} + E_H (Z^{(k+1)})² for Z^{(k+1)}
        Solve 0 = J_1^{(L)} + (E_0 + J_0^{(L)} + E_H Z^{(k+1)}) GH_0^{(k+1)} + E_L (GH_0^{(k+1)})² for GH_0^{(k+1)}
        k ← k + 1
    until ‖GH_0^{(k)} − GH_0^{(k−1)}‖ < ε and ‖Z^{(k)} − Z^{(k−1)}‖ < ε
    return GH_0^{(k)}, Z^{(k)}
end procedure
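The following Python/NumPy sketch mirrors Algorithm 6, under the assumption that each inner matrix quadratic equation is solved by plain functional iteration (the thesis's MATLAB implementation uses the ADDA NARE solver instead, cf. footnote 2); all inputs are assumed to be equally sized square NumPy arrays.

    import numpy as np

    def solve_quadratic(A0, A1, A2, tol=1e-12, max_iter=100000):
        # Minimal non-negative solution of 0 = A0 + A1 X + A2 X^2 by functional iteration.
        X = np.zeros_like(A0)
        for _ in range(max_iter):
            X_new = np.linalg.solve(-A1, A0 + A2 @ X @ X)
            if np.max(np.abs(X_new - X)) < tol:
                break
            X = X_new
        return X_new

    def solve_coupled(E0, EH, EL, J0H, J1H, J0L, J1L, tol=1e-10, max_iter=500):
        # Algorithm 6: alternate between the two equations of (326) until GH0 and Z stabilize.
        n = E0.shape[0]
        GH0, Z = np.eye(n), np.zeros((n, n))
        for _ in range(max_iter):
            Z_new = solve_quadratic(J1H, E0 + J0H + EL @ GH0, EH)
            GH0_new = solve_quadratic(J1L, E0 + J0L + EH @ Z_new, EL)
            if max(np.max(np.abs(GH0_new - GH0)), np.max(np.abs(Z_new - Z))) < tol:
                return GH0_new, Z_new
            GH0, Z = GH0_new, Z_new
        return GH0, Z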

Similarly, the entry (a, b) of matrix Z is the conditional probability that, starting from state a in (1, 0), the first visit to (0, 0) occurs in state b. Using these probabilistic interpretations it is easy to see that Z and GH_0 commute: let us investigate the busy period generated by a high and a low priority customer, thus the system is in (1, 1) initially. Since the probability that the first passage to (0, 0) occurs in state b is not affected by the order of service, we immediately have that GH_0 Z = Z GH_0.

7.4.4.2 The boundary distribution

Matrices T_i, i ≥ 0, which are necessary to compute the boundary probabilities y_{0,i}, are defined by infinite summations (see (302)). To obtain matrices T_i efficiently we introduce the closely related matrices A_n defined by

A_n = ∑_{i=n}^{∞} G_i^{(L)} GH_0^{i−n} ,   n ≥ 0.   (328)

Theorem 35. Matrices A_n can be obtained recursively as the solutions of discrete Sylvester-type matrix equations

0 = (−E_0 − J_0^{(H)} − E_H G_0^{(L)})^{−1} ( E_L A_{n−1} + E_H ∑_{k=1}^{n−1} G_k^{(L)} A_{n−k} ) − A_n
    + (−E_0 − J_0^{(H)} − E_H G_0^{(L)})^{−1} E_H A_n Z   (329)

for n > 0, and for n = 0 we have that A_0 = Z.

Proof. The case of n = 0 is trivial (see (315) for the definition of Z).

To derive equations for n > 0, let us multiply equations (290) by GH_0^{i−n} from the right and sum them up from i = n to ∞. We get

0 = E_L ∑_{i=n}^{∞} G_{i−1}^{(L)} GH_0^{i−n} + (E_0 + J_0^{(H)}) ∑_{i=n}^{∞} G_i^{(L)} GH_0^{i−n} + E_H ∑_{i=n}^{∞} ∑_{k=0}^{i} G_k^{(L)} G_{i−k}^{(L)} GH_0^{i−n} ,

where the first sum equals A_{n−1} and the second equals A_n. The last term can be transformed further by swapping the order of the two summations, leading to

E_H ∑_{i=n}^{∞} ∑_{k=0}^{i} G_k^{(L)} G_{i−k}^{(L)} GH_0^{i−n} = E_H ( ∑_{k=0}^{n−1} G_k^{(L)} A_{n−k} + A_n Z ) ,

where the commuting property of matrices Z and GH_0 was utilized. Collecting all parts of the equation gives the Sylvester equation, establishing the theorem.

Observe that the Sylvester equation for A_n depends only on matrices A_i, i < n, which are already available in step n.
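Since (329) has the fixed-point form A_n = C + W A_n Z, with W = (−E_0 − J_0^{(H)} − E_H G_0^{(L)})^{−1} E_H and C collecting the terms that involve only A_0, …, A_{n−1}, it can be treated as a discrete Sylvester (Stein) equation. A hedged Python/NumPy sketch using straightforward vectorization (a dedicated Stein-equation solver would scale better for large models); A is assumed to be a list with A[0] = Z, and G a list of the blocks G_i^{(L)}:

    import numpy as np

    def solve_An(n, A, G, Z, E0, EL, EH, J0H):
        # One step of (329): build C and W, then solve A_n = C + W A_n Z
        # via the Kronecker (vectorized) form (I - Z^T (x) W) vec(A_n) = vec(C).
        m = E0.shape[0]
        Minv = np.linalg.inv(-E0 - J0H - EH @ G[0])
        S = np.zeros((m, m))
        for k in range(1, n):
            S += G[k] @ A[n - k]
        C = Minv @ (EL @ A[n - 1] + EH @ S)
        W = Minv @ EH
        K = np.eye(m * m) - np.kron(Z.T, W)
        x = np.linalg.solve(K, C.flatten(order='F'))
        return x.reshape((m, m), order='F')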

A consequence of Theorem 35 is that matrices T_i can be expressed explicitly.

Corollary 18. Matrices T_i, i ≥ 0, can be obtained as

Finally, it is possible to obtain the sum of matrices A_n explicitly as well, which will be useful later.

Proof. Summing equations (329) from n = 1 to ∞, swapping the sums, and applying the definition of Â provides the corollary after some simple algebraic manipulations.

Corollary 20. The sum of matrices T_i, denoted by T̂ = ∑_{i=0}^{∞} T_i, is given by

T̂ = E_0 + J_1^{(H)} + E_L GH_0 + E_L + E_H Â .   (334)

Proof. The corollary follows from Corollaries 18 and 19.


7.4.4.3 Calculating the joint moments of the departure process

The steps to obtain the quantities necessary for the departure process analysis are as follows.

1. First solve matrices GH_0 and Z with Algorithm 6.

2. Solve the matrix quadratic equations (293) and (294) to obtain matrices Ĝ^{(L)} and R̂^{(L)}.

3. Calculate matrices G_i^{(L)} for i = 0, 1, 2 based on (289) and (290), matrices R_i^{(L)} for i = 0, 1 based on (291) and (292), matrices A_i for i = 1, 2 based on (329), and matrices T_i for i = 1, 2 based on Corollary 18.

4. Compute matrix Â by solving (333) and matrix T̂ from (334).

After this preparation the required stationary probabilities are calculated.

• Vectors y_{0,0}, y_{0,1} and y_{0,2} are computed directly from (304) and (301) using the T_i matrices computed above.

• To derive vector y_0^{(H)} = ∑_{i=0}^{∞} y_{0,i}, equations (301) are summed up from i = 1 to ∞, and (304) is added, leading to

0 = ∑_{i=1}^{∞} ∑_{k=0}^{i} y_{0,k} T_{i−k}  (from (301))  +  y_{0,0} T_0 − y_{0,0} J_0^{(L)}  (from (304))
  = ∑_{k=0}^{∞} y_{0,k} ∑_{i=k}^{∞} T_{i−k} − y_{0,0} J_0^{(L)}  =  y_0^{(H)} T̂ − y_{0,0} J_0^{(L)} ,   (335)

implying y_0^{(H)} = y_{0,0} J_0^{(L)} T̂^{−1}.

Using the quantities calculated so far, it is possible to obtain all the six components of vector v, namely x_{0,0}, x_{1,0}, x_{1,1+}, x_{2+,0+}, x_{0,1} and x_{0,2+}. The joint moments of the departure process are given by (314).
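With the vector v and the matrices H_0, H_L, H_H at hand, evaluating (314) requires only a few matrix inversions and powers. A short illustrative Python/NumPy sketch of (314) (the names and the truncation orders i_max, j_max are illustrative only):

    import numpy as np
    from math import factorial

    def joint_moments(v, H0, HL, HH, i_max=3, j_max=3):
        # Lag-1 joint moments of the departure process according to (314):
        # eta_{i,j}^{(c)} = i! j! v (-H0)^{-(i+1)} H_c (-H0)^{-j} 1.
        n = H0.shape[0]
        ones = np.ones(n)
        M = np.linalg.inv(-H0)
        eta = {}
        for name, Hc in (("H", HH), ("L", HL)):
            out = np.zeros((i_max + 1, j_max + 1))
            for i in range(i_max + 1):
                for j in range(j_max + 1):
                    out[i, j] = (factorial(i) * factorial(j) *
                                 v @ np.linalg.matrix_power(M, i + 1) @ Hc
                                 @ np.linalg.matrix_power(M, j) @ ones)
            eta[name] = out
        return eta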

Part III

QUEUEING NETWORKS

8

QUEUEING NETWORK ANALYSIS BASED ON THE JOINT MOMENTS

8.1 traffic based decomposition for the analysis of queueing networks
