
2. USE OF THE MARKOV CHAIN IN THE DEVELOPMENT OF DECISIONS

2.1. MARKOV CHAINS

In the real world there are a multitude of phenomena, from fields such as management, economics and social structures, that cannot be characterized deterministically and therefore require a probabilistic treatment. Stochastic processes are used in the study of such phenomena.

Definition 1. A stochastic process is a random experiment that consists of a series of random sub-experiments. A special class of such processes is represented by Markov chains.

Many random experiments are conducted in stages. Therefore, such an experiment can be considered as a sequence of sub-experiments and each result of the experiment is determined by the results of the sub-experiments (in order).

Thus, a stochastic process is an indexed family of random variables {X_t}, where t runs through an index set T, here taken to be the positive integers T = N, and X_t represents a quantitative or qualitative characteristic of the system under study.

We have a succession of experiments with the same possible results, so t will be considered a time point taking the values 1, 2, 3, ..., n; this gives the sequence of experiments. For each moment, the possible results will be denoted 1, 2, ..., m (m finite). These possible results will be called the states in which the system can be at any given time. The unit of measure for the successive times t depends on the system studied.

Among such sequences we can mention the type in which the probabilities of the results at one point are independent of the results of the previous experiments (for example: repeatedly throwing a die, or drawing a ball from an urn with replacement).

Another type of sequence is one in which the probabilities of the results at a given time depend on the results of previous experiments (for example: successive draws from an urn without replacement). In this type of experiment two extreme sub-cases can be distinguished:

1. One extreme: the probabilities of the results at a given time depend on the results of all previous experiments in the sequence;

2. The other extreme: the probabilities of the results at a given moment depend only on the results of the immediately preceding experiment. In this situation the sequence of experiments is called a Markov process (chain) [10].

Definition 2. A Markov process, or Markov chain, is a succession of experiments in which each experiment has m possible outcomes E1, E2, ..., Em, and the probability of each outcome depends only on the outcome of the previous experiment.


Definition 3. A stochastic process is said to have the Markov property if the following equality holds:

P(X_{t+1} = j | X_1 = k_1, X_2 = k_2, ..., X_{t-1} = k_{t-1}, X_t = i) = P(X_{t+1} = j | X_t = i),

for t = 1, 2, ..., n and for any sequence k_1, k_2, ..., k_{t-1}, i, j of states from the set of m possible states of the system.

For two events A and B, we denote by P(A | B) the probability of event A conditioned by event B.

The Markov property states that the conditional probability of any future event (X_{t+1} = j), given the past states X_1 = k_1, ..., X_{t-1} = k_{t-1} and the present state X_t = i, is independent of the past states and depends only on the present state of the process.
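To make the property concrete, here is a minimal simulation sketch in Python; the 3-state matrix and all of its values are purely illustrative assumptions, not taken from the text. The next state is drawn using only the current state's row of probabilities, never the earlier history.

```python
import random

# Hypothetical 3-state transition matrix: row i holds the probabilities
# of moving from state i to each state j (each row sums to 1).
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.2, 0.3, 0.5],
]

def simulate(p, start, steps):
    """Simulate a Markov chain: the next state is sampled using only the
    current state's row of the transition matrix (the Markov property)."""
    state = start
    path = [state]
    for _ in range(steps):
        state = random.choices(range(len(p)), weights=p[state])[0]
        path.append(state)
    return path

print(simulate(P, start=0, steps=10))
```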

There are a wide variety of phenomena whose behaviour suggests a Markov process. As examples, we mention the following situations:

• the probability that a person will buy a product of a particular brand (detergent, beer, cosmetics, footwear, etc.) may depend on the brand chosen on the previous purchase;

• the probability that a person has a record may depend on whether or not the parents had a record;

• the likelihood that a patient's health status will improve, worsen, or remain stable in one day may depend on what happened the previous day.

The evolution of a Markov process can be described by means of a matrix.

The transition matrix is a very efficient tool for representing the behaviour of a Markov process.

Definition 4. Consider a Markov process that has m possible mutually exclusive results E1, E2, ..., Em. The general form of a transition matrix for this kind of experiment is:

P = [p_ij], i, j = 1, ..., m,

where the rows correspond to the current state and the columns to the future state:

        p_11  p_12  ...  p_1m
P =     p_21  p_22  ...  p_2m
        ...   ...   ...  ...
        p_m1  p_m2  ...  p_mm

When modelling a system, at any moment the system can be in one of the m possible states. A state corresponds to a result of the experiment. At the end of each experiment, the system will be in one of these states.

The transition matrix is made up of the elements p_ij, which represent the conditional probability that the system will move from the current state i to the future state j.

2.1.1. Observation

1. p_ij with i = j represents the probability that the system will remain in the same state after the experiment is performed, and p_ij with i ≠ j represents the probability that the system will change from one state to another.

2. The transition matrix is a square matrix of order m.

Properties

The elements of the transition matrix must satisfy the following:

1. 0 ≤ p_ij ≤ 1, i, j = 1, ..., m (since the entries are probabilities),

2. p_i1 + p_i2 + ... + p_im = 1, i = 1, 2, ..., m. Each row sum must equal 1 because E1, E2, ..., Em form a complete system of events. Property 2 ensures that, given a current state i of the system, after the experiment the system will certainly move into one of the m possible states j.
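As an illustration of these two properties, the short Python check below (with a hypothetical 2 × 2 matrix; the values are assumptions for the example) verifies that every entry lies in [0, 1] and that every row sums to 1:

```python
# Hypothetical 2-state transition matrix used only for the check below.
P = [
    [0.8, 0.2],
    [0.4, 0.6],
]

def is_transition_matrix(p, tol=1e-9):
    """Return True if every entry is in [0, 1] and every row sums to 1."""
    for row in p:
        if any(x < 0 or x > 1 for x in row):        # property 1
            return False
        if abs(sum(row) - 1.0) > tol:               # property 2
            return False
    return True

print(is_transition_matrix(P))   # True for the matrix above
```

A candidate matrix that fails either test cannot serve as the transition matrix of a Markov chain.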

2.1.2. Regular Markov chains

Because Markov chains are stochastic processes, it is not possible to know with certainty which state the system is in at a given moment; therefore, the system must be described in terms of probabilities.

Definition 6. Consider a Markov chain with m states. A state vector for the Markov chain is a probability vector X = [x_1 x_2 ... x_m]. The coordinate x_i of the state vector X is interpreted as the probability that the system is in state i.

When it is known with certainty that the system is in a particular state, the state vector has a special form. Thus, if one knows for sure that the system is in the i-th state, the state vector will have its i-th component equal to 1 and all other components equal to 0: X = [0 0 ... 1 ... 0], with the 1 in position i.

The behaviour of a Markov chain can be described by a sequence of state vectors. The initial state of the system can be described by a state vector denoted X_0. After one transition the system can be described by a vector X_1, and after k transitions by the state vector X_k. The relationship between these vectors can be summarized by the following theorem:

Theorem 2.

Let a Markov process with transition matrix P. If X_k and X_{k+1} are the state vectors that describe the process after k and k + 1 transitions, respectively, then

X_{k+1} = X_k · P.

In particular:

X_k = X_0 · P^k.


So the state vector X_k that describes the system after k transitions is the product of the initial state vector X_0 and the matrix P^k.

Observation. X_0, X_1, X_2, ..., X_k, ... are all 1 × m row vectors.
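A minimal numpy sketch of these relations, using an illustrative two-state transition matrix (the values are assumptions for the example, not from the text):

```python
import numpy as np

# Illustrative transition matrix and initial state vector (a 1 x m row vector).
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])
X0 = np.array([1.0, 0.0])                 # the system surely starts in state 1

X1 = X0 @ P                               # X_{k+1} = X_k * P, here with k = 0
X5 = X0 @ np.linalg.matrix_power(P, 5)    # X_k = X_0 * P^k, here with k = 5

print(X1)   # [0.8 0.2]
print(X5)
```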

If one is interested in studying a stochastic process after a large number of transitions, it is useful to study its general long-term behaviour. For certain types of Markov chains this is possible.

Generally, for a Markov chain with m states, the probability that the system is in state j after k transitions depends on the state from which it started. Thus, p_1j(k) is the probability that the system will be in state j after k transitions if it started in state 1; p_2j(k), ..., p_mj(k) have similar meanings. In general there is no reason for these probabilities to be (or to be expected to become) equal. But for some Markov chains there is a strictly positive probability q_j associated with state j such that after k transitions the probabilities p_ij(k) all become very close to q_j. In other words, the probability that the system will reach state j after k transitions (where k is sufficiently large) is about the same regardless of the state from which it starts.

Markov chains that have such a long-term behaviour form a separate class that is defined as follows.

Definition 7. A Markov chain with transition matrix P is called regular if there is a positive integer k such that P^k has all elements strictly positive.
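A small numpy sketch of this definition (the matrix is hypothetical): it tests whether some power of P has all entries strictly positive, and prints a high power of P, whose rows approach the common limiting vector (q_1, ..., q_m) described above.

```python
import numpy as np

# Hypothetical transition matrix: P itself contains a zero entry, but P^2
# does not, so the chain is regular.
P = np.array([[0.0, 1.0],
              [0.5, 0.5]])

def is_regular(p, max_power=50):
    """Check whether some power P^k (k <= max_power) has all entries > 0."""
    pk = np.eye(len(p))
    for _ in range(max_power):
        pk = pk @ p
        if np.all(pk > 0):
            return True
    return False

print(is_regular(P))                    # True
print(np.linalg.matrix_power(P, 50))    # both rows are close to [1/3, 2/3]
```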
