A GENERALIZED RESIDUAL INFORMATION MEASURE AND ITS PROPERTIES

M.A.K. BAIG AND JAVID GANI DAR

P. G. Department of Statistics, University of Kashmir, Srinagar - 190006 (India)

E-mail: baigmak@yahoo.co.in, javinfo.stat@yahoo.co.in

Received: 15 April, 2008

Accepted: 26 June, 2008

Communicated by: N.S. Barnett

2000 AMS Sub. Class.: 60E15, 62N05, 90B25, 94A17, 94A24.

Key words: Shannon entropy, Renyi entropy, Residual entropy, Generalized residual entropy, Lifetime distributions.

Abstract: Ebrahimi and Pellerey [7] and Ebrahimi [4] proposed the Shannon residual entropy function as a dynamic measure of uncertainty. In this paper we introduce and study a generalized information measure for residual lifetime distributions. It is shown that the proposed measure uniquely determines the distribution function.

Also, characterization results for some lifetime distributions are discussed. Some discrete distribution results are also addressed.


Contents

1 Introduction
2 Characterization of Distributions
  2.1 Continuous Case
  2.2 Discrete Case
3 A New Class of Lifetime Distributions


1. Introduction

Let $X$ be an absolutely continuous non-negative random variable describing the random lifetime of a component. Let $f(x)$ be the probability density function, $F(x)$ the cumulative distribution function and $R(x)$ the survival function of the random variable $X$.

A classical measure of uncertainty for X is the differential entropy, also known as the Shannon information measure, defined as

(1.1) $H(X) = -\int_0^{\infty} f(x)\log f(x)\,dx$.

If $X$ is a discrete random variable taking values $x_1, x_2, \ldots, x_n$ with respective probabilities $p_1, p_2, \ldots, p_n$, then Shannon's entropy is defined as

(1.2) $H(P) = H(p_1, p_2, \ldots, p_n) = -\sum_{k=1}^{n} p_k \log p_k$.

Renyi [11] generalized (1.1) and defined the measure

(1.3) $H_{\alpha}(X) = \frac{1}{\alpha(1-\alpha)}\log\int_0^{\infty} f^{\alpha}(x)\,dx, \quad \alpha > 1$,

and in the discrete case

(1.4) $H_{\alpha}(X) = \frac{1}{\alpha(1-\alpha)}\log\sum_{k=1}^{n} p_k^{\alpha}, \quad \alpha > 1$.

Furthermore, in the continuous case

(1.5) $\lim_{\alpha\to 1} H_{\alpha}(X) = -\int_0^{\infty} f(x)\log f(x)\,dx = H(X)$

and in the discrete case

(1.6) $\lim_{\alpha\to 1} H_{\alpha}(X) = -\sum_{k=1}^{n} p_k\log p_k = H(P)$,

which is Shannon's entropy in both cases.

The role of differential entropy as a measure of uncertainty in residual lifetime distributions has attracted increasing attention in recent years. As stated by Ebrahimi [4], the residual entropy at a time $t$ of a random lifetime $X$ is defined as the differential entropy of $(X \mid X > t)$. Formally, for all $t > 0$, the residual entropy of $X$ is given by

(1.7) $H(X;t) = -\int_t^{\infty} \frac{f(x)}{R(t)}\log\frac{f(x)}{R(t)}\,dx$

or

$H(X;t) = 1 - \frac{1}{R(t)}\int_t^{\infty} f(x)\log h(x)\,dx$,

where $h(t) = \frac{f(t)}{R(t)}$ is the hazard function or failure rate of the random variable $X$. Given that an item has survived up to time $t$, $H(X;t)$ measures the uncertainty of the remaining lifetime of the component.

In the case of a discrete random variable, we have

(1.8) $H(t_j) = -\sum_{k=j}^{n} \frac{P(t_k)}{R(t_j)}\log\frac{P(t_k)}{R(t_j)}$,

where $R(t)$ is the reliability function of the random variable $X$.

Nair and Rajesh [9] studied the characterization of lifetime distributions by using the residual entropy function corresponding to Shannon's entropy. In this sequel,


we investigate the problem of the characterization of a lifetime distribution using the following generalized residual entropy function:

(1.9) $H_{\alpha}(X;t) = \frac{1}{\alpha(1-\alpha)}\log\frac{\int_t^{\infty} f^{\alpha}(x)\,dx}{R^{\alpha}(t)}, \quad \alpha > 1$.

As $\alpha \to 1$, (1.9) reduces to (1.7).

The measure (1.9) is the residual life entropy corresponding to (1.3).
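For readers who want to experiment with (1.9), the following is a minimal numerical sketch (not from the paper) that evaluates the generalized residual entropy of an exponential lifetime by direct integration and checks that it approaches the Shannon residual entropy (1.7) as $\alpha \to 1$. It assumes NumPy and SciPy are available; the scale parameter and the function names (`H_alpha`, `H_residual`) are illustrative choices.

```python
# Minimal numerical sketch of (1.9) for an exponential lifetime (not from the paper).
# Assumes NumPy/SciPy; theta and the function names are illustrative choices.
import numpy as np
from scipy.integrate import quad

theta = 2.0  # exponential scale parameter

def f(x):
    return np.exp(-x / theta) / theta            # density f(x)

def R(t):
    return np.exp(-t / theta)                    # survival function R(t)

def H_alpha(t, alpha):
    """Generalized residual entropy (1.9), evaluated by numerical integration."""
    integral, _ = quad(lambda x: f(x) ** alpha, t, np.inf)
    return np.log(integral / R(t) ** alpha) / (alpha * (1.0 - alpha))

def H_residual(t):
    """Shannon residual entropy (1.7)."""
    integrand = lambda x: (f(x) / R(t)) * np.log(f(x) / R(t))
    value, _ = quad(integrand, t, np.inf)
    return -value

t = 1.5
for alpha in (2.0, 1.1, 1.01, 1.001):
    print(alpha, H_alpha(t, alpha))
print("limit (1.7):", H_residual(t))             # equals 1 + log(theta) for the exponential
```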


2. Characterization of Distributions

2.1. Continuous Case

Let $X$ be a continuous non-negative random variable representing component failure time with failure distribution $F(t) = P(X \le t)$ and survival function $R(t) = 1 - F(t)$ with $R(0) = 1$. We define the generalized entropy for residual life as

(2.1) $H_{\alpha}(X;t) = \frac{1}{\alpha(1-\alpha)}\log\frac{\int_t^{\infty} f^{\alpha}(x)\,dx}{R^{\alpha}(t)}, \quad \alpha > 1$,

and so

(2.2) $\int_t^{\infty} f^{\alpha}(x)\,dx = R^{\alpha}(t)\exp\left(\alpha(1-\alpha)H_{\alpha}(X;t)\right), \quad \alpha > 1$.

We now show that $H_{\alpha}(X;t)$ uniquely determines $R(t)$.

Theorem 2.1. If $X$ has an absolutely continuous distribution $F(t)$ with reliability function $R(t)$ and an increasing residual entropy $H_{\alpha}(X;t)$, then $H_{\alpha}(X;t)$ uniquely determines $R(t)$.

Proof. Differentiating (2.2) with respect to $t$, we have

(2.3) $h^{\alpha}(t) = \alpha h(t)\exp\left(\alpha(1-\alpha)H_{\alpha}(X;t)\right) - \alpha(1-\alpha)\exp\left(\alpha(1-\alpha)H_{\alpha}(X;t)\right)H'_{\alpha}(X;t)$,

where $h(t) = \frac{f(t)}{R(t)}$ is the failure rate function. Hence, for a fixed $t > 0$, $h(t)$ is a solution of

(2.4) $g(x) = x^{\alpha} - \alpha x\exp\left(\alpha(1-\alpha)H_{\alpha}(X;t)\right) + \alpha(1-\alpha)\exp\left(\alpha(1-\alpha)H_{\alpha}(X;t)\right)H'_{\alpha}(X;t) = 0$.


Differentiating both sides with respect to $x$, we have

(2.5) $g'(x) = \alpha x^{\alpha-1} - \alpha\exp\left(\alpha(1-\alpha)H_{\alpha}(X;t)\right)$.

Now, for $\alpha > 1$, $g(0) \le 0$, $g(\infty) = \infty$, and $g(x)$ first decreases and then increases, with a minimum at $x_t = \exp\left(-\alpha H_{\alpha}(X;t)\right)$. So the unique solution to $g(x) = 0$ is given by $x = h(t)$. Thus $H_{\alpha}(X;t)$ determines $h(t)$ uniquely and hence determines $R(t)$ uniquely.
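The root-finding argument above can be checked numerically on a concrete example. The sketch below (an illustration, not part of the paper) builds $g(x)$ of (2.4) for an exponential lifetime, using a numerically computed $H_{\alpha}(X;t)$ and a finite-difference estimate of $H'_{\alpha}(X;t)$, and confirms that the failure rate $h(t) = 1/\theta$ satisfies $g(h(t)) \approx 0$, with the minimum of $g$ at $x_t = \exp(-\alpha H_{\alpha}(X;t))$. NumPy/SciPy and the chosen parameter values are assumptions.

```python
# Sketch of the root-finding argument behind (2.3)-(2.5) on an exponential example
# (illustrative, not from the paper). Assumes NumPy/SciPy.
import numpy as np
from scipy.integrate import quad

theta, alpha, t = 2.0, 2.0, 1.0
f = lambda x: np.exp(-x / theta) / theta
R = lambda s: np.exp(-s / theta)

def H_alpha(s):
    integral, _ = quad(lambda x: f(x) ** alpha, s, np.inf)
    return np.log(integral / R(s) ** alpha) / (alpha * (1.0 - alpha))

H = H_alpha(t)
dH = (H_alpha(t + 1e-5) - H_alpha(t - 1e-5)) / 2e-5     # finite-difference H'_alpha(X;t)
E = np.exp(alpha * (1.0 - alpha) * H)

def g(x):
    # g(x) from (2.4); the failure rate h(t) = 1/theta should be a root
    return x ** alpha - alpha * x * E + alpha * (1.0 - alpha) * E * dH

print("g(h(t)) =", g(1.0 / theta))                      # approximately 0
print("minimum of g at", np.exp(-alpha * H))            # x_t = exp(-alpha*H_alpha(X;t))
```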

Theorem 2.2. The uniform distribution over $(a,b)$, $a < b$, can be characterized by a decreasing generalized residual entropy $H_{\alpha}(X;t) = \frac{1}{\alpha}\log(b-t)$, $b > t$.

Proof. For the case of the uniform distribution over $(a,b)$, $a < b$, we have

(2.6) $H_{\alpha}(X;t) = \frac{1}{\alpha}\log(b-t), \quad b > t$,

which is decreasing in $t$. Also, $x_t = \exp\left(-\alpha H_{\alpha}(X;t)\right)$, and therefore

$g(x_t) = x_t^{\alpha} - \alpha x_t\exp\left(\alpha(1-\alpha)H_{\alpha}(X;t)\right) + \alpha(1-\alpha)\exp\left(\alpha(1-\alpha)H_{\alpha}(X;t)\right)H'_{\alpha}(X;t) = 0$.

Hence $H_{\alpha}(X;t) = \frac{1}{\alpha}\log(b-t)$ is the unique solution to $g(x_t) = 0$, which proves the theorem.
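A quick numerical check of Theorem 2.2 (illustrative, assuming NumPy/SciPy; the interval and $\alpha$ are arbitrary): evaluating (2.1) for a uniform distribution by quadrature and comparing with the closed form $\frac{1}{\alpha}\log(b-t)$.

```python
# Numerical check of Theorem 2.2 (illustrative; assumes NumPy/SciPy).
import numpy as np
from scipy.integrate import quad

a, b, alpha = 0.0, 3.0, 2.5
f = lambda x: 1.0 / (b - a)                  # uniform(a, b) density
R = lambda t: (b - t) / (b - a)              # survival function

def H_alpha(t):
    integral, _ = quad(lambda x: f(x) ** alpha, t, b)
    return np.log(integral / R(t) ** alpha) / (alpha * (1.0 - alpha))

for t in (0.5, 1.0, 2.0):
    print(t, H_alpha(t), np.log(b - t) / alpha)   # the two columns should agree
```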

Theorem 2.3. Let $X$ be a random variable having a generalized residual entropy of the form

(2.7) $H_{\alpha}(X;t) = \frac{1}{\alpha(1-\alpha)}\log k - \frac{1}{\alpha}\log h(t)$,

where $h(t)$ is the failure rate function of $X$. Then $X$ has


(i) an exponential distribution iff $k = \frac{1}{\alpha}$,
(ii) a Pareto distribution iff $k < \frac{1}{\alpha}$, and
(iii) a finite range distribution iff $k > \frac{1}{\alpha}$.

Proof. (i) Let $X$ have the exponential distribution,

$f(t) = \frac{1}{\theta}\exp\left(-\frac{t}{\theta}\right), \quad t > 0,\ \theta > 0$.

The reliability function is given by

$R(t) = \exp\left(-\frac{t}{\theta}\right)$

and the failure rate function by

$h(t) = \frac{1}{\theta}$.

Therefore, after simplification, using (2.1),

(2.8) $H_{\alpha}(X;t) = \frac{1}{\alpha(1-\alpha)}\log k - \frac{1}{\alpha}\log h(t)$,

where $k = \frac{1}{\alpha}$ and $h(t) = \frac{1}{\theta}$. Thus (2.7) holds.

Conversely, suppose that $k = \frac{1}{\alpha}$; then

$\frac{1}{\alpha(1-\alpha)}\log k - \frac{1}{\alpha}\log h(t) = \frac{1}{\alpha(1-\alpha)}\log\frac{\int_t^{\infty} f^{\alpha}(x)\,dx}{R^{\alpha}(t)}, \quad \alpha > 1$,


which gives

(2.9) $h(t) = \left[\frac{1-k\alpha}{k(\alpha-1)}\,t + \frac{1}{h(0)}\right]^{-1} = (at+b)^{-1}$,

where $a = \frac{1-k\alpha}{k(\alpha-1)} = 0$, since $k = \frac{1}{\alpha}$, and $b = \frac{1}{h(0)}$. Clearly (2.9) is the failure rate function of the exponential distribution.

(ii) The density function of the Pareto distribution is given by

$f(t) = \frac{b^{1/a}}{(at+b)^{1+\frac{1}{a}}}, \quad t \ge 0,\ a > 0,\ b > 0$.

The reliability function is given by

$R(t) = \frac{b^{1/a}}{(at+b)^{1/a}}, \quad t \ge 0,\ a > 0,\ b > 0$,

and the failure rate by

(2.10) $h(t) = (at+b)^{-1}$.

After simplification, (2.1) yields

(2.11) $H_{\alpha}(X;t) = \frac{1}{\alpha(1-\alpha)}\log k - \frac{1}{\alpha}\log h(t)$,

where $k = \frac{1}{a\alpha+\alpha-a} < \frac{1}{\alpha}$, since $\alpha > 1$ and $a > 0$, and $h(t) = (at+b)^{-1}$. Thus (2.7) holds.


Conversely, suppose that $k < \frac{1}{\alpha}$. Proceeding as in (i), (2.9) gives

(2.12) $h(t) = \left[\frac{1-k\alpha}{k(\alpha-1)}\,t + \frac{1}{h(0)}\right]^{-1} = (at+b)^{-1}$,

where $a = \frac{1-k\alpha}{k(\alpha-1)} > 0$, since $k < \frac{1}{\alpha}$ and $\alpha > 1$, and $b = \frac{1}{h(0)}$. Clearly, (2.12) is the failure rate function of the Pareto distribution given in (2.10).

(iii) The density function of the finite range distribution is given by

$f(t) = \frac{\beta_1}{\nu}\left(1-\frac{t}{\nu}\right)^{\beta_1-1}, \quad \beta_1 > 0,\ 0 \le t \le \nu < \infty$.

The reliability function is given by

$R(t) = \left(1-\frac{t}{\nu}\right)^{\beta_1}, \quad \beta_1 > 0,\ 0 \le t \le \nu < \infty$,

and the failure rate function by

(2.13) $h(t) = \frac{\beta_1}{\nu}\left(1-\frac{t}{\nu}\right)^{-1}$.

It follows that

$H_{\alpha}(X;t) = \frac{1}{\alpha(1-\alpha)}\log k - \frac{1}{\alpha}\log h(t)$,

where $k = \frac{\beta_1}{\alpha\beta_1-\alpha+1} > \frac{1}{\alpha}$, since $\alpha > 1$, and $h(t) = \frac{\beta_1}{\nu}\left(1-\frac{t}{\nu}\right)^{-1}$. Thus (2.7) holds.

Conversely, suppose $k > \frac{1}{\alpha}$. Proceeding as in (i), (2.9) gives

(2.14) $h(t) = h(0)\left[1 - \frac{k\alpha-1}{k(\alpha-1)}\,h(0)\,t\right]^{-1}$,


which is the failure rate function of the distribution given by (2.13), if $k > \frac{1}{\alpha}$.
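The three values of $k$ in Theorem 2.3 can be verified numerically. The sketch below (illustrative; NumPy/SciPy and the parameter values are assumptions, not from the paper) computes (2.1) by quadrature for an exponential, a Pareto and a finite range distribution and compares the result with $\frac{1}{\alpha(1-\alpha)}\log k - \frac{1}{\alpha}\log h(t)$ using the stated $k$.

```python
# Sketch checking the values of k in Theorem 2.3 (illustrative; assumes NumPy/SciPy).
import numpy as np
from scipy.integrate import quad

alpha, t = 2.0, 0.7

def H_alpha(f, R, t, upper=np.inf):
    integral, _ = quad(lambda x: f(x) ** alpha, t, upper)
    return np.log(integral / R(t) ** alpha) / (alpha * (1.0 - alpha))

def rhs(k, h):
    return np.log(k) / (alpha * (1.0 - alpha)) - np.log(h) / alpha

# (i) exponential(theta): k = 1/alpha
theta = 2.0
print(H_alpha(lambda x: np.exp(-x / theta) / theta,
              lambda s: np.exp(-s / theta), t),
      rhs(1.0 / alpha, 1.0 / theta))

# (ii) Pareto(a, b): k = 1/(a*alpha + alpha - a) < 1/alpha
a_, b_ = 0.5, 1.0
print(H_alpha(lambda x: b_ ** (1 / a_) * (a_ * x + b_) ** (-1 - 1 / a_),
              lambda s: (b_ / (a_ * s + b_)) ** (1 / a_), t),
      rhs(1.0 / (a_ * alpha + alpha - a_), 1.0 / (a_ * t + b_)))

# (iii) finite range(beta1, nu): k = beta1/(alpha*beta1 - alpha + 1) > 1/alpha
beta1, nu = 2.0, 4.0
print(H_alpha(lambda x: (beta1 / nu) * (1 - x / nu) ** (beta1 - 1),
              lambda s: (1 - s / nu) ** beta1, t, upper=nu),
      rhs(beta1 / (alpha * beta1 - alpha + 1), (beta1 / nu) / (1 - t / nu)))
```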

2.2. Discrete Case

Let $X$ be a discrete random variable taking values $x_1, x_2, \ldots, x_n$ with respective probabilities $p_1, p_2, \ldots, p_n$. The discrete residual entropy is defined as

(2.15) $H(p;j) = -\sum_{k=j}^{n} \frac{p_k}{R(j)}\log\frac{p_k}{R(j)}$.

The generalized residual entropy for the discrete case is defined as

(2.16) $H_{\alpha}(p;j) = \frac{1}{\alpha(1-\alpha)}\log\sum_{k=j}^{n}\left(\frac{p_k}{R(j)}\right)^{\alpha}$.

For $\alpha \to 1$, (2.16) reduces to (2.15).
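A small numerical sketch of the discrete measures (assuming NumPy; the probability vector and cut-off index are illustrative choices, not from the paper): it evaluates (2.16) for a five-point distribution and checks that it approaches (2.15) as $\alpha \to 1$.

```python
# Discrete sketch of (2.15) and (2.16) (illustrative; assumes NumPy).
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.25, 0.15])    # p_1, ..., p_n (an arbitrary example)
j = 2                                        # 1-based cut-off index

R_j = p[j - 1:].sum()                        # R(j) = sum_{k=j}^{n} p_k
q = p[j - 1:] / R_j                          # conditional probabilities p_k / R(j)

def H_discrete():
    return -(q * np.log(q)).sum()            # (2.15)

def H_alpha_discrete(alpha):
    return np.log((q ** alpha).sum()) / (alpha * (1.0 - alpha))   # (2.16)

for alpha in (2.0, 1.1, 1.01, 1.001):
    print(alpha, H_alpha_discrete(alpha))
print("limit (2.15):", H_discrete())
```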

Theorem 2.4. If $X$ has a discrete distribution $F(t)$ with support $(t_j : t_j < t_{j+1})$ and an increasing generalized residual entropy $H_{\alpha}(p;j)$, then $H_{\alpha}(p;j)$ uniquely determines $F(t)$.

Proof. We have

$H_{\alpha}(p;j) = \frac{1}{\alpha(1-\alpha)}\log\sum_{k=j}^{n}\left(\frac{p_k}{R(j)}\right)^{\alpha}$

and so

(2.17) $\sum_{k=j}^{n} p_k^{\alpha} = R^{\alpha}(j)\exp\left(\alpha(1-\alpha)H_{\alpha}(p;j)\right)$.


For $j+1$, we have

(2.18) $\sum_{k=j+1}^{n} p_k^{\alpha} = R^{\alpha}(j+1)\exp\left(\alpha(1-\alpha)H_{\alpha}(p;j+1)\right)$.

Subtracting (2.18) from (2.17), and using $p_j = R(j) - R(j+1)$ and $\lambda_j = \frac{R(j+1)}{R(j)}$, we have

$\exp\left(\alpha(1-\alpha)H_{\alpha}(p;j)\right) = (1-\lambda_j)^{\alpha} + \lambda_j^{\alpha}\exp\left(\alpha(1-\alpha)H_{\alpha}(p;j+1)\right)$.

Hence, $\lambda_j$ is a number in $(0,1)$ which is a solution of

(2.19) $\phi(x) = \exp\left(\alpha(1-\alpha)H_{\alpha}(p;j)\right) - (1-x)^{\alpha} - x^{\alpha}\exp\left(\alpha(1-\alpha)H_{\alpha}(p;j+1)\right) = 0$.

Differentiating both sides with respect to $x$, we have

(2.20) $\phi'(x) = \alpha(1-x)^{\alpha-1} - \alpha x^{\alpha-1}\exp\left(\alpha(1-\alpha)H_{\alpha}(p;j+1)\right)$.

Note that $\phi'(x) = 0$ gives

$x = \left[1 + \exp\left(-\alpha H_{\alpha}(p;j+1)\right)\right]^{-1} = x_j$.

Now, for $\alpha > 1$, $\phi(0) \le 0$ and $\phi(1) \le 0$, and $\phi(x)$ first increases and then decreases on $(0,1)$, with a maximum at $x_j = \left[1 + \exp\left(-\alpha H_{\alpha}(p;j+1)\right)\right]^{-1}$. So the unique solution to $\phi(x) = 0$ is given by $x = x_j$. Thus $H_{\alpha}(p;j)$ uniquely determines $F(t)$.

Theorem 2.5. A discrete uniform distribution with support $(1,2,\ldots,n)$ is characterized by the decreasing generalized discrete residual entropy

$H_{\alpha}(p;j) = \frac{1}{\alpha}\log(n-j+1), \quad j = 1,2,\ldots,n$.


Proof. In the case of a discrete uniform distribution with support $(1,2,\ldots,n)$,

$H_{\alpha}(p;j) = \frac{1}{\alpha}\log(n-j+1), \quad j = 1,2,\ldots,n$,

which is decreasing in $j$. Also,

$x_j = \left[1 + \exp\left(-\alpha H_{\alpha}(p;j+1)\right)\right]^{-1}$.

Therefore,

$\phi(x_j) = \exp\left(\alpha(1-\alpha)H_{\alpha}(p;j)\right) - (1-x_j)^{\alpha} - x_j^{\alpha}\exp\left(\alpha(1-\alpha)H_{\alpha}(p;j+1)\right) = 0$,

which proves the theorem.
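As a numerical illustration of Theorem 2.5 (assuming NumPy; `n`, `alpha` and the chosen indices are arbitrary), the sketch below compares (2.16) for the discrete uniform distribution with the closed form $\frac{1}{\alpha}\log(n-j+1)$.

```python
# Numerical illustration of Theorem 2.5 (assumes NumPy; n, alpha, j are arbitrary).
import numpy as np

n, alpha = 10, 3.0
p = np.full(n, 1.0 / n)                      # discrete uniform on {1, ..., n}

def H_alpha_discrete(j):
    q = p[j - 1:] / p[j - 1:].sum()
    return np.log((q ** alpha).sum()) / (alpha * (1.0 - alpha))   # (2.16)

for j in (1, 4, 8):
    print(j, H_alpha_discrete(j), np.log(n - j + 1) / alpha)      # columns should agree
```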


3. A New Class of Lifetime Distributions

Ebrahimi [4] defined two nonparametric classes of distributions based on the measure $H(X;t)$ as follows:

Definition 3.1. A random variable $X$ is said to have decreasing (increasing) uncertainty in residual life, DURL (IURL), if $H(X;t)$ is decreasing (increasing) in $t \ge 0$.

Definition 3.2. A non-negative random variable $X$ is said to have decreasing (increasing) uncertainty in generalized residual entropy of order $\alpha$, DUGRL (IUGRL), if $H_{\alpha}(X;t)$ is decreasing (increasing) in $t$, $t > 0$.

Thus the random variable $X$ is DUGRL (IUGRL) if $H'_{\alpha}(X;t) \le 0$ ($H'_{\alpha}(X;t) \ge 0$) for all $t > 0$.

Now we present a relationship between the new classes and the decreasing (increasing) failure rate class of lifetime distributions.

Remark 1. $R$ is said to be IFR (DFR) if $h(t)$ is increasing (decreasing) in $t$.

Theorem 3.3. If $R$ has an increasing (decreasing) failure rate, IFR (DFR), then it is also DUGRL (IUGRL).

Proof. We have

(3.1) $H'_{\alpha}(X;t) = \frac{1}{1-\alpha}\left[h(t) - \frac{1}{\alpha}h^{\alpha}(t)\exp\left(-\alpha(1-\alpha)H_{\alpha}(X;t)\right)\right]$,

which follows from (2.3). Since $R$ is IFR, by (3.1) and Remark 1, we have

$H'_{\alpha}(X;t) \le 0$,


which means that $H_{\alpha}(X;t)$ is decreasing in $t$, i.e., $R$ is DUGRL. The proof for IUGRL is similar.
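Theorem 3.3 can be illustrated numerically on an IFR example. The sketch below (illustrative, assuming NumPy/SciPy; the Weibull example with shape 2 is a chosen example, not the paper's) computes $H_{\alpha}(X;t)$ from (2.1) on a grid of $t$ values and shows that it is decreasing, as DUGRL requires.

```python
# Sketch illustrating Theorem 3.3 on an IFR example (illustrative; assumes NumPy/SciPy).
import numpy as np
from scipy.integrate import quad

alpha = 2.0
f = lambda x: 2.0 * x * np.exp(-x ** 2)      # Weibull(shape=2, scale=1) density
R = lambda t: np.exp(-t ** 2)                # survival function; h(t) = 2t is increasing (IFR)

def H_alpha(t):
    integral, _ = quad(lambda x: f(x) ** alpha, t, np.inf)
    return np.log(integral / R(t) ** alpha) / (alpha * (1.0 - alpha))

print([round(H_alpha(t), 4) for t in (0.2, 0.5, 1.0, 1.5, 2.0)])   # values should decrease
```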

Theorem 3.4. If a distribution is both DUGRL and IUGRL, so that $H_{\alpha}(X;t)$ is constant in $t$, then it must be exponential.

Proof. Since the random variable $X$ is both DUGRL and IUGRL, $H_{\alpha}(X;t)$ is constant in $t$. Differentiating with respect to $t$ and using (3.1), we get that $h(t)$ is constant, which means that the distribution is exponential.

The following lemma, which gives the value of the function $H_{\alpha}(X;t)$ under a linear transformation, will be used in proving the upcoming theorem.

Lemma 3.5. For any absolutely continuous random variable $X$, define $Z = aX + b$, where $a > 0$ and $b \ge 0$ are constants. Then

$H_{\alpha}(Z;t) = \frac{\log a}{\alpha} + H_{\alpha}\left(X; \frac{t-b}{a}\right)$.

Proof. We have $H_{\alpha}(X;t)$ from (2.1) and $Z = aX + b$, so that $f_Z(z) = \frac{1}{a}f_X\left(\frac{z-b}{a}\right)$ and $R_Z(t) = R_X\left(\frac{t-b}{a}\right)$. Substituting into (2.1) gives

$H_{\alpha}(Z;t) = \frac{\log a}{\alpha} + H_{\alpha}\left(X; \frac{t-b}{a}\right)$,

which proves the lemma.
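A numerical check of Lemma 3.5 (illustrative; NumPy/SciPy, the exponential example and the parameter values are assumptions): with $Z = aX + b$, both sides of the identity are evaluated by quadrature and should agree.

```python
# Numerical check of Lemma 3.5 with an exponential X (illustrative; assumes NumPy/SciPy).
import numpy as np
from scipy.integrate import quad

alpha, theta, a, b = 2.0, 1.5, 2.0, 1.0

def gre(f, R, t):
    """Generalized residual entropy (2.1) by numerical integration."""
    integral, _ = quad(lambda x: f(x) ** alpha, t, np.inf)
    return np.log(integral / R(t) ** alpha) / (alpha * (1.0 - alpha))

fX = lambda x: np.exp(-x / theta) / theta    # X ~ exponential(theta)
RX = lambda s: np.exp(-s / theta)
fZ = lambda z: fX((z - b) / a) / a           # Z = a*X + b
RZ = lambda s: RX((s - b) / a)

t = 3.0                                      # chosen so that (t - b)/a > 0
print(gre(fZ, RZ, t))
print(np.log(a) / alpha + gre(fX, RX, (t - b) / a))   # should match the line above
```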


Theorem 3.6. Let $X$ be an absolutely continuous random variable and $X \in$ DUGRL (IUGRL). Define $Z = aX + b$, where $a > 0$ and $b \ge 0$ are constants; then $Z \in$ DUGRL (IUGRL).

Proof. Since $X \in$ DUGRL (IUGRL), we have $H'_{\alpha}(X;t) \le 0$ ($H'_{\alpha}(X;t) \ge 0$). By applying Lemma 3.5, it follows that $Z \in$ DUGRL (IUGRL), which proves the theorem.

The next theorem gives upper (lower) bounds for the failure rate function.

Theorem 3.7. If $X$ is DUGRL (IUGRL), then

$h(t) \le (\ge)\ \alpha^{\frac{1}{\alpha-1}}\exp\left(-\alpha H_{\alpha}(X;t)\right)$.

Proof. If $X$ is DUGRL, then

$H'_{\alpha}(X;t) \le 0$,

which, by (3.1), gives

(3.2) $h(t) \le \alpha^{\frac{1}{\alpha-1}}\exp\left(-\alpha H_{\alpha}(X;t)\right)$.

Similarly, if $X$ is IUGRL, then

(3.3) $h(t) \ge \alpha^{\frac{1}{\alpha-1}}\exp\left(-\alpha H_{\alpha}(X;t)\right)$.
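A quick check of the bound on a DUGRL example (illustrative, assuming NumPy; the parameter values are arbitrary): for the uniform distribution on $(0,b)$, which is DUGRL by Theorem 2.2, the failure rate $h(t) = 1/(b-t)$ stays below $\alpha^{1/(\alpha-1)}\exp(-\alpha H_{\alpha}(X;t))$.

```python
# Checking the failure-rate bound on a DUGRL example (illustrative; assumes NumPy):
# the uniform distribution on (0, b) is DUGRL by Theorem 2.2.
import numpy as np

alpha, b = 2.0, 3.0

def H_alpha(t):
    return np.log(b - t) / alpha                     # closed form from Theorem 2.2

def bound(t):
    return alpha ** (1.0 / (alpha - 1.0)) * np.exp(-alpha * H_alpha(t))

for t in (0.5, 1.5, 2.5):
    h = 1.0 / (b - t)                                # failure rate of the uniform
    print(t, h, bound(t), h <= bound(t))             # DUGRL: h(t) stays below the bound
```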


Corollary 3.8. Let $R(t)$ be DUGRL (IUGRL); then

$R(t) \ge (\le)\ \exp\left(-\int_0^t \alpha^{\frac{1}{\alpha-1}}\exp\left(-\alpha H_{\alpha}(X;u)\right)du\right)$

for all $t \ge 0$.


References

[1] M. ASADI AND N. EBRAHIMI, Residual entropy and its characterizations in terms of hazard function and mean residual life time function, Statist. Prob. Lett., 49 (2000), 263–269.

[2] A.M. AWAD, A statistical information measure, Dirasat XIV, 12 (1987), 7–20.

[3] A.D. CRESCENZO AND M. LONGOBARDI, Entropy based measure of uncertainty in past life time distributions, J. of Applied Probability, 39 (2002), 434–440.

[4] N. EBRAHIMI, How to measure uncertainty in the life time distributions, Sankhya, 58 (1996), 48–57.

[5] N. EBRAHIMI, Testing whether life time distribution is decreasing uncertainty, J. Statist. Plann. Infer., 64 (1997), 9–19.

[6] N. EBRAHIMI AND S. KIRMANI, Some results on ordering of survival function through uncertainty, Statist. Prob. Lett., 29 (1996), 167–176.

[7] N. EBRAHIMI AND F. PELLEREY, New partial ordering of survival function based on the notion of uncertainty, Journal of Applied Probability, 32 (1995), 202–211.

[8] F. BELZUNCE, J. NAVARRO, J.M. RUIZ AND Y. AGUILA, Some results on residual entropy function, Metrika, 59 (2004), 147–161.

[9] K.R.M. NAIR AND G. RAJESH, Characterization of the probability distributions using the residual entropy function, J. Indian Statist. Assoc., 36 (1998), 157–166.


[10] R.D. GUPTA AND A.K. NANDA, α- and β-entropies and relative entropies of distributions, J. of Statistical Theory and Applications, 3 (2002), 177–190.

[11] A. RENYI, On measures of entropy and information, Proc. of the Fourth Berkeley Symposium on Math. Statist. Prob., University of California Press, Berkeley, 1 (1961), 547–561.

[12] P.G. SANKARAN AND R.P. GUPTA, Characterization of the life time distributions using measure of uncertainty, Calcutta Statistical Association Bulletin, 49 (1999), 154–166.

[13] C.E. SHANNON, A mathematical theory of communication, Bell System Technical J., 27 (1948), 379–423.
