PERIODICA POLYTECHNICA SER. CIVIL ENG. VOL. 36, NO. 4, PP. 455-458 (1992)

AN INVERSE MARKOV-CHEBYSHEV INEQUALITY

V. K. ROHATGI* and Gábor J. SZÉKELY¹

*Department of Mathematics and Statistics, BGSU, USA
¹Department of Mathematics, Faculty of Civil Engineering, Technical University, H-1521 Budapest

e-mail: H3361SZE@ELLA.HU

Received: December 1, 1992

Suppose that X is an arbitrary non-negative random variable with three given moments E(X), E(X^2) and E(X^3). Lower bounds will be given for the tail probabilities P(X > a).

Keywords: Markov-Chebyshev inequality, moment problem.

One of the most classical and most investigated classes of probability inequalities gives bounds on the expected value Ef(X) given Eg_i(X) = c_i, i = 1, 2, ..., m, where X is a real-valued random variable, c_i ∈ R, and f, g_i : R → R are such that the expectations above exist. In case g_i(x) = x^i this is a modification of the moment problem (see SHOHAT and TAMARKIN (1943), Ch. III). The best known special case for X ≥ 0 is

P(X ≥ a) ≤ Eg(X)/g(a)

(m = 1, f(x) = I_a(x), the indicator function of [a, ∞)), which is Markov's inequality for g(x) = x and Chebyshev's inequality for g(x) = x^2. These inequalities can be found in most introductory texts (for more information on their history and recent advances see the References). On the other hand, it is hard to find lower bounds on P(X ≥ a) in the literature. LOÈVE (1977, p. 159) gives one for bounded random variables: if P(0 ≤ X ≤ c) = 1, then P(X ≥ a) ≥ [Eg(X) - g(a)]/c. If only a > E(X) > 0 is given, then clearly the best lower bound is the trivial one (= 0). The same holds if both a > E(X) > 0 and E(X^2) (or Var X) are given. In case a < E(X) = 1, FELLER (1966, p. 152) provides the inequality P(X > a) ≥ (1 - a)^2/E(X^2). To get nontrivial bounds in the general case, suppose that E(X), E(X^2) and E(X^3) are given and we seek a Markov-Chebyshev type lower bound of the form

P(X > a) ≥ c_0 + c_1 E(X)/a + c_2 E(X^2)/a^2 + c_3 E(X^3)/a^3.    (1)

¹Research supported by the Hungarian National Foundation for Scientific Research, Grant No. 1905.


THEOREM. If X ≥ 0 and E(X^3) is finite, then

P(X > a) ≥ sup_{α > 1} [(2α - 3α^2) E(X)/a + (3α^2 - 1) E(X^2)/a^2 + (1 - 2α) E(X^3)/a^3] / [α^2 (1 - α)^2],    (2)

and if there exists a random variable X supported on {0, a, b} for some b > a with the prescribed first, second and third moments, then for this random variable X (2) is an equality (thus in many cases (2) cannot be improved).
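Assuming only the statement above, the right-hand side of (2) can be evaluated numerically by a grid search over α > 1; this is a sketch only, and the function name and search grid are my own choices:

```python
# Evaluate the lower bound (2) for P(X > a) by a grid search over alpha > 1.
# Sketch only: the function name and the search grid are my own choices.

def inverse_markov_chebyshev_bound(m1, m2, m3, a):
    """Lower bound on P(X > a) for X >= 0 with E(X^i) = m_i, i = 1, 2, 3."""
    best = 0.0  # the trivial lower bound is always valid
    for k in range(1, 20000):
        al = 1.0 + 0.001 * k  # alpha ranges over (1, 21)
        num = ((2 * al - 3 * al**2) * m1 / a
               + (3 * al**2 - 1) * m2 / a**2
               + (1 - 2 * al) * m3 / a**3)
        den = al**2 * (1 - al)**2
        best = max(best, num / den)
    return best

# Uniform(0,1) moments (1/2, 1/3, 1/4) at a = 1/2: the supremum is 1/6,
# attained at alpha = 2 (i.e. b = 1).
print(round(inverse_markov_chebyshev_bound(1/2, 1/3, 1/4, 1/2), 4))
```

For the moments of the exponential distribution with mean 1 (namely 1, 2, 6) at a = 1, the same search yields 1/12, with the supremum attained at α = 4.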

REMARKS. Observe that on the right-hand side of (2) the sum of the coefficients (2α - 3α^2) + (3α^2 - 1) + (1 - 2α) is 0.

The explicit value of the best α is not simple: it is a solution of a cubic equation and depends on E(X^i), i = 1, 2, 3, and on a. However, we need not use the best value. E.g., if we choose α = 2 we get the simple nontrivial (but not necessarily best) bound

P(X > a) ≥ -2 E(X)/a + (11/4) E(X^2)/a^2 - (3/4) E(X^3)/a^3.    (3)
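The α = 2 case is easy to check numerically. A minimal sketch (the function name is my own; the coefficients follow from substituting α = 2 into (2) and dividing by α^2(1 - α)^2 = 4):

```python
# Bound obtained from (2) at the fixed choice alpha = 2: the coefficients
# (2a - 3a^2), (3a^2 - 1), (1 - 2a) divided by alpha^2*(1 - alpha)^2 = 4
# reduce to -2, 11/4 and -3/4. The function name is my own.

def simple_bound(m1, m2, m3, a):
    """P(X > a) >= -2*m1/a + (11/4)*m2/a**2 - (3/4)*m3/a**3 for X >= 0."""
    return -2 * m1 / a + 2.75 * m2 / a**2 - 0.75 * m3 / a**3

# Uniform(0,1) moments at a = 1/2: the bound evaluates to 1/6.
print(round(simple_bound(1/2, 1/3, 1/4, 1/2), 4))
```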

THE PROOF OF THE THEOREM. If (1) holds for all X ≥ 0 with finite E(X^3), then it surely holds for all random variables degenerate at a point x ≥ 0. Thus for the indicator function I_a(x) of [a, ∞) we must have

I_a(x) ≥ p(x) := c_0 + c_1 x/a + c_2 x^2/a^2 + c_3 x^3/a^3 for all x ≥ 0.    (4)

Since p(x) ≤ 0 on (0, a) and p is bounded from above on (a, ∞), there exists an x_0 ≤ 0 such that p(x_0) = 0. If p(x) < 0 on [a, ∞), then (4) does not give any nontrivial bound. Therefore there must exist an x_1 ≥ a such that p(x_1) = 0. Denote by b ∈ (a, ∞) the unique number where p(x) takes its maximum in (a, ∞). To get the best possible lower bounds we may suppose that p(b) = 1. For simplicity put x_0 = 0 and x_1 = a. Then we have the following conditions:

p(0) = p(a) = 0, p(b) = 1, p'(b) = 0.

Using the notation α = b/a we get


p_α(x) = [(2α - 3α^2) x/a + (3α^2 - 1) x^2/a^2 + (1 - 2α) x^3/a^3] / [α^2 (1 - α)^2],

therefore I_a(x) ≥ p_α(x) for every α > 1. Taking expectations on both sides and then the supremum over α > 1 on the right-hand side, we get (2). We get equality in (2) if X is supported on {0, a, b}.
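The pointwise inequality I_a(x) ≥ p_α(x) at the heart of the proof can be spot-checked numerically. A sketch with arbitrarily chosen a and α (the names and the grid are mine; p_α is transcribed from the theorem's coefficients):

```python
# Spot-check the pointwise inequality I_a(x) >= p_alpha(x) on a grid.
# Sketch with arbitrarily chosen a and alpha; p_alpha is transcribed from
# the coefficients appearing in the theorem.

def p_alpha(x, a, al):
    t = x / a
    return ((2 * al - 3 * al**2) * t
            + (3 * al**2 - 1) * t**2
            + (1 - 2 * al) * t**3) / (al**2 * (1 - al)**2)

a, al = 1.0, 2.5
b = al * a                                   # p_alpha attains its maximum 1 at b
grid = [0.01 * k for k in range(0, 2001)]    # x in [0, 20]
violations = [x for x in grid
              if p_alpha(x, a, al) > (1.0 if x > a else 0.0) + 1e-12]
print(len(violations))                       # 0: p_alpha never exceeds the indicator
```

Since p_α stays below the indicator of (a, ∞) on the whole grid, taking expectations of both sides for any non-negative X reproduces the lower bound, as in the proof.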

REMARK. The restriction X ≥ 0 is essential. If X may take any x ∈ R, then (4) cannot hold, since on R the right-hand side of (4) is not bounded from above. Therefore nontrivial lower bounds on P(|X| > a) require at least four moments E(X^i), i = 1, 2, 3, 4.

EXAMPLES. 1. If X is a random variable having the same moments E(X^i), i = 1, 2, 3, as the uniform random variable on (0, 1), then E(X^i) = (i + 1)^{-1} and thus (3) gives P(X > 1/2) ≥ 1/6. This bound is sharp and is achieved for the random variable with P(X = 0) = 1/6, P(X = 1/2) = 2/3 and P(X = 1) = 1/6. Loève's bound with g(x) = x^3 and c = 1 is 1/8.
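The extremal distribution of Example 1 can be verified with exact rational arithmetic; a sketch using Python's fractions module (the check itself is mine, not from the paper):

```python
# Verify the extremal distribution in Example 1 with exact rational arithmetic:
# P(X=0) = 1/6, P(X=1/2) = 2/3, P(X=1) = 1/6 has the first three moments of
# Uniform(0,1) and tail mass P(X > 1/2) = 1/6. The check itself is mine.
from fractions import Fraction as F

atoms = {F(0): F(1, 6), F(1, 2): F(2, 3), F(1): F(1, 6)}
moments = [sum(p * x**i for x, p in atoms.items()) for i in (1, 2, 3)]
tail = sum(p for x, p in atoms.items() if x > F(1, 2))
print(moments == [F(1, 2), F(1, 3), F(1, 4)], tail == F(1, 6))
```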

2. If X is a random variable having the same moments as the exponential distribution with mean 1/λ, then EX^2 = 2/λ^2 and EX^3 = 6/λ^3. Thus the lower bound (2) for P(X > 1/(cλ)) is 1/12 for c = 1, 27/170 for c = 2, and 125/688 for c = 3. For the exponentially distributed X the corresponding exact probabilities are e^{-1} = .3679, e^{-1/2} = .6065, and e^{-1/3} = .7165. Loève's inequality does not cover this case.
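The three-point extremal distributions behind these values can be recomputed exactly; a sketch (the solver and all names are mine) assuming λ = 1:

```python
# Recompute the three-point extremal distributions of Example 2 exactly
# (lam = 1; solver and names are mine). X is supported on {0, a, b} with
# P(X = a) = p1 and P(X = b) = p2 matching the moments m1, m2, m3.
from fractions import Fraction as F

def tail_on_three_points(m1, m2, m3, a):
    # p1*a   + p2*b   = m1
    # p1*a^2 + p2*b^2 = m2
    # p1*a^3 + p2*b^3 = m3
    # Eliminating p1 gives p2*b*(b - a) = m2 - a*m1 and
    # p2*b^2*(b - a) = m3 - a*m2, so b is the ratio of the two right sides.
    b = (m3 - a * m2) / (m2 - a * m1)
    p2 = (m2 - a * m1) / (b * (b - a))
    return p2  # p2 = P(X > a)

m1, m2, m3 = F(1), F(2), F(6)  # exponential moments with lam = 1
print([tail_on_three_points(m1, m2, m3, F(1, c)) for c in (1, 2, 3)])
```

Running this reproduces the rational values 1/12, 27/170 and 125/688 for a = 1, 1/2, 1/3 respectively, with supports b = 4, 10/3 and 16/5.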

References

CHEBYSHEV, P. L. (1866): On Mean Values (in Russian). Mat. Sbornik Vol. 2, pp. 1-9 (published simultaneously in Liouville's J. Math. Pures Appl. (2) Vol. 12, pp. 177-184 (1867)).

DHARMADHIKARI, S. W. - JOAG-DEV, K. (1983): The Gauss-Tchebyshev Inequality for Unimodal Distributions. Teor. Verojatn. i Primen., pp. 817-820.

FELLER, W. (1966): An Introduction to Probability Theory and Its Applications. Wiley, New York.

HEYDE, C. C. - SENETA, E. (1977): Bienaymé: Statistical Theory Anticipated. Springer, New York.

KARLIN, S. - SHAPLEY, L. S. (1953): Geometry of Moment Spaces. Mem. Amer. Math. Soc. No. 12.

KREIN, M. G. (1951): The Ideas of P. L. Chebyshev and A. A. Markov in the Theory of Limiting Values of Integrals and their Further Development. Uspehi Mat. Nauk (N.S.) Vol. 44, pp. 3-120.

LOÈVE, M. (1977): Probability Theory (4th edition). Springer, New York.

MARKOV, A. A. (1913): The Calculus of Probabilities (in Russian). Moscow, Gosizdat.

ROYDEN, H. L. (1953): Bounds on a Distribution Function when its First n Moments are Given. Ann. Math. Statist. Vol. 24, pp. 361-376.

SHOHAT, J. - TAMARKIN, J. (1943): The Problem of Moments. Amer. Math. Soc.

ULIN, B. (1953): An Extremal Problem in Mathematical Statistics. Scand. Aktuar. Tidskr., pp. 158-167.

WALD, A. (1938): Generalization of the Inequality of Markoff. Ann. Math. Stat. Vol. 9, pp. 244-255.

WALD, A. (1939): Limits of a Distribution Function Determined by Absolute Moments and Inequalities Satisfied by Absolute Moments. Trans. Amer. Math. Soc. Vol. 46, pp. 280-306.
