vol. 9, iss. 3, art. 80, 2008

INEQUALITIES ON THE VARIANCES OF CONVEX FUNCTIONS OF RANDOM VARIABLES

CHUEN-TECK SEE AND JEREMY CHEN

National University of Singapore, 1 Business Link, Singapore 117592

EMail: see_chuenteck@yahoo.com.sg, convexset@gmail.com

Received: 09 October, 2007

Accepted: 31 July, 2008

Communicated by: T. Mills and N.S. Barnett

2000 AMS Sub. Class.: 26D15.

Key words: Convex functions, variance.

Abstract: We develop inequalities relating to the variances of convex decreasing functions of random variables using information on the functions and the distribution functions of the random variables.

Acknowledgement: The authors thank the referee for providing valuable suggestions to improve the manuscript.


Contents

1 Introduction

2 Technical Lemmas

3 Main Results

4 Applications


1. Introduction

Inequalities relating to the variances of convex functions of real-valued random variables are developed. Given a random variable $X$, we denote its expectation by $E[X]$, its variance by $\operatorname{Var}[X]$, and use $F_X$ and $F_X^{-1}$ to denote its (cumulative) distribution function and the inverse of its (cumulative) distribution function respectively. In this paper, we assume that all random variables are real-valued and non-degenerate.

One familiar and elementary inequality in probability (supposing the expectations exist) is:

(1.1) $E[1/X] \geq 1/E[X]$

where X is a non-negative random variable. This may be proven using convexity (as an application of Jensen’s inequality) or by more elementary approaches [2], [4], [5]. More generally, if one considers the expectations of convex functions of random variables, then Jensen’s inequality gives:

(1.2) $E[f(X)] \geq f(E[X]),$

where $X$ is a random variable and $f$ is convex over the (convex hull of the) range of $X$ (see [6]).
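As a quick numerical illustration (not part of the original text), the following Monte Carlo sketch checks (1.1) and (1.2) for one assumed choice of distribution and convex function, namely $X \sim \mathrm{Uniform}(1,3)$ and $f(x) = e^{-x}$.

```python
# A minimal Monte Carlo sanity check (not from the paper) of (1.1) and (1.2),
# assuming X ~ Uniform(1, 3) and the convex decreasing f(x) = exp(-x).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 3.0, size=1_000_000)  # a non-negative random variable X

# (1.1): E[1/X] >= 1/E[X]
print(np.mean(1.0 / x), ">=", 1.0 / np.mean(x))        # ~0.549 >= ~0.5

# (1.2): E[f(X)] >= f(E[X]) for convex f
print(np.mean(np.exp(-x)), ">=", np.exp(-np.mean(x)))  # ~0.159 >= ~0.135
```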

In the literature, there have been few studies on the variance of convex functions of random variables. In this note, we aim to provide some useful inequalities, in particular, for financial applications. Subsequently, we will deal with functions which are continuous, convex and decreasing. Note that $\operatorname{Var}[f(X)] = \operatorname{Var}[-f(X)]$. This means our results also apply to concave increasing functions, which characterize the utility functions of risk-averse individuals in decision theory.


2. Technical Lemmas

Lemma 2.1. Let $X$ be a random variable, and let $f, g$ be continuous functions on $\mathbb{R}$.

If $f$ is monotonically increasing and $g$ monotonically decreasing, then

(2.1) $E[f(X)g(X)] \leq E[f(X)]\,E[g(X)]$.

If $f, g$ are both monotonically increasing or both monotonically decreasing, then

(2.2) $E[f(X)g(X)] \geq E[f(X)]\,E[g(X)]$.

Moreover, in both cases, if both functions are strictly monotone, the inequality is strict (see [6] or [4]).
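For concreteness (this example is not from the paper), the following sketch checks both directions of Lemma 2.1 numerically under an assumed choice $X \sim \mathrm{Uniform}(0,1)$, with the pair $f(x)=x$, $g(x)=e^{-x}$ for (2.1) and the pair $x$, $x^2$ for (2.2).

```python
# Numerical illustration (not from the paper) of Lemma 2.1 with assumed choices:
# X ~ Uniform(0, 1), f(x) = x (increasing), g(x) = exp(-x) (decreasing).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=1_000_000)

f, g = x, np.exp(-x)
# (2.1): opposite monotonicity -> negative association
print(np.mean(f * g), "<=", np.mean(f) * np.mean(g))         # ~0.264 <= ~0.316

# (2.2): same monotonicity (both x and x**2 are increasing on [0, 1])
print(np.mean(x * x**2), ">=", np.mean(x) * np.mean(x**2))   # ~0.250 >= ~0.167
```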

Lemma 2.2. For any random variable $X$, if with probability $1$, $f(X,\cdot)$ is a differentiable, convex decreasing function on $[a,b]$ ($a < b$) and its derivative at $a$ exists and is bounded, then
\[
\frac{\partial}{\partial\varepsilon}\,E[f(X,\varepsilon)] = E\!\left[\frac{\partial}{\partial\varepsilon} f(X,\varepsilon)\right].
\]

Proof. Let $g(x,\varepsilon) = \frac{\partial}{\partial\varepsilon} f(x,\varepsilon)$. For $\varepsilon \in [a,b)$, let

(2.3) $\displaystyle m_n(x,\varepsilon) = (n+N)\left[f\!\left(x,\varepsilon + \tfrac{1}{n+N}\right) - f(x,\varepsilon)\right],$

where $N = \left\lceil \frac{2}{b-\varepsilon} \right\rceil$, and for $\varepsilon = b$, let

(2.4) $\displaystyle m_n(x,\varepsilon) = (n+N)\left[f(x,\varepsilon) - f\!\left(x,\varepsilon - \tfrac{1}{n+N}\right)\right],$


where $N = \left\lceil \frac{2}{b-a} \right\rceil$.

Clearly the sequence $\{m_n\}_{n\ge 1}$ converges pointwise to $g$. Since, with probability $1$, $f(X,\cdot)$ is convex and decreasing, and (by the hypothesis of boundedness) its derivative at $a$ is bounded in absolute value by some constant $M$, we have

$|m_n(X,\varepsilon)| \le |g(X,a)| \le M$ for all $\varepsilon \in [a,b]$.

By Lebesgue’s Dominated Convergence Theorem (see, for instance, [1]),

(2.5) $\displaystyle E\!\left[\frac{\partial}{\partial\varepsilon} f(X,\varepsilon)\right] = E[g(X,\varepsilon)] = \lim_{n\to\infty} E[m_n(X,\varepsilon)] = \frac{\partial}{\partial\varepsilon}\,E[f(X,\varepsilon)],$

and the proof is complete.
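As an illustrative check of Lemma 2.2 (not part of the paper), one can compare a finite-difference estimate of $\frac{\partial}{\partial\varepsilon}E[f(X,\varepsilon)]$ with a Monte Carlo estimate of $E[\frac{\partial}{\partial\varepsilon}f(X,\varepsilon)]$ for an assumed example $f(x,\varepsilon)=1/(x+\varepsilon)$, $X \sim \mathrm{Uniform}(0,2)$, on $[a,b]=[0.5,1.5]$.

```python
# Finite-difference sketch (not from the paper) illustrating Lemma 2.2 for the
# assumed example f(x, eps) = 1/(x + eps), X ~ Uniform(0, 2), [a, b] = [0.5, 1.5].
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 2.0, size=2_000_000)

def mean_f(eps):
    """Monte Carlo estimate of E[f(X, eps)] on the fixed sample x."""
    return np.mean(1.0 / (x + eps))

eps, h = 1.0, 1e-4
lhs = (mean_f(eps + h) - mean_f(eps - h)) / (2 * h)  # d/d(eps) E[f(X, eps)]
rhs = np.mean(-1.0 / (x + eps) ** 2)                 # E[ d/d(eps) f(X, eps) ]
print(lhs, "approximately equals", rhs)              # agree up to O(h^2) error
```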


3. Main Results

Theorem 3.1. For any random variable $X$, and function $f$ such that, with probability $1$,

1. $f(X,\cdot)$ meets the requirements of Lemma 2.2 on $[a,b]$ and is non-negative,

2. $f(\cdot,\varepsilon)$ is decreasing $\forall\,\varepsilon \in [a,b]$, and

3. $\frac{\partial f}{\partial\varepsilon}(\cdot,\varepsilon)$ is increasing $\forall\,\varepsilon \in [a,b]$,

then for $\varepsilon_1, \varepsilon_2 \in [a,b]$ with $\varepsilon_1 < \varepsilon_2$,

(3.1) $\operatorname{Var}[f(X,\varepsilon_2)] \leq \operatorname{Var}[f(X,\varepsilon_1)]$

provided the variances exist.

Moreover, if $\varepsilon_3, \varepsilon_4 \in [\varepsilon_1,\varepsilon_2]$ are such that $\varepsilon_3 < \varepsilon_4$ and, $\forall\,\hat{\varepsilon} \in [\varepsilon_3,\varepsilon_4]$, $f(\cdot,\hat{\varepsilon})$ is strictly decreasing and $\frac{\partial f}{\partial\varepsilon}(\cdot,\varepsilon)\big|_{\varepsilon=\hat{\varepsilon}}$ is strictly increasing, the above inequality is strict.

Proof. It suffices to show that $\operatorname{Var}[f(X,\varepsilon)]$ is a decreasing function of $\varepsilon$. First, note that (with probability $1$) $f(X,\cdot)^2$ is convex and decreasing since $f(X,\cdot)$ is convex decreasing and non-negative. We note that its derivative at $a$ is $2 f(X,a)\,\frac{\partial f}{\partial\varepsilon}(X,a)$ and hence $f(X,\cdot)^2$ meets the requirements of Lemma 2.2. Thus, we have

(3.2)
\[
\begin{aligned}
\frac{\partial}{\partial\varepsilon}\operatorname{Var}[f(X,\varepsilon)]
&= \frac{\partial}{\partial\varepsilon}\left( E[f(X,\varepsilon)^2] - (E[f(X,\varepsilon)])^2 \right)\\
&= E\!\left[\frac{\partial}{\partial\varepsilon} f(X,\varepsilon)^2\right] - 2\,E[f(X,\varepsilon)]\,\frac{\partial}{\partial\varepsilon}E[f(X,\varepsilon)]\\
&= E\!\left[2 f(X,\varepsilon)\,\frac{\partial}{\partial\varepsilon} f(X,\varepsilon)\right] - 2\,E[f(X,\varepsilon)]\,E\!\left[\frac{\partial}{\partial\varepsilon} f(X,\varepsilon)\right]\\
&\leq 0,
\end{aligned}
\]


where the last inequality follows by applying Lemma 2.1 to the decreasing function $f(\cdot,\varepsilon)$ and the increasing function $\frac{\partial f}{\partial\varepsilon}(\cdot,\varepsilon)$, proving the initial assertion.

If $\exists\,\varepsilon_3, \varepsilon_4 \in [\varepsilon_1,\varepsilon_2]$ such that $\varepsilon_3 < \varepsilon_4$ and, $\forall\,\hat{\varepsilon} \in [\varepsilon_3,\varepsilon_4]$, $f(\cdot,\hat{\varepsilon})$ is strictly decreasing and $\frac{\partial f}{\partial\varepsilon}(\cdot,\varepsilon)\big|_{\varepsilon=\hat{\varepsilon}}$ is strictly increasing, Lemma 2.1 gives strict inequality on $[\varepsilon_3,\varepsilon_4]$. Integrating the inequality from $\varepsilon_1$ to $\varepsilon_2$ (strict on $[\varepsilon_3,\varepsilon_4]$, non-strict elsewhere), we obtain

(3.3) $\operatorname{Var}[f(X,\varepsilon_2)] < \operatorname{Var}[f(X,\varepsilon_1)]$.

The inequality below, on the variance of the reciprocals of shifted random variables, follows immediately from Theorem 3.1.

Example 3.1. Let $X$ be a positive random variable. Then for all $q > 0$ and $\varepsilon > 0$,

(3.4) $\displaystyle \operatorname{Var}\!\left[\frac{1}{(X+\varepsilon)^q}\right] < \operatorname{Var}\!\left[\frac{1}{X^q}\right]$

provided the variances exist. Note that the theorem applies since $X > 0$ with probability $1$.
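The following Monte Carlo sketch (not from the paper) checks (3.4) for one assumed parameterization, $X \sim \mathrm{Uniform}(1,2)$, $q=2$, $\varepsilon=0.5$.

```python
# Monte Carlo check (not from the paper) of Example 3.1 under the assumed
# choices X ~ Uniform(1, 2), q = 2, eps = 0.5.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(1.0, 2.0, size=1_000_000)
q, eps = 2.0, 0.5

# Shifting a positive X by eps shrinks the variance of the reciprocal power.
print(np.var(1.0 / (x + eps) ** q), "<", np.var(1.0 / x ** q))
```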

The next result compares the variance of two different convex functions of the same random variable.

Theorem 3.2. Let $X$ be a random variable. If $f$ and $g$ are non-negative, differentiable, convex decreasing functions such that $f \leq g$ and $f' \geq g'$ over the convex hull of the range of $X$, then

(3.5) $\operatorname{Var}[f(X)] \leq \operatorname{Var}[g(X)]$

provided the variances exist. Moreover, if $0 > f' > g'$, then the above inequality is strict.


Proof. Consider the function $h$ where $h(x,\varepsilon) = \varepsilon f(x) + (1-\varepsilon) g(x)$, $\varepsilon \in [0,1]$. We observe that

1. $h(x,\cdot)$ is non-negative, linear over $[0,1]$ (hence differentiable and convex decreasing, since $\frac{\partial h}{\partial\varepsilon}(x,\varepsilon) = f(x) - g(x) \leq 0$), and meets the requirements of Lemma 2.2.

2. $h(\cdot,\varepsilon)$ is a decreasing function $\forall\,\varepsilon \in [0,1]$ (since both $f$ and $g$ are decreasing).

3. $\frac{\partial}{\partial x}\!\left(\frac{\partial h}{\partial\varepsilon}(x,\varepsilon)\right) = \frac{\partial}{\partial x}\big(f(x) - g(x)\big) = f' - g' \geq 0$. That is, $\frac{\partial h}{\partial\varepsilon}(\cdot,\varepsilon)$ is an increasing function.

Therefore, by Theorem 3.1,

(3.6) $\operatorname{Var}[f(X)] = \operatorname{Var}[h(X,1)] \leq \operatorname{Var}[h(X,0)] = \operatorname{Var}[g(X)]$.

Furthermore, if $0 > f' > g'$, then $h(\cdot,\hat{\varepsilon})$ is strictly decreasing and $\frac{\partial h}{\partial\varepsilon}(\cdot,\varepsilon)\big|_{\varepsilon=\hat{\varepsilon}}$ is strictly increasing $\forall\,\hat{\varepsilon} \in [0,1]$. The result then holds with strict inequality by Theorem 3.1.
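As an illustration of Theorem 3.2 (this example is ours, not the authors'), take the assumed pair $f(x)=\log(1+e^{-x})$ and $g(x)=e^{-x}$: both are non-negative, differentiable, convex and decreasing on $\mathbb{R}$, with $f \leq g$ (since $\log(1+t) \leq t$) and $f' = -e^{-x}/(1+e^{-x}) \geq -e^{-x} = g'$. The sketch below checks (3.5) for $X$ standard normal.

```python
# Numerical illustration (not from the paper) of Theorem 3.2 with the assumed pair
# f(x) = log(1 + exp(-x)) and g(x) = exp(-x), which satisfy f <= g and f' >= g'.
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(1_000_000)

f = np.log1p(np.exp(-x))   # softplus(-x): non-negative, convex, decreasing
g = np.exp(-x)             # non-negative, convex, decreasing
print(np.var(f), "<=", np.var(g))
```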

Given a random variable $X$, the inverse of its distribution function, $F_X^{-1}$, is well defined except on a set of measure zero, since the set of points of discontinuity of an increasing function is countable (see [3]). Given a uniform random variable $U$ on $[0,1]$, $X$ has the same distribution function as $F_X^{-1}(U)$.
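A minimal sketch (not from the paper) of this quantile representation, assuming $X \sim \mathrm{Exponential}(1)$ with quantile function $F_X^{-1}(u) = -\log(1-u)$:

```python
# Inverse-transform illustration (not from the paper): F_X^{-1}(U) has the same
# distribution as X. Assumed example: X ~ Exponential(1).
import numpy as np

rng = np.random.default_rng(5)
u = rng.uniform(0.0, 1.0, size=1_000_000)

x_via_quantile = -np.log1p(-u)                       # F_X^{-1}(U) for Exponential(1)
x_direct = rng.exponential(1.0, size=1_000_000)      # sampled directly

# The two samples have (approximately) the same moments.
print(x_via_quantile.mean(), x_direct.mean())
print(x_via_quantile.var(), x_direct.var())
```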

We now present an inequality comparing the variance of a convex function of two different random variables.

Theorem 3.3. Let $X, Y$ be non-negative random variables with inverse distribution functions $F_X^{-1}, F_Y^{-1}$ respectively. Given a non-negative convex decreasing function $g$, if $F_Y^{-1} - F_X^{-1}$ is

1. non-negative and


2. monotone decreasing on $[0,1]$,

then

(3.7) $\operatorname{Var}[g(Y)] \leq \operatorname{Var}[g(X)]$

provided the variances exist.

Moreover, if $g$ is strictly convex and strictly decreasing and either of the following holds almost everywhere:

1. $(F_Y^{-1})' - (F_X^{-1})' < 0$, or

2. $F_Y^{-1} - F_X^{-1} > 0$ and $\hat{\varepsilon}\,(F_Y^{-1})' + (1-\hat{\varepsilon})\,(F_X^{-1})' > 0$ for all $\hat{\varepsilon} \in [\varepsilon_1,\varepsilon_2] \subseteq [0,1]$ with $\varepsilon_1 < \varepsilon_2$,

then the above inequality is strict.

Proof. Consider the function $h$ where $h(u,\varepsilon) = g\!\left(F_X^{-1}(u) + \varepsilon\left[F_Y^{-1}(u) - F_X^{-1}(u)\right]\right)$, $\varepsilon \in [0,1]$. Note that $g \geq 0$, $g' \leq 0$, $g'' \geq 0$ since $g$ is non-negative, convex and decreasing; and that the inverse distribution function of a non-negative random variable is non-negative. Hence,

1. $h(u,\cdot)$ is non-negative, differentiable, convex and decreasing, and
\[
\frac{\partial h}{\partial\varepsilon}(u,\varepsilon)\Big|_{\varepsilon=0} = \left[F_Y^{-1}(u) - F_X^{-1}(u)\right] g'\!\left(F_X^{-1}(u)\right)
\]
exists and is bounded with probability $1$, so $h$ meets the requirements of Lemma 2.2.

2. $h(\cdot,\varepsilon)$ is a decreasing function $\forall\,\varepsilon \in [0,1]$.


3. $\frac{\partial h}{\partial\varepsilon}(\cdot,\varepsilon)$ is an increasing function, since

(3.8)
\[
\begin{aligned}
\frac{\partial}{\partial u}\!\left(\frac{\partial h}{\partial\varepsilon}(u,\varepsilon)\right)
&= \frac{\partial}{\partial u}\left\{\left[F_Y^{-1}(u) - F_X^{-1}(u)\right] g'\!\left(F_X^{-1}(u) + \varepsilon\left[F_Y^{-1}(u) - F_X^{-1}(u)\right]\right)\right\}\\
&= \left[(F_Y^{-1})'(u) - (F_X^{-1})'(u)\right] g'\!\left(F_X^{-1}(u) + \varepsilon\left[F_Y^{-1}(u) - F_X^{-1}(u)\right]\right)\\
&\quad + \varepsilon\,(F_Y^{-1})'(u)\left[F_Y^{-1}(u) - F_X^{-1}(u)\right] g''\!\left(F_X^{-1}(u) + \varepsilon\left[F_Y^{-1}(u) - F_X^{-1}(u)\right]\right)\\
&\quad + (1-\varepsilon)\,(F_X^{-1})'(u)\left[F_Y^{-1}(u) - F_X^{-1}(u)\right] g''\!\left(F_X^{-1}(u) + \varepsilon\left[F_Y^{-1}(u) - F_X^{-1}(u)\right]\right)\\
&\geq 0.
\end{aligned}
\]

To justify the inequality, consider (3.8): the first term is non-negative due to condition (2) and $g$ being a decreasing function ($g' \leq 0$), and the second (resp. third) term is non-negative since, by the properties of distribution functions, $(F_Y^{-1})' \geq 0$ (resp. $(F_X^{-1})' \geq 0$), condition (1) holds, and $g$ is convex ($g'' \geq 0$).

Therefore, by Theorem 3.1,

(3.9) $\operatorname{Var}[g(Y)] = \operatorname{Var}[h(U,1)] \leq \operatorname{Var}[h(U,0)] = \operatorname{Var}[g(X)]$.

If the subsidiary conditions for strict inequality are met, then, since $g' < 0$ and $g'' > 0$, it is clear that Theorem 3.1 gives strict inequality.
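The following sketch (not from the paper) illustrates Theorem 3.3 under assumed choices: $F_X^{-1}(u)=u$ (so $X \sim \mathrm{Uniform}(0,1)$), $F_Y^{-1}(u)=0.5+0.5u$ (so $Y \sim \mathrm{Uniform}(0.5,1)$), hence $F_Y^{-1}-F_X^{-1} = 0.5(1-u)$ is non-negative and decreasing on $[0,1]$, and $g(x)=1/(1+x)$ is non-negative, strictly convex and strictly decreasing, so the inequality is in fact strict.

```python
# Monte Carlo illustration (not from the paper) of Theorem 3.3, assuming
# F_X^{-1}(u) = u, F_Y^{-1}(u) = 0.5 + 0.5*u, and g(x) = 1/(1 + x).
import numpy as np

rng = np.random.default_rng(6)
u = rng.uniform(0.0, 1.0, size=1_000_000)

g = lambda t: 1.0 / (1.0 + t)
x = u               # X represented as F_X^{-1}(U)
y = 0.5 + 0.5 * u   # Y represented as F_Y^{-1}(U)
print(np.var(g(y)), "<", np.var(g(x)))
```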


4. Applications

Applications of such inequalities include comparing the variances of the present worth of financial cash flows under stochastic interest rates. Specifically, the present worth of $y$ dollars in $q$ years at an interest rate of $X$ is given by $\frac{y}{(1+X)^q}$ ($q > 0$, $X > 0$). When the interest rate $X$ increases by a positive amount $\varepsilon$, it is clear that the expected present worth decreases:
\[
E\!\left[\frac{y}{(1+X+\varepsilon)^q}\right] < E\!\left[\frac{y}{(1+X)^q}\right].
\]

Example 3.1 shows that its variance decreases as well, that is,
\[
\operatorname{Var}\!\left[\frac{y}{(1+X+\varepsilon)^q}\right] < \operatorname{Var}\!\left[\frac{y}{(1+X)^q}\right].
\]

In this example, the random variable $X$ represents the projected interest rate (which is not known with certainty), while $X + \varepsilon$ represents the interest rate should an increase of $\varepsilon$ be envisaged.
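A hypothetical numerical illustration (the figures below are assumptions, not from the paper): $y=100$ dollars, $q=10$ years, a projected rate $X \sim \mathrm{Uniform}(0.02, 0.08)$, and a contemplated increase $\varepsilon = 0.01$.

```python
# Illustrative present-worth computation (not from the paper), with assumed values
# y = 100, q = 10, X ~ Uniform(0.02, 0.08), eps = 0.01.
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0.02, 0.08, size=1_000_000)
y, q, eps = 100.0, 10.0, 0.01

pw_base = y / (1.0 + x) ** q        # present worth at the projected rate X
pw_up   = y / (1.0 + x + eps) ** q  # present worth if the rate rises by eps

print(pw_up.mean(), "<", pw_base.mean())  # expected present worth decreases
print(pw_up.var(),  "<", pw_base.var())   # and, by Example 3.1, so does its variance
```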


References

[1] P. BILLINGSLEY, Probability and Measure, 3rd Ed., John Wiley and Sons, New York, 1995.

[2] C.L. CHIANG, On the expectation of the reciprocal of a random variable, The American Statistician, 20(4) (1966), p. 28.

[3] K.L. CHUNG, A Course in Probability Theory, 3rd Ed., Academic Press, 2001.

[4] J. GURLAND, An inequality satisfied by the expectation of the reciprocal of a random variable, The American Statistician, 21(2) (1967), 24–25.

[5] S.L. SCLOVE, G. SIMONS AND J. VAN RYZIN, Further remarks on the expectation of the reciprocal of a positive random variable, The American Statistician, 21(4) (1967), 33–34.

[6] J.M. STEELE, The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities, Cambridge University Press, 2004.
