
III ENTROPY-BASED ASSET PRICING

III.3 Methodology

III.3.5 Entropy as risk measure

     

 

The sample-spacing methods build on a density estimate derived from the ordered sample $x_{n,1} \le x_{n,2} \le \dots \le x_{n,n}$:

$$\hat{f}_n(x) = \frac{m}{n \left( x_{n,\, i + \lceil m/2 \rceil} - x_{n,\, i - \lfloor m/2 \rfloor} \right)} \quad \text{if } x \in \left[ x_{n,i}, x_{n,i+1} \right), \tag{III.16}$$

where the order statistics are clamped to the sample, that is, $x_{n,j} = x_{n,1}$ if $j < 1$ and $x_{n,j} = x_{n,n}$ if $j > n$, so that $1 \le j \le n$. The parameter of the sample-spacing methods is the fixed order $m$. For practical reasons (e.g. the varying size of samples), we suggest using $m_n$, which depends on the size of the sample and is calculated by the following formula:

$$m_n = \left\lceil \frac{n}{g} \right\rceil, \tag{III.17}$$

where $g$ is the number of bins, and the braces indicate the ceiling function.
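As a sketch of how Eqs. (III.16)–(III.17) can be applied in practice, the following Python fragment estimates differential Shannon entropy by plugging the m-spacing density estimate into $H = -E[\ln f(X)]$. The clamping rule, the plug-in step and the Gaussian test sample are our own illustrative assumptions, not part of the text above.

```python
import math
import random

def m_spacing_entropy(sample, g):
    """Plug-in Shannon entropy estimate from the m-spacing density.

    A minimal sketch, assuming the density estimate of Eq. (III.16),
    f_n(x) = m / (n * (x_{i+ceil(m/2)} - x_{i-floor(m/2)})), with the
    order m chosen by the rule of Eq. (III.17), m_n = ceil(n / g).
    """
    x = sorted(sample)
    n = len(x)
    m = math.ceil(n / g)                     # Eq. (III.17)
    up, lo = math.ceil(m / 2), m // 2
    total = 0.0
    for i in range(n):
        hi = min(i + up, n - 1)              # clamp indices so 1 <= j <= n
        lw = max(i - lo, 0)
        f_hat = m / (n * (x[hi] - x[lw]))    # Eq. (III.16)
        total -= math.log(f_hat)
    return total / n                         # H ~ -E[ln f(X)]

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(10000)]
est = m_spacing_entropy(sample, g=20)
true_h = 0.5 * math.log(2 * math.pi * math.e)  # exact entropy of N(0, 1)
print(round(est, 3), round(true_h, 3))
```

For a standard normal sample the estimate should lie close to the exact value $\tfrac{1}{2}\ln(2\pi e) \approx 1.419$, up to the bias of the spacing method.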

Beirlant et al. (1997) overview several additional entropy estimation methods, such as re-substitution, splitting-data and cross-validation; however, our dissertation focuses on the most often used applications. In Subsection III.4.1, we discuss which entropy estimation is applied for further investigation.

For the sum of two assets, $A_3 = A_1 + A_2$, the random variable of return is $R_{A_3} = R_{A_1} + R_{A_2}$. We consider similar notation for weighted sums and for the addition of a constant $c$. In this subsection, we investigate the properties of the entropy-based risk $H$.

III.3.5.1 Definition

Based on our approach, the uncertainty (and entropy) of the return of a security can be interpreted as risk. A more even distribution (or higher variance) of the probability density function results in higher entropy; on the other hand, a higher probability of particular return outcomes means lower entropy (thus lower risk) of returns. As the returns are real values, we apply differential entropy to their observations. Let us recall that the differential entropy can take negative values as well; furthermore, in Subsection III.3.5.4 we show that $H(X)$ does not satisfy the axiom of positive homogeneity. For better properties of the risk function, we transform the differential entropy with the exponential function and define the continuous random variable-based risk measure $H$ as

$$H(A) = e^{H(R_A - R_F)}. \tag{III.18}$$

Based on the properties of the exponential function, $H$ takes values from the non-negative real numbers, $H \ge 0$. If the probability density function (pdf) of $R_A - R_F$ is a Dirac delta (in other words, there is only one possible outcome in the distribution), $H$ is zero, and if the pdf is flattened over the entire real axis, $H$ converges to positive infinity.
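The role of the exponential transform can be illustrated with uniform distributions (our own example, not from the text): the differential entropy of $\mathrm{Uniform}(0, w)$ is $\ln w$, which is negative for $w < 1$, while the transformed risk of Eq. (III.18), $e^{\ln w} = w$, is always positive.

```python
import math

# Illustration with uniform distributions (our own example): the differential
# entropy of Uniform(0, w) is ln(w), negative for w < 1, while the transformed
# risk of Eq. (III.18), e^{ln w} = w, is always positive.
for w in (0.1, 1.0, 10.0):
    h = math.log(w)        # differential entropy, may be negative
    risk = math.exp(h)     # entropy-based risk H, always > 0
    print(w, h, risk)
```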

III.3.5.2 Shannon entropy and standard deviation

We show that the Shannon entropy-based risk measure differs from the standard deviation only by a constant factor if the distribution of the risk premium of asset $A$ is normal. Let $X \sim N(\mu, \sigma^2)$ be a normally distributed continuous random variable with expected value $\mu$ and variance $\sigma^2$. Based on Norwich (1993), the expected Shannon entropy of random variable $X$ is

$$H_1(X) = \frac{1}{2}\ln\left(2\pi e \sigma^2\right). \tag{III.19}$$

Assuming that the risk premium $R_A - R_F$ of asset $A$ is normally distributed, the following equation can be formed based on Eq. (III.18) and Eq. (III.19):

$$H(A) = e^{\frac{1}{2}\ln\left(2\pi e \sigma_A^2\right)} = e^{\frac{1}{2}\ln\left(2\pi e\right)}\, e^{\ln \sigma_A} = \sqrt{2\pi e}\,\sigma_A, \tag{III.20}$$

where $\sigma_A$ is the standard deviation-based risk of asset $A$. From the left and right sides of Eq. (III.20) we can conclude that the Shannon entropy-based risk measure differs from the standard deviation by the constant factor $\sqrt{2\pi e}$ if the risk premium is normally distributed.
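A quick numerical check of this relationship (our own sketch): evaluating Eq. (III.18) on the normal entropy of Eq. (III.19) reproduces the closed form $\sqrt{2\pi e}\,\sigma_A$ of Eq. (III.20) for any $\sigma_A$.

```python
import math

# Check that exp(H_1(X)) from Eq. (III.19) equals sqrt(2*pi*e) * sigma,
# the closed form of Eq. (III.20).
k = math.sqrt(2 * math.pi * math.e)   # the constant factor, ~4.1327
for sigma in (0.5, 1.0, 2.0):
    h1 = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)  # Eq. (III.19)
    print(math.exp(h1), k * sigma)    # left and right side of Eq. (III.20)
```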

III.3.5.3 Coherent risk

Based on Artzner et al. (1999), a risk measure is considered coherent if it satisfies the axioms of translation invariance, subadditivity, positive homogeneity and monotonicity.

Furthermore, we are also interested in the convexity of the risk measure, because it is necessary to show whether the entropy-based risk is capable of characterizing the diversification effect. In the following subsections, we investigate whether the entropy-based risk satisfies these axioms. We show that the entropy-based risk measure satisfies the axiom of positive homogeneity under any circumstances.

We also prove that the Shannon entropy-based risk satisfies convexity and subadditivity assuming normal distributions; however, we test these axioms under non-normal distributions empirically for both the Shannon and Rényi entropies. We show that the entropy-based risk measure does not satisfy the axioms of translation invariance and monotonicity; therefore, we conclude that the entropy-based risk measure is not coherent. However, we emphasize that coherence is not required for asset pricing (neither the standard deviation nor VaR is coherent).

Let AX and AZ be two assets, where the random variable of risk premium of AX and AZ denoted by XRXRF and ZRZRF with the probability function fX

 

x and fZ

 

z , respectively. In the following subsections, we define the relation between AX and AZ, thus between X and Z, according to the axioms.

III.3.5.4 Positive homogeneity

The risk measure $\rho$ is positively homogeneous if $\rho(\lambda A) = \lambda \rho(A)$ for all $\lambda \ge 0$. It states that if the random variable of the return of $A$ is multiplied by $\lambda$, then the risk itself is also multiplied by that coefficient. If $A_Z = cA_X$ and $Z = cX$ with a non-negative constant $c$, we show that $H(A_Z) = cH(A_X)$. Let us recall the generalized differential entropy for variable $Z$, defined by the following equation:

$$H_\alpha(Z) = \frac{1}{1-\alpha}\ln \int f_Z^{\alpha}(z)\,dz. \tag{III.21}$$

Using the equality of probability density functions $f_Z(z) = f_{cX}(z) = \frac{1}{|c|} f_X\!\left(\frac{z}{c}\right)$, assuming $c \ne 0$, and substituting $z = cx$, $dz = c\,dx$, we shape the equation to

$$H_\alpha(Z) = \frac{1}{1-\alpha}\ln \int \frac{1}{|c|^{\alpha}}\, f_X^{\alpha}(x)\, c\,dx. \tag{III.22}$$

Assuming $c > 0$, we omit the absolute-value operator and form the equation to

$$H_\alpha(Z) = \frac{1}{1-\alpha}\ln \int f_X^{\alpha}(x)\, c^{1-\alpha}\,dx. \tag{III.23}$$

Separating the constant values from the integral, we get the following equation:

$$H_\alpha(Z) = \frac{1}{1-\alpha}\ln \int f_X^{\alpha}(x)\,dx + \frac{1}{1-\alpha}\ln c^{1-\alpha}. \tag{III.24}$$

Substituting $H_\alpha(X) = \frac{1}{1-\alpha}\ln \int f_X^{\alpha}(x)\,dx$, we reach the final formula

$$H_\alpha(Z) = H_\alpha(X) + \frac{1-\alpha}{1-\alpha}\ln c = H_\alpha(X) + \ln c, \tag{III.25}$$

which shows that $H_\alpha(Z)$ differs from $H_\alpha(X)$ by an additive constant $\ln c$.

For Shannon entropy – as the special case of the generalized differential entropy function in the limit $\alpha \to 1$ – we deduce the same relationship. The equation of the differential Shannon entropy function for variable $Z$ is

$$H_1(Z) = -\int f_Z(z) \ln f_Z(z)\,dz. \tag{III.26}$$

Using $f_Z(z) = \frac{1}{|c|} f_X\!\left(\frac{z}{c}\right)$, assuming $c \ne 0$, and substituting $z = cx$, $dz = c\,dx$, we shape the equation to

$$H_1(Z) = -\int \frac{1}{|c|}\, f_X(x) \ln\left(\frac{1}{|c|}\, f_X(x)\right) c\,dx. \tag{III.27}$$

Using $c > 0$, we omit the absolute-value operator and form the equation to

$$H_1(Z) = -\int f_X(x) \left( \ln f_X(x) - \ln c \right) dx. \tag{III.28}$$

By separating $f_X(x)\ln f_X(x)$ and the constant $\ln c$, we get

$$H_1(Z) = -\int f_X(x) \ln f_X(x)\,dx + \int f_X(x) \ln c\,dx. \tag{III.29}$$

Substituting $H_1(X) = -\int f_X(x) \ln f_X(x)\,dx$, we reach the final formula

$$H_1(Z) = H_1(X) + \ln c \int f_X(x)\,dx = H_1(X) + \ln c, \tag{III.30}$$

which also shows that $H_1(Z)$ differs from $H_1(X)$ by an additive constant $\ln c$.

Applying the exponential transformation on the entropy function introduced in Subsection III.3.5.1, we show that

$$H(A_Z) = e^{H(Z)} = e^{H(X) + \ln c} = c\, e^{H(X)} = c\, H(A_X) \quad \text{if } c > 0. \tag{III.31}$$

We have thus derived that $H(A_Z) = cH(A_X)$ if $c > 0$. As $cH(A_X) = 0$ if $c = 0$, we still need to show that

$$H(A_Z) = H(0 \cdot A_X) = 0 \quad \text{if } c = 0. \tag{III.32}$$

The probability function $f_Z$ of $Z = 0 \cdot X$ is a Dirac delta, because $Z$ always takes the value zero. As shown in Subsection III.3.5.1, $H(A_Z) = 0$ if the probability function of the underlying random variable of $A_Z$ is a Dirac delta; hence Eq. (III.32) is valid. Based on our deduction, we have shown that $H(cA_X) = cH(A_X)$ if $c \ge 0$; therefore, the entropy-based risk measure satisfies the axiom of positive homogeneity.

III.3.5.5 Subadditivity and convexity

The risk measure $\rho$ is subadditive if $\rho(A_X + A_Z) \le \rho(A_X) + \rho(A_Z)$ for all $A_X$ and $A_Z$. It states that the risk of the sum of two assets is equal to or less than the sum of the individual risks of assets $A_X$ and $A_Z$. The axiom of convexity states that the risk measure $\rho$ is convex if

$$\rho\left(\lambda A_X + (1-\lambda) A_Z\right) \le \lambda \rho(A_X) + (1-\lambda)\rho(A_Z)$$

for all $A_X$ and $A_Z$ and $0 \le \lambda \le 1$. It states that the risk of the portfolio created by the $\lambda$-weighted linear combination of assets $A_X$ and $A_Z$ is equal to or less than the weighted sum of the risks of $A_X$ and $A_Z$. Assuming that $\rho$ is positively homogeneous, we can easily deduce that if $\rho$ is convex then it is also subadditive. If $\rho$ is convex, then the axiom applied with $\lambda = \tfrac{1}{2}$ gives

$$\rho\left(\tfrac{1}{2}A_X + \tfrac{1}{2}A_Z\right) \le \tfrac{1}{2}\rho(A_X) + \tfrac{1}{2}\rho(A_Z). \tag{III.33}$$

Using the axiom of positive homogeneity, $\rho\left(\tfrac{1}{2}A_X + \tfrac{1}{2}A_Z\right) = \tfrac{1}{2}\rho(A_X + A_Z)$, the inequality becomes

$$\tfrac{1}{2}\rho(A_X + A_Z) \le \tfrac{1}{2}\rho(A_X) + \tfrac{1}{2}\rho(A_Z). \tag{III.34}$$

Simplifying by $\tfrac{1}{2}$, we see that $\rho$ is subadditive in this case.

We have shown that the entropy-based risk measure satisfies positive homogeneity.

Based on that, if we validate convexity, we also validate subadditivity; therefore, we focus on investigating convexity. Based on Markowitz's Modern Portfolio Theory (1952), the standard deviation is convex:

$$\sigma\left(\lambda A_X + (1-\lambda) A_Z\right) \le \lambda \sigma(A_X) + (1-\lambda)\sigma(A_Z), \tag{III.35}$$

where $\sigma$ is the standard deviation-based risk. Multiplying the inequality by $\sqrt{2\pi e}$, we get the following:

$$\sqrt{2\pi e}\,\sigma\left(\lambda A_X + (1-\lambda) A_Z\right) \le \lambda\sqrt{2\pi e}\,\sigma(A_X) + (1-\lambda)\sqrt{2\pi e}\,\sigma(A_Z). \tag{III.36}$$

As the Shannon entropy-based risk measure differs from the standard deviation by the constant coefficient, $H(A) = \sqrt{2\pi e}\,\sigma_A$, if the distribution of the risk premium of $A$ is normal, we see that

$$H\left(\lambda A_X + (1-\lambda) A_Z\right) \le \lambda H(A_X) + (1-\lambda) H(A_Z). \tag{III.37}$$

Based on that, we deduce that the Shannon entropy-based risk satisfies convexity and subadditivity if the distributions of $A_X$ and $A_Z$ and their joint probability distribution are normal. For Rényi entropy on normal distributions and for non-normally distributed random variables, we cannot make such a deduction; however, our hypothesis is that entropy-based risk measures satisfy convexity even if the risk premiums are not normally distributed. As the analytical proof is a difficult mathematical problem, we do not attempt to derive it in this dissertation; instead, we validate it empirically in Subsection III.4.3.
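Eq. (III.37) can be illustrated on simulated data, using the normal-case closed form $H(A) = \sqrt{2\pi e}\,\sigma_A$ of Eq. (III.20) with sample standard deviations; the correlated sample construction below is our own illustrative assumption.

```python
import math
import random

# Empirical illustration of Eq. (III.37) on simulated correlated normal
# risk premiums, via the closed form H(A) = sqrt(2*pi*e) * sigma_A.
random.seed(2)
k = math.sqrt(2 * math.pi * math.e)
x = [random.gauss(0.01, 0.04) for _ in range(10000)]
z = [0.5 * v + random.gauss(0.005, 0.03) for v in x]   # correlated with x

def sd(s):
    mu = sum(s) / len(s)
    return math.sqrt(sum((v - mu) ** 2 for v in s) / len(s))

lam = 0.3
port = [lam * a + (1 - lam) * b for a, b in zip(x, z)]
lhs = k * sd(port)                             # risk of the weighted portfolio
rhs = lam * k * sd(x) + (1 - lam) * k * sd(z)  # weighted sum of risks
print(lhs <= rhs)                              # diversification does not raise risk
```

The inequality holds for any sample here, because the sample standard deviation is a seminorm and therefore satisfies the triangle inequality.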

III.3.5.6 Translation invariance

The risk measure is translation invariant if

A C

 

A c, where C is a risk free asset with constant return of c. It states that if we increase (or decrease) the return of asset A by fixed risk-free return, the risk decreases (or increases) by this constant. Let us define AZAXC and ZXc. Let us recall the generalized differential entropy for variable Z that is defined in the following equation:

 

1 ln

 

.

1 Z

H Z f z dz

 

(III.38)

Using $dz = dx$ and $f_Z(z) = f_Z(x + c) = f_X(x + c - c) = f_X(x)$ if $Z = X + c$ for any real value $c$, we see that

$$H_\alpha(Z) = \frac{1}{1-\alpha}\ln \int f_X^{\alpha}(x)\,dx = H_\alpha(X). \tag{III.39}$$

It can be easily shown that $H_1(Z) = H_1(X)$ also holds for the Shannon entropy of $Z$, as a special case of the generalized entropy function. Based on Eq. (III.39), we show that

$$H(A + C) = H(A), \tag{III.40}$$

which does not satisfy the axiom of translation invariance.
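Eq. (III.40) can also be verified numerically: shifting a sample by a constant leaves every spacing, and hence any spacing-based entropy estimate, unchanged. The Vasicek estimator is assumed here for the check (our own choice).

```python
import math
import random

def vasicek_entropy(sample, m=50):
    # Vasicek m-spacing estimator, assumed here for the numerical check
    x = sorted(sample)
    n = len(x)
    return sum(
        math.log(n * (x[min(i + m, n - 1)] - x[max(i - m, 0)]) / (2 * m))
        for i in range(n)
    ) / n

random.seed(3)
x = [random.gauss(0.0, 0.05) for _ in range(2000)]
c = 0.02                                  # constant risk-free shift
h_x = vasicek_entropy(x)
h_z = vasicek_entropy([v + c for v in x])
print(math.exp(h_x), math.exp(h_z))       # Eq. (III.40): H(A + C) = H(A)
```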

III.3.5.7 Monotonicity

The risk measure $\rho$ is monotone if $\rho(A_Z) \le \rho(A_X)$ whenever $A_X \le A_Z$, for all $A_X$ and $A_Z$. We interpret this axiom as follows: if the return of $A_Z$ is equal to or greater than that of $A_X$, then the risk of $A_Z$ cannot be greater than the risk of $A_X$. Based on the contradiction of translation invariance and the confirmation of positive homogeneity, a counterexample can easily be found to show that the entropy-based risk does not satisfy the axiom of monotonicity. Assume that $Z = cX$, $X \ge 0$, where $c > 1$. In this case, $Z$ is always at least as good as $X$. Although $E(Z) \ge E(X)$, the entropy-based risk measure is independent of the expected value. In the previous subsection, we have shown that $H(cX) = cH(X)$ if $c \ge 0$; therefore, we have an example where $Z \ge X$ and

$$H(Z) > H(X),$$

which does not satisfy the axiom of monotonicity.
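The counterexample can be made concrete with uniform distributions (our own illustrative choice): take $X \sim \mathrm{Uniform}(0, 1)$ and $Z = 2X \sim \mathrm{Uniform}(0, 2)$, so $Z \ge X$ pathwise, yet the entropy-based risk of $Z$ is larger.

```python
import math

# Counterexample sketch: X ~ Uniform(0, 1) and Z = 2X ~ Uniform(0, 2),
# so Z >= X pathwise. The differential entropy of Uniform(0, w) is ln(w),
# hence the entropy-based risk is H = e^{ln w} = w.
risk_x = math.exp(math.log(1.0))   # H(A_X) = 1
risk_z = math.exp(math.log(2.0))   # H(A_Z) = 2 = c * H(A_X) with c = 2
print(risk_z > risk_x)             # Z dominates X, yet its risk is higher
```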