
An analysis of risk in the West Texas Intermediate market based on the extreme value method.

1. Introduction

Risk management in financial markets has attracted more and more attention, especially since the Asian financial crisis of 1997-1998 and the global economic crisis that began in 2008, and the question of how to measure and manage market risk has become correspondingly important. Value at risk is generally used by financial analysts as the standard measure for quantifying the market risk of a portfolio. During crises such as the Asian crisis of 1997-1998, however, financial institutions incurred extreme losses that could not be estimated correctly with the parametric methods of value at risk, which are based on the assumption that asset returns are normally distributed. The traditional and widely used parametric measures of market risk, such as the variance-covariance approach, therefore require reassessment.

Alexander (2001) suggested that traders could perform a fast calculation of the impact of a proposed trade on their value at risk limit by using the variance-covariance method. But the variance-covariance method has two serious disadvantages. First, it can accommodate only portfolios that are linear with respect to the risk factors. Second, it is valid only when the portfolio profit and loss has a normal distribution. Gehin (2006) showed that returns in financial markets are not always normally distributed, so the normality assumption is inappropriate.

As noted by Bensalah (2000), investors and risk managers have become more concerned with events that occur under extreme market conditions. Extreme value theory is useful in the search for a supplemental risk measure that can describe the behaviour of losses during market crashes, because it provides a distribution that fits extreme events more appropriately. In more detail, considering extreme movements, Koedijk et al. (1990), Wagner and Marsh (2005) and Iglesias (2012) showed the advantages of fat-tailed distributions for modelling exchange rate changes. Kabundi and Muteba (2011) compared two kinds of value at risk under different scenarios and suggested that the peak over threshold method is more effective at higher quantiles. Allen et al. (2013) discussed tail-specific distributions based on extreme value theory, calculated value at risk under several models including the peak over threshold model, and demonstrated that extreme value theory can be successfully applied to financial market return series for predicting static value at risk and expected shortfall. Karmakar (2013) concluded that estimating tail-related risk measures based on extreme value theory is useful in the Indian stock market. This paper uses the generalized Pareto distribution to model the extreme losses exceeding a given threshold and obtains value at risk with the peak over threshold approach for an investor with long positions in the West Texas Intermediate market, both during the crisis (2008.1-2014.4) and before it (1999.1-2007.12). In addition, it forecasts value at risk by the traditional variance-covariance method.

The paper is organized as follows. In Section 2, the models we consider are briefly discussed. The empirical procedure and results for the West Texas Intermediate daily returns under the peak over threshold model of extreme value theory are presented in Section 3. Section 4 concludes.

2. The models

In this Section, we introduce some results that derive from extreme value theory and underlie our model. There are two main ways of estimating the value at risk of a portfolio with extreme value theory: the first uses the block maxima model, and the second uses the peak over threshold model based on threshold exceedances. The block maxima model is a traditional method that deals with the largest observations collected from large samples of identically distributed observations. The peak over threshold model is a more modern and powerful method that uses all large observations exceeding some high level, and it is usually considered the most useful for practical applications because it makes more efficient use of the (often limited) data on extreme outcomes.

In practice, the block maxima method has a major defect: it is very wasteful of data. It has therefore been largely superseded by methods based on threshold exceedances, in which we use all data that are extreme in the sense of exceeding a designated high level. We now focus on the details of the peak over threshold approach and some properties of the models. The main distributional model for exceedances over thresholds is the generalized Pareto distribution, defined by:

$$G_{\xi,\beta}(x) = \begin{cases} 1 - \left(1 + \xi x/\beta\right)^{-1/\xi}, & \xi \neq 0, \\ 1 - e^{-x/\beta}, & \xi = 0, \end{cases} \qquad (1)$$

where $\beta > 0$, and $x \ge 0$ when $\xi \ge 0$ and $0 \le x \le -\beta/\xi$ when $\xi < 0$. The parameters $\xi$ and $\beta$ are referred to, respectively, as the shape and scale parameters. The generalized Pareto distribution plays an important role in extreme value theory in modeling the excess distribution over a high threshold. This concept is defined along with the mean excess function, which will also play an important role in the theory of returns. The following are the definitions of the excess distribution over a threshold $u$ and of the mean excess function. Let $X$ be a random variable with distribution function $F$; the excess distribution over the threshold $u$ has distribution function
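As a concrete check on definition (1), here is a minimal Python sketch of the generalized Pareto CDF (the function name and the sample parameter values, chosen near the before-crisis estimates reported later in Table 2, are our own illustrations, not part of the paper):

```python
import numpy as np

def gpd_cdf(x, xi, beta):
    """CDF of the generalized Pareto distribution G_{xi,beta} of equation (1).

    For xi != 0: 1 - (1 + xi*x/beta)**(-1/xi); for xi == 0: 1 - exp(-x/beta).
    """
    x = np.asarray(x, dtype=float)
    if xi == 0.0:
        return 1.0 - np.exp(-x / beta)
    return 1.0 - (1.0 + xi * x / beta) ** (-1.0 / xi)

# The CDF starts at 0 and increases towards 1 on its support.
print(gpd_cdf(0.0, 0.19, 0.57))            # 0.0
print(gpd_cdf([1.0, 2.0, 5.0], 0.19, 0.57))
```

For $\xi \ge 0$ the support is all of $x \ge 0$; for $\xi < 0$ the formula is only valid up to $-\beta/\xi$, as stated above.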

$$F_u(x) = P(X - u \le x \mid X > u) = \frac{F(x + u) - F(u)}{1 - F(u)} \qquad (2)$$

for $0 \le x < x_F - u$, where $x_F \le \infty$ is the right endpoint of $F$. The mean excess function of a random variable $X$ with finite mean is defined by

$$e(u) = E(X - u \mid X > u) \qquad (3)$$

The excess distribution function $F_u$ describes the distribution of the excess loss over the threshold $u$, given that $u$ is exceeded. In survival analysis the excess distribution function is more commonly known as the residual life distribution function: it expresses the probability that, say, an electrical component which has functioned for $u$ units of time fails in the time period $(u, u + x]$. The mean excess function is known as the mean residual life function and gives the expected residual lifetime for components of different ages.
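The empirical counterpart of the mean excess function (3) is straightforward to compute; a sketch, using a toy loss sample rather than the WTI data:

```python
import numpy as np

def mean_excess(losses, u):
    """Empirical mean excess e(u): average of X - u over observations X > u."""
    losses = np.asarray(losses, dtype=float)
    exceedances = losses[losses > u]
    if exceedances.size == 0:
        return float("nan")
    return float(exceedances.mean() - u)

# Toy sample: exceedances over u=2 are {3, 4}, excesses {1, 2}, mean excess 1.5.
print(mean_excess([1.0, 2.0, 3.0, 4.0], 2.0))   # 1.5
```

Plotting $e(u)$ against $u$ is a standard diagnostic for threshold choice: for GPD-like tails the plot should be roughly linear above a suitable $u$.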

In fact, the generalized Pareto distribution is the natural limiting excess distribution for many underlying loss distributions. The distributions for which the excess distribution converges to a generalized Pareto distribution as the threshold is raised are exactly those for which normalized maxima converge to a generalized extreme value distribution. Moreover, the shape parameter of the limiting generalized Pareto distribution for the excesses is the same as the shape parameter of the limiting generalized extreme value distribution for the maxima. We assume a loss distribution $F$ whose normalized maxima converge to a generalized extreme value distribution; hence, for a suitably chosen high threshold $u$, we can model $F_u$ by a generalized Pareto distribution, which is formalized in the following assumption.

Assumption 1. Let $F$ be a loss distribution with right endpoint $x_F$ and assume that, for some high threshold $u$, $F_u(x) = G_{\xi,\beta}(x)$ for $0 \le x < x_F - u$, with $\xi \in \mathbb{R}$ and $\beta > 0$.

This is clearly an idealization, since in practice the excess distribution is generally not exactly generalized Pareto; nevertheless, we use Assumption 1 to carry out a number of calculations in what follows.

We now introduce the peak over threshold method. Given loss data $X_1, \ldots, X_n$ from $F$, let $N_u$ be the number of observations exceeding our threshold $u$, and relabel these observations $\tilde{X}_1, \ldots, \tilde{X}_{N_u}$ for convenience. For each of these exceedances we calculate the excess loss $Y_j = \tilde{X}_j - u$. We wish to estimate the parameters of a generalized Pareto distribution by fitting it to the $N_u$ excess losses. There are various ways of fitting the generalized Pareto distribution, including probability-weighted moments and maximum likelihood; the latter is commonly used and easy to implement if the excess data can be assumed to be realizations of independent random variables, since the joint density is then a product of marginal generalized Pareto densities.

This paper uses the method of maximum likelihood to estimate the parameters. Denote the density of the generalized Pareto distribution by $g_{\xi,\beta}$; the log-likelihood is then easily calculated to be:

$$\ln L(\xi, \beta; Y_1, \ldots, Y_{N_u}) = -N_u \ln \beta - \left(1 + \frac{1}{\xi}\right) \sum_{j=1}^{N_u} \ln\left(1 + \frac{\xi Y_j}{\beta}\right) \qquad (4)$$

which must be maximized subject to the constraints $\beta > 0$ and $1 + \xi Y_j/\beta > 0$ for all $j$. Solving the maximization problem yields a generalized Pareto model $G_{\hat{\xi},\hat{\beta}}$ for the excess distribution $F_u$.
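One way to carry out this fit in practice is numerical maximum likelihood. The sketch below uses `scipy.stats.genpareto`, whose shape parameter `c` corresponds to $\xi$ and whose scale corresponds to $\beta$; the simulated losses and the 90th-percentile threshold are illustrative assumptions standing in for the WTI negative log-returns:

```python
import numpy as np
from scipy.stats import genpareto

# Simulated heavy-tailed losses stand in for the WTI negative log-returns.
losses = genpareto.rvs(c=0.2, scale=0.6, size=5000, random_state=0)

u = np.quantile(losses, 0.90)          # threshold: 90th sample percentile
excesses = losses[losses > u] - u      # Y_j = X_j - u

# Fit G_{xi,beta} to the excesses; fixing the location at 0 matches (4).
xi_hat, _, beta_hat = genpareto.fit(excesses, floc=0)
print(xi_hat, beta_hat)
```

Fixing `floc=0` is essential: the model for the excesses in equation (4) has no location parameter, so letting scipy estimate one would fit a different model.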

In the following we describe how the generalized Pareto model for the excess losses is used to estimate the tail of the underlying loss distribution $F$, which is associated with risk measures. For the necessary theoretical calculations we again invoke Assumption 1.

Under Assumption 1, we obtain, for $x \ge u$,

$$\bar{F}(x) = P(X > x) = \bar{F}(u)\, P(X > x \mid X > u) = \bar{F}(u)\left(1 + \frac{\xi (x - u)}{\beta}\right)^{-1/\xi}, \qquad (5)$$

which provides a formula for calculating tail probabilities when $F(u)$ is known. Formula (5) may be inverted to obtain a high quantile of the underlying distribution, which is interpreted as value at risk. For $\alpha \ge F(u)$, we obtain

$$\mathrm{VaR}_\alpha = q_\alpha(F) = u + \frac{\beta}{\xi}\left(\left(\frac{1 - \alpha}{\bar{F}(u)}\right)^{-\xi} - 1\right) \qquad (6)$$

In practice, we first fit a generalized Pareto distribution to the excess losses over a threshold $u$ and replace $\xi$ and $\beta$ in formulas (4)-(6) by their estimates. Finally, to estimate the value at risk, an estimate of $\bar{F}(u)$ is required; here we take the simple empirical estimator $N_u/n$. We thus obtain the following estimate of the value at risk:

$$\widehat{\mathrm{VaR}}_\alpha = u + \frac{\hat{\beta}}{\hat{\xi}}\left(\left(\frac{n}{N_u}(1 - \alpha)\right)^{-\hat{\xi}} - 1\right) \qquad (7)$$
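Equation (7) is a one-line computation once the GPD parameters are estimated. The sketch below (function name ours) also checks it against the paper's own numbers: with the before-crisis estimates from Table 2 ($\hat{\xi} = 0.1883935$, $\hat{\beta} = 0.5699422$, $u = 1.185787$) and the paper's choice $N_u = 0.1T$, so that $n/N_u = 10$, it reproduces the before-crisis Table 3 value $\mathrm{VaR}_{0.99} \approx 2.8288$:

```python
def var_pot(u, xi, beta, ratio, alpha):
    """POT value-at-risk estimator of equation (7).

    `ratio` is n / N_u, the inverse of the fraction of data above the threshold.
    """
    return u + (beta / xi) * ((ratio * (1.0 - alpha)) ** (-xi) - 1.0)

# Before-crisis GPD estimates from Table 2, with N_u = 0.1*T as in the paper.
v99 = var_pot(u=1.185787, xi=0.1883935, beta=0.5699422, ratio=10.0, alpha=0.99)
print(v99)   # close to the Table 3 value 2.828808
```

Note that (7) is only valid for $\alpha \ge F(u)$, i.e. for quantiles beyond the threshold; at lower quantiles the empirical distribution itself should be used.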

We compare the estimate of value at risk by the peak over threshold method with that by the traditional variance-covariance method. Following Duffie and Pan (1997) and Alexander and Leigh (1997), the variance-covariance based value at risk over a time horizon $\Delta t$ is computed as:

$$\mathrm{VaR}_\alpha = Z_\alpha\, \sigma_p \sqrt{\Delta t} \qquad (8)$$

where $Z_\alpha$, $\sigma_p$ and $\Delta t$ are the standard normal quantile, the portfolio volatility and the time horizon, respectively. We use this formula to generate estimates of value at risk based on the variance-covariance method and compare them with the estimates based on the peak over threshold method. Finally, we also assess the efficiency of the two approaches by backtesting.
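A sketch of equation (8), with `scipy.stats.norm.ppf` supplying the standard normal quantile $Z_\alpha$ (a one-day horizon $\Delta t = 1$ is assumed here):

```python
from scipy.stats import norm

def var_vc(sigma_p, alpha, dt=1.0):
    """Variance-covariance VaR of equation (8): Z_alpha * sigma_p * sqrt(dt)."""
    return norm.ppf(alpha) * sigma_p * dt ** 0.5

# With unit volatility, VaR_0.95 is just the 95% normal quantile, about 1.645.
print(var_vc(1.0, 0.95))
```

In practice $\sigma_p$ would be the sample standard deviation of the return series, as reported in Table 1.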

3. Data and empirical results

This paper aims to generate different scenarios of value at risk estimation around financial crashes. For this purpose we collect daily closing prices of the West Texas Intermediate from 1 January 1999 to 14 April 2014. In order to examine the difference in market risk between the two periods before and during the crisis, the sample period is divided into two parts: the pre-crisis period (from 1 January 1999 to 31 December 2007) and the crisis period (from 2 January 2008 to 14 April 2014).

3.1. Description of data

The basic statistics are given in Table 1, and the time series plots of daily negative log-returns of the West Texas Intermediate during 1999.1-2007.12 and 2008.1-2014.4 are shown in Figure 1.

The kurtosis in Table 1 is greater than three for the return series both before and during the crisis, which suggests that the return distributions in the West Texas Intermediate market are fat tailed. The values of the Jarque-Bera statistic indicate that the returns do not follow a normal distribution.

Figure 1 shows the time series plots of daily returns of the indices during 1999.1-2007.12 and 2008.1-2014.4. The graphs exhibit the extreme behaviour of the return series over the sample period.

3.2. Value at risk-the peak over threshold approach and the variance-covariance approach

We use the maximum likelihood method to obtain point estimates of the shape parameter $\xi$ and scale parameter $\beta$ in equation (1). In addition, we consider only the right tail of the loss distribution, which represents losses for an investor with long positions in the West Texas Intermediate market. We first construct the distribution of losses for such an investor as follows. Denote by $p_t$ the closing price at time $t$ in the West Texas Intermediate market. Following Payaslioglu (2009), we construct the return as $r_t = 100 \log(p_t/p_{t-1})$, where $t = 1, 2, \ldots, T$. Because we study the loss function in this paper, we work with the negative return sequence, in other words $X_t = -r_t$, and our discussion and estimation are both based on the sequence $(X_t)$.
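The return construction can be sketched as follows (toy prices, not the WTI series):

```python
import numpy as np

def negative_log_returns(prices):
    """X_t = -r_t, where r_t = 100 * log(p_t / p_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return -100.0 * np.diff(np.log(p))

# A 1% price drop gives a positive loss of roughly 1 (in percent);
# the subsequent recovery gives the exact opposite value.
x = negative_log_returns([100.0, 99.0, 100.0])
print(x)
```

With this sign convention, large positive values of $X_t$ are the extreme losses that the POT model targets.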

We first use the negative returns to estimate the parameters in equation (1), taking $N_u = 0.1T$ following Payaslioglu (2009), and forecast the value at risk with formula (7). The results are given in Table 2.

The estimates of the generalized Pareto distribution's parameters signal heavier tails in the distribution of the West Texas Intermediate returns during the crisis than in the non-crisis period. Furthermore, the scale estimates are higher during the crisis than before it, which suggests higher volatility during the crisis, characterized by larger negative returns. These results highlight the importance of extreme value theory in managing market risk during a crisis.

We now estimate the value at risk using equation (7), which is based on the peak over threshold method. Tables 3 and 4 report the 99%, 97.5% and 95% estimates of value at risk by equation (7) and by the variance-covariance method. The results suggest that value at risk is higher during the crisis than before it for $\alpha = 0.99, 0.975, 0.95$. At the same time, the estimates based on the peak over threshold method are higher than those based on the variance-covariance method when $\alpha = 0.99$ and $\alpha = 0.975$. A possible reason is that the variance-covariance method does not capture the fat-tailed behavior of returns during market crises. But for $\alpha = 0.95$ the conclusion is reversed: the value at risk based on the variance-covariance method is higher than that obtained by the peak over threshold method. These results are consistent with Gencay and Selcuk (2004), who argued that value at risk estimates based on extreme value theory are reliable and more accurate at higher quantiles.

3.3. Backtesting

One approach used to evaluate the performance of value at risk techniques is the two-sided binomial test. The idea underlying our backtest is that the expected number of breaches $m$, in which the actual loss exceeds the forecasted value at risk, is $n(1 - \alpha)$ if the value at risk model is accurate. In other words, if the model is effective, the number of exceedances $m$ follows a binomial distribution with expectation $n(1 - \alpha)$ and variance $n\alpha(1 - \alpha)$. We use the test statistic

$$z = \frac{m - n(1 - \alpha)}{\sqrt{n\alpha(1 - \alpha)}}, \qquad (9)$$

whose distribution is approximately standard normal. On the one hand, if the number of exceedances is much larger than $n(1 - \alpha)$, the model is considered to underestimate risk. On the other hand, if the number of breaches is much smaller than the expected violations, the model is viewed as overestimating risk. The values of the test statistic $z$ and the numbers of breaches for the two-sided binomial test are given in Table 5 and Table 6.
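The binomial test statistic (9) in code (the sample figures below are toy numbers, not the paper's samples):

```python
import math

def backtest_z(m, n, alpha):
    """Two-sided binomial backtest statistic of equation (9).

    m: observed VaR breaches; n: sample size; expected breaches: n*(1 - alpha).
    """
    p = 1.0 - alpha
    return (m - n * p) / math.sqrt(n * p * (1.0 - p))

# 10 breaches in 1000 days at alpha = 0.99 matches the expectation exactly.
print(backtest_z(10, 1000, 0.99))   # 0.0
print(backtest_z(20, 1000, 0.99))   # positive: too many breaches
```

At the 5% significance level, the model is rejected when $|z| > 1.96$.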

From Table 5 and Table 6, we can see that the method based on the peak over threshold model is accepted both before and during the crisis at every confidence level ($\alpha = 0.99, 0.975, 0.95$). By contrast, the method based on the traditional variance-covariance approach with the normality assumption is rejected in both cases at confidence level $\alpha = 0.99$. When $\alpha = 0.975$, the results based on extreme value theory are superior to those based on the traditional variance-covariance approach in every case. In addition, the number of breaches of the traditional approach is closer to the expected violations in the non-crisis period only when $\alpha = 0.95$. These conclusions are consistent with the result of Kabundi and Muteba (2011) that the peak over threshold method is more effective at higher quantiles.

3.4. Discussion of results

From the empirical results of Sections 3.1, 3.2 and 3.3, we can conclude that the distribution of daily log-returns in the West Texas Intermediate market is characterized by heavy tails, reflected in excess kurtosis, and is not captured by the normal distribution, but it can be modelled through its extreme movements. The peak over threshold and variance-covariance estimates signal heavier tails in the distribution of daily log-returns in the West Texas Intermediate market during the crisis than in the non-crisis period at confidence levels 0.99, 0.975 and 0.95. These results mean that volatility during the crisis period is higher than before the crisis, suggesting that more attention needs to be paid to risk management during a crisis. Furthermore, comparing the efficiency of the peak over threshold approach with that of the variance-covariance approach shows the importance of using extreme value theory in modelling extreme market returns, especially during a crisis. A closer look at Table 5 and Table 6 also reveals that the peak over threshold estimator of value at risk is more acceptable both in the crisis and in the non-crisis period at higher confidence levels such as 0.99.

4. Conclusion

A sound framework for modeling extreme events, which can be used to quantify extreme risk situations, is provided by extreme value theory. In this study, we focus on the extremes of market risk in the oil market and use extreme value theory to model such extreme events. The data set is the daily log-returns in the West Texas Intermediate market observed from 1 January 1999 to 14 April 2014. We divided this sample period into two parts: the non-crisis period (1999.1-2007.12) and the crisis period (2008.1-2014.4).

The value at risk based on the peak over threshold model is a good alternative risk measure, especially in recessions, mainly because it relaxes the strong assumption of normally distributed returns and hence exploits the fat-tailedness and asymmetry observed in daily log-returns in the West Texas Intermediate market. We compare the estimates of value at risk by the peak over threshold model with those by the traditional variance-covariance method. During downturns, the peak over threshold based value at risk estimates are higher than those of the traditional variance-covariance approach. In addition, the method based on the peak over threshold model is accepted both before and during the crisis at every level ($\alpha = 0.99, 0.975, 0.95$), whereas the method based on the traditional variance-covariance approach with the normality assumption is rejected in both cases at $\alpha = 0.99$.

Through our empirical study, the advantages of the peak over threshold model in estimating and managing market risk are verified again, especially in the crisis period. The conclusions of the study can offer some suggestions to financial institutions and managers. From the above discussion, we can say that the peak over threshold method for estimating the value at risk of the daily log-return series in the West Texas Intermediate market is effective both during and before the crisis at higher quantiles, because this method pays more attention to the tail of the loss distribution.

Acknowledgments

The paper is supported by the National Social Science Foundation of China (14BZZ063).

References

Alexander, C., & Leigh, C. T. (1997). On the covariance matrices used in value at risk models. The Journal of Derivatives, 4(3), 50-62.

Alexander, C. (2001). Market models: a guide to financial data analysis. John Wiley & Sons, 12-15.

Allen, D. E., Singh, A. K., & Powell, R. J. (2013). EVT and tail-risk modelling: Evidence from market indices and volatility series. The North American Journal of Economics and Finance, 26, 355-369.

Bensalah, Y. (2000). Steps in applying extreme value theory to finance: a review. Bank of Canada, 45-50.

Duffie, D., & Pan, J. (1997). An overview of value at risk. The Journal of Derivatives, 4(3), 7-49.

Gehin, W. (2006). The challenge of hedge fund performance measurement: a toolbox rather than a Pandora's box. EDHEC publication, 23-30.

Gencay, R., & Selcuk, F. (2004). Extreme value theory and value-at-risk: relative performance in emerging markets. International Journal of Forecasting, 20(2), 287-303.

Iglesias, E. M. (2012). An analysis of extreme movements of exchange rates of the main currencies traded in the Foreign Exchange market. Applied Economics, 44(35), 4631-4637.

Kabundi, A., & Muteba, J. M. (2011). Extreme Value at Risk: A Scenario for Risk Management. South African Journal of Economics, 79(2), 173-183.

Karmakar, M. (2013). Estimation of tail-related risk measures in the Indian stock market: An extreme value approach. Review of Financial Economics, 22(3), 79-85.

Koedijk, K. G., Schafgans, M., & De Vries, C. G. (1990). The tail index of exchange rate returns. Journal of International Economics, 29(1), 93-108.

Payaslioglu, C. (2009). A tail index tour across foreign exchange rate regimes in Turkey. Applied Economics, 41(3), 381-397.

Pereira, C., & Ferreira, C. (2015). Identification of IT Value Management Practices and Resources in COBIT 5. RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao, (15), 17-33.

Singh, A., & Allen, D. (2013). Extreme market risk and extreme value theory. Mathematics and Computers in Simulation, 94, 310-328.

Wagner, N., & Marsh, T. A. (2005). Measuring tail thickness under GARCH and an application to extreme exchange rate changes. Journal of Empirical Finance, 12(1), 165-185.

Yi, Y., Feng, X., & Huang, Z. (2014). Estimation of Extreme Value-at-Risk: An EVT Approach for Quantile GARCH Model. Economics Letters, 124, 378-381.

Xueyan Pan (1,2),*, Guanghui Cai (1)

* Xueyan Pan, xueyanpan@sina.com

(1) School of Statistics and Mathematics, Zhejiang Gongshang University, 31008, Hangzhou, China

(2) School of Mathematics and Computer Science, Anhui Normal University, 241000, Wuhu, China

Table 1--Summary statistics

               Before crisis   During crisis

mean           -0.03986795     -0.001188147
standard       1.042079        1.087631
skew           0.5385107       -0.1151543
kurt           3.73172         5.907043
max            7.422868169     5.570574
JB statistic   1419.88         2313.78

Table 2--Estimation of parameters of GPD

         Before crisis   During crisis

[xi]     0.1883935       0.09592439
         (0.07412521)    (0.1033087)
[beta]   0.5699422       0.80526363
         (0.05636310)    (0.1048012)
u        1.185787        1.100841

Table 3--Value at risk estimates before crisis

                  VaR(POT)   VaR(V-C)

[alpha] = 0.99    2.828808   2.388176
[alpha] = 0.975   2.088671   2.002607
[alpha] = 0.95    1.607795   1.679562

Table 4--Value at risk estimates during crisis

                  VaR(POT)   VaR(V-C)

[alpha] = 0.99    3.175747   2.532992
[alpha] = 0.975   2.294801   2.130568
[alpha] = 0.95    1.677982   1.793403

Table 5--Results-backtesting (before crisis)

                          [alpha]=0.99   [alpha]=0.975   [alpha]=0.95

Expected violations       23             56              113
Violations in POT model   22             48              110
Violations in V-C model   33             56              99
Value of z, POT model     -0.1058932     -1.120267       -0.2513867
Value of z, V-C model     2.223758       -0.040404915    -1.314946

Table 6--Results-backtesting (during crisis)

                          [alpha]=0.99   [alpha]=0.975   [alpha]=0.95

Expected violations       16             40              79
Violations in POT model   18             43              80
Violations in V-C model   34             46              63
Value of z, POT model     0.555731       0.482957        0.09802383
Value of z, V-C model     4.54689        0.965914        0.965914

Author: Pan, Xueyan; Cai, Guanghui
Publication: RISTI (Revista Iberica de Sistemas e Tecnologias de Informacao)
Date: Nov 1, 2016
