# A GARCH-VaR Investigation on the Brazilian Sectoral Stock Indices

1. Introduction

The advances in theory and application of risk management in stock markets have received significant attention from academics and practitioners (e.g., Webby et al. (2007), Bi and Giles (2009), Candelon et al. (2010), Gaglianone et al. (2011), Asfaha et al. (2014)) as well as from regulatory agencies (Basel Committee on Banking Supervision, 2009, 2013). In Brazil, for example, risk measures play central roles in several financial applications, including pricing and asset allocation, among others (see Banco Central do Brasil (2009)). One widely used measure to quantify and regulate risk is the Value-at-Risk (VaR), an indicator of the extremes within which the trading returns of an investment in a specific asset or portfolio fluctuate (Giot and Laurent, 2004). In other words, the VaR represents a potential extreme capital loss: the investment loss is expected to exceed the VaR only with a small probability. VaR can be seen as a natural approach to understanding volatility, given that it directly depends on the forecast of the conditional standard deviation.

The literature has shown that the family of GARCH models (see, e.g., Francq and Zakoian (2011), Bollerslev et al. (1994), Ghysels et al. (1996), Engle and Patton (2001), Hansen and Lunde (2005)) captures many empirical regularities associated with the volatility of financial asset returns, such as leptokurtic distributions, volatility clustering, leverage effects, persistence and asymmetric volatility, among others. Using the conditional mean and conditional variance obtained from the estimated GARCH-family models, the quantiles of the conditional distribution can be easily obtained for the calculation of VaR (Tsay, 2005).

This paper proposes a GARCH approach to evaluate the most important Brazilian sectoral stock indices from a Value-at-Risk point of view. To this end, we estimated the $VaR_{99\%}$ (1% VaR) and the $VaR_{1\%}$ (99% VaR) using the most common GARCH specifications, namely the Normal-GARCH (NGARCH), the tGARCH, and the eGARCH (Nelson, 1991a). Although new strategies to estimate VaR have been proposed (e.g., Chernozhukov (2005), Paolella and Polak (2015)), we understand that the usual GARCH specifications are enough to generate reliable estimates while keeping the implementation simple and the computational effort modest. In our study, we highlighted the VaR measure as a risk indicator able to help investors make portfolio selections in the Brazilian stock market.

Beyond this introduction, the paper is organized as follows. Section 2 provides a theoretical background of the methods used. Section 3 presents a brief motivation for our modeling strategy and the results of our VaR analysis on the Brazilian sectoral stock indices. Finally, Section 4 presents the conclusions of the paper.

2. Theoretical Background

* Volatility measure: Volatility can be informally defined as a measure of fluctuation in a given stochastic process, for example, the time series of an asset's prices (Brockwell and Davis, 2002). The simplest method to estimate volatility is to evaluate the standard deviation of a sequence of stock returns (known as historical volatility). Historical volatility is efficient only when the returns are normally distributed. However, much empirical data indicate that returns are generally non-normally distributed, with the presence of clusters, fat tails and nonlinear patterns (Cont, 2001, Engle and Patton, 2001, Danielsson, 2011). Volatility models can provide both conditional and unconditional volatility estimates. Unconditional volatility is simply the sample standard deviation ($\hat{\sigma}$) over the entire time range. Conditional volatility is the standard deviation estimate for a specific point in time given previous values; the number of previous values used in the model may vary. In this case, it is common to use parametric statistical modeling to estimate the standard deviation at period $t$ ($\hat{\sigma}_t$). Morgan (1996) introduced the Exponentially Weighted Moving Average (EWMA) models and the RiskMetrics[TM] software to evaluate asset returns. EWMA models are non-parametric: no input besides the observation window is needed. This makes implementation easy and facilitates comprehension, but limits their power to deal with the challenges imposed by real volatility patterns such as non-normality and clustering (Danielsson, 2011).
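To make the EWMA recursion concrete, the conditional variance can be sketched as $\sigma^2_t = \lambda \sigma^2_{t-1} + (1-\lambda) r^2_{t-1}$. The decay factor $\lambda = 0.94$ below is RiskMetrics' conventional choice for daily data, and the toy return series is purely illustrative, not market data:

```python
import math

def ewma_volatility(returns, lam=0.94):
    """Exponentially Weighted Moving Average conditional volatility.

    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2,
    initialized with the mean of the squared (demeaned) returns.
    """
    sigma2 = sum(r * r for r in returns) / len(returns)
    vols = []
    for r in returns:
        vols.append(math.sqrt(sigma2))             # volatility known at time t
        sigma2 = lam * sigma2 + (1 - lam) * r * r  # update for t + 1
    return vols

# toy demeaned-return series (illustrative only)
rets = [0.01, -0.02, 0.015, -0.03, 0.005, 0.02, -0.01]
print(ewma_volatility(rets))
```

Each observation's weight decays geometrically with age, which is why the estimate reacts quickly to recent shocks but forgets old ones.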

A parametric representation of this type of series (in our case, the return series) is simpler. It considers the demeaned return $R^{dm}_t$ as the shock or innovation of the asset returns (the return residuals with respect to a mean process). The shocks are split into a stochastic piece $z_t$ and a time-dependent standard deviation $\sigma_t$, which drives the volatility process, so that the volatility equation is $R^{dm}_t = u_t = \sigma_t z_t$. That is, conditional heteroskedasticity models are centered on the dynamics of $\sigma_t$, with the random variable $z_t$ being a strong white noise process (Tsay, 2005). Using the Autoregressive Conditional Heteroskedasticity (ARCH) approach (Engle, 1982), the conditional variance $\sigma^2_t$ can be modelled by

$$\sigma^2_t = \gamma_0 + \gamma_1 (R^{dm}_{t-1})^2 + \cdots + \gamma_q (R^{dm}_{t-q})^2, \quad (1)$$

where $\gamma_i > 0$, $i = 0, \dots, q$. The Generalized ARCH (GARCH) model was proposed by Bollerslev (1986) as an alternative to equation (1). The modeling strategy in a GARCH(p, q) is given by

$$\sigma^2_t = \gamma_0 + A(\omega)(R^{dm}_t)^2 + B(\omega)\sigma^2_t, \quad (2)$$

where $A(\omega) = \gamma_1 \omega^1 + \cdots + \gamma_q \omega^q$ and $B(\omega) = \beta_1 \omega^1 + \cdots + \beta_p \omega^p$ are polynomials of orders $q$ and $p$ from the autoregressive and the moving average components of the GARCH(p, q), respectively, and $\omega$ denotes the lag operator. In the eGARCH(p, q) (Nelson, 1991a) there is an asymmetric function for the conditional variance depending on the lagged disturbances $\epsilon_{t-i}$, according to the equation

$$\ln(\sigma^2_t) = \gamma_0 + \sum_{i=1}^{q}\left[\gamma_i\,\epsilon_{t-i} + \alpha_i\left(|\epsilon_{t-i}| - E(|\epsilon_{t-i}|)\right)\right] + \sum_{j=1}^{p}\beta_j \ln(\sigma^2_{t-j}), \quad (3)$$

where $E(|\epsilon_t|)$ denotes the expected absolute value of $\epsilon_t$. The parameters $\gamma_i$ and $\alpha_i$ capture, respectively, the sign effect and the size effect of the asymmetric relation between stock returns and volatility. In financial assets (especially in high-frequency data), the distribution of returns is rarely well fitted by a normal distribution: extreme returns occur more frequently than expected under normality (fat tails), and extreme negative returns are more frequent than extreme positive ones (negative asymmetry); there are also volatility clusters and calendar effects (Schwert, 2003, Francq and Zakoian, 2011) to be considered.

Different variations of GARCH models can be found in the literature (Angelidis et al., 2004). However, in many empirical applications the GARCH(1,1) is generally mentioned as the one that outperforms the others (Hansen and Lunde, 2005). Furthermore, it is computationally convenient and widely used by specialists to model the volatility of daily returns. For these reasons, and owing to its robust performance, overall simplicity and flexibility, the GARCH(1,1) specification was used in our work to analyze VaR on the Brazilian sectoral stock indices. The fact that GARCH enables the amplitude of the demeaned returns to depend upon previous values is of paramount importance, since it effectively models volatility clustering (Francq and Zakoian, 2011).
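A minimal GARCH(1,1) conditional-variance filter can be sketched as follows; the parameter values $\gamma_0$, $\gamma_1$, $\beta_1$ and the return series are illustrative assumptions, not estimates from the paper:

```python
import math

def garch11_volatility(returns, gamma0, gamma1, beta1):
    """Filter conditional volatilities under GARCH(1,1):

    sigma2_t = gamma0 + gamma1 * r_{t-1}^2 + beta1 * sigma2_{t-1},
    started from the unconditional variance gamma0 / (1 - gamma1 - beta1).
    """
    sigma2 = gamma0 / (1.0 - gamma1 - beta1)
    vols = []
    for r in returns:
        vols.append(math.sqrt(sigma2))
        sigma2 = gamma0 + gamma1 * r * r + beta1 * sigma2
    return vols

# illustrative parameters with gamma1 + beta1 < 1 (covariance stationarity)
rets = [0.012, -0.034, 0.021, -0.008, 0.05, -0.041]
print(garch11_volatility(rets, gamma0=1e-6, gamma1=0.08, beta1=0.90))
```

In practice the three parameters are fitted by maximum likelihood; the recursion above is what generates the $\hat{\sigma}_t$ series used in a GARCH-VaR once they are known.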

* Value-at-Risk (VaR) measure and its GARCH estimation: The use of VaR as a risk metric spread quickly during the 1990s. J.P. Morgan made its estimation software, RiskMetrics[TM], available to the public in 1994, and it quickly became the industry benchmark. RiskMetrics was developed during the 1980s, starting as an aggregation of hundreds of risk factors and several VaR estimates calculated daily (Holton, 2002). RiskMetrics assigns an exponentially decaying weight ($\lambda$) to each observation in the sample, so that more recent returns have greater weight than old ones. In commercial applications, VaR has been a widely used risk metric. The 1988 Basel Accord, signed by the G-10 countries, imposed risk management on banks, which motivated the use of VaR methodologies to comply with regulations (Jorion, 2001). Several United States government agencies, such as the Federal Reserve and the Securities and Exchange Commission (SEC), use or advocate the use of VaR (Khindanova et al., 2000). In the Brazilian context, the Central Bank of Brazil requires financial institutions to manage risks based on VaR; although not mentioned directly, its requirements are based on the Basel Committee's recommendations (Banco Central do Brasil, 2009). Other banks, such as Credit Suisse, Chase Manhattan and Deutsche Bank, followed J.P. Morgan. By 1999, over 80 different commercial vendors were offering VaR software (Christoffersen et al., 2001). Albeit used mostly by finance and actuarial-related clients, several studies have applied VaR to many different risk-modeling problems, such as geology (Webby et al., 2007), the US movie industry (Bi and Giles, 2009), and agriculture (Asfaha et al., 2014). Basel regulations allow banks to develop their own methods to calculate VaR, stimulating research and development of new methodologies to understand and estimate this risk measure.

The concept of VaR is very simple: it is just a number representing how much loss may occur, with a probability $(1 - \alpha)$ of losses more severe than the VaR level ($VaR_\alpha$). In this sense, $\alpha$ and $(1 - \alpha)$ are, respectively, the confidence level and the coverage rate of VaR. VaR has a nonconstructive definition and, therefore, different approaches can be used for its forecasting, each requiring a specific time window of analysis. VaR reports the best of the worst-case scenarios, whereas risk measures are often expected to say something about the "worst of the worst" cases, especially for financial assets, where fat tails are expected.

Generally, the methods to estimate VaR can be separated into three groups: parametric approaches, nonparametric methods and semiparametric modeling. From a parametric point of view, once a certain distribution is assumed for the returns, VaR is simply the $(1 - \alpha)$-th quantile of the distribution. Three basic inputs are needed to calculate VaR: (i) the confidence level $\alpha$, (ii) the estimation window (WE) and (iii) a conditional distribution assumption for the returns. Large banks and financial institutions are often obliged to use $\alpha = 99\%$ (see, e.g., Banco Central do Brasil (2009), Kou et al. (2013)), while traders and small businesses have no such restrictions. In the literature, $\alpha = 95\%$ and $\alpha = 99\%$ are the most common choices.

Based on historical data, a parametric approach to estimating VaR depends on assuming a certain conditional parametric probability distribution for the asset returns. Once we choose the distribution and estimate the appropriate parameters, the $(1 - \alpha)$-th quantile will be the asset's $VaR_\alpha$. Consider a financial demeaned-return series $\{R^{dm}_t\}$ generated by the probability law $\Pr(R^{dm}_t \le VaR^t_\alpha \mid F_{t-1}) = F_t(VaR^t_\alpha)$, conditional on the information set $F_{t-1}$ ($\sigma$-field) at time $t - 1$. Suppose $\{R^{dm}_t\}$ follows the stochastic process $R^{dm}_t = \mu_t + u_t = \mu_t + \sigma_t z_t$, where $\mu_t = E(R^{dm}_t \mid F_{t-1}) = 0$ and $\sigma^2_t = E(u^2_t \mid F_{t-1})$. Thus, the process $\{z_t\} = \{u_t / \sigma_t\}$ has the conditional distribution function $G_t(z) = \Pr(z_t \le z \mid F_{t-1})$. The VaR with a given tail probability $(1 - \alpha) \in (0, 1)$, denoted by $VaR^t_\alpha$, is defined as the conditional quantile such that $F_t(VaR^t_\alpha) = (1 - \alpha)$, which can be estimated by inverting the distribution function: $VaR^t_\alpha = F^{-1}_t(1 - \alpha) = \sigma_t G^{-1}_t(1 - \alpha)$. Hence, a VaR model involves the specification of $F_t(\cdot)$, $\mu_t$, $\sigma_t$ and $G_t(\cdot)$. Here, $VaR^t_\alpha$ is a risk measure that depends on the volatility $\sigma_t$ (see, e.g., Danielsson (2011)).
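Under a Normal assumption for $z_t$, the one-step VaR is just the scaled quantile $\sigma_t G^{-1}(1 - \alpha)$. A minimal sketch using the standard Normal inverse CDF (the 2% volatility forecast is an illustrative value):

```python
from statistics import NormalDist

def parametric_var(sigma_t, alpha=0.99, dist=NormalDist()):
    """One-step-ahead VaR_alpha = sigma_t * G^{-1}(1 - alpha).

    With alpha = 0.99 this is the 1% left-tail quantile of the
    conditional return distribution (a negative number: a loss level).
    """
    return sigma_t * dist.inv_cdf(1.0 - alpha)

# e.g. a 2% daily conditional volatility forecast
print(parametric_var(0.02, alpha=0.99))  # roughly -0.0465
```

Swapping `NormalDist` for a Student-t quantile function is what turns this into the tGARCH-VaR variant used later in the paper.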

Considering a return series, it is possible to estimate the VaR at period $t$ by using a GARCH model. To do this, let $\sigma_t$ be the volatility at period $t$ according to the GARCH(p, q) described by (2). The corresponding equation for the eGARCH(p, q) is described in (3).

The GARCH family usually performs very well and is thus present in almost all published VaR studies. GARCH models have been extended in many different directions (Bollerslev, 1986). Estimating VaR under the assumption that the returns of a stock price series follow a Gaussian process has well-known deficiencies, the most serious being the lack of information provided about extreme events: these demand fat tails in the return distribution, which a Normal distribution cannot model suitably. Other methods of VaR estimation can be used, for example, the GARCH-VaR class (Hung et al., 2008).

Engle and Patton (2001) present a broad review of volatility and discuss how it should be modeled properly. Beyond traditional ARCH and GARCH models, Zakoian (1994) and Nelson (1991b) proposed improvements to model conditional volatility: the threshold ARCH (TARCH) and the exponential GARCH (eGARCH), respectively. Empirical studies based on TARCH using twelve years of daily Dow Jones Industrial Index data did not indicate any persistence of conditional volatility in the chosen sample. Angelidis et al. (2004) survey several variations of GARCH modeling and conclude that leptokurtic distributions, especially the Student-t, are more adequate than the Normal distribution for ARCH and GARCH-family models. The authors also perceived that the data sample window plays a crucial role when measuring VaR forecast performance: simple models and low confidence levels benefit from smaller windows (in their experiment, smaller than 2000 observations). In a recent paper, Paolella and Polak (2015) proposed a hybrid GARCH model which deals with volatility clustering, non-normality, and also dynamics in the dependency between assets over time.

In our study, we decided to model the data using the usual GARCH modeling because its computational implementation is simpler than that of the proposal in Paolella and Polak (2015). It is important to note that, for our purpose, the GARCH-VaR and the coverage tests (Christoffersen, 1998, Candelon et al., 2010) were sufficient to identify the different risk patterns of the Brazilian sectoral stock indices.

Despite the great variety of GARCH models, the GARCH(1,1) is often cited as the best-performing volatility model. Hansen and Lunde (2005) analyzed 330 different volatility models, including the eGARCH and the TGARCH, and found no conclusive evidence that any of them performs significantly better than the GARCH(1,1). They also note that the ARCH(1) is outperformed by most models.

In a nonparametric approach, the probability density function (PDF) of the returns is obtained empirically, either by nonparametric density estimation (kernel estimation, for example) or by using some resampling technique such as the bootstrap, the jackknife or permutation. A simple nonparametric VaR estimation method is historical simulation, which uses only the empirical distribution of the returns. In this case, the returns are split up into several long subsamples of the same length. For example, with $N$ and $n$ as the original sample size and the common subsample length, we can get $(N - n + 1)$ subsamples. For each subsample, we pick the $(1 - \alpha)$-th quantile (the $VaR_\alpha$ for that subsample). The $VaR_\alpha$ estimate for the next period is then the mean of these partial VaRs. The historical approach has many advantages: simplicity, robustness, suitable performance when estimating high-tolerance VaR, and non-dependence on parametric assumptions (Beder, 1996, Abad et al., 2014). The fact that the historical methodology uses only the observed data values is also a weakness: return values that are not in the sample are not considered. As a result, the further from the mean we get, the fewer observations are available for the simulation analysis (Danielsson and De Vries, 2000), increasing the bias of the estimated VaR. Hence, historical simulation is a poor choice with small sample sizes or when estimating VaR far into the tail (e.g., $VaR_{99\%}$). Another disadvantage of the historical method comes from the fact that it attributes equal weight to every period, which makes it react very slowly to abrupt changes in asset volatility, or change abruptly when an old observation exits the estimation window (van den Goorbergh et al., 1999, p. 22). This problem is especially severe when using longer time windows to make predictions, which is the case for most banks and large financial institutions.
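The subsample-averaging variant of historical simulation described above can be sketched as follows; the window length and the toy series are illustrative assumptions:

```python
def historical_var(returns, alpha=0.99, n=250):
    """Subsample-averaging historical simulation.

    Slide a window of length n over the N returns, take the (1 - alpha)
    empirical quantile of each of the N - n + 1 subsamples, and average
    the partial VaRs to get the next-period VaR_alpha estimate.
    """
    big_n = len(returns)
    if n > big_n:
        raise ValueError("window longer than the sample")
    k = max(int((1.0 - alpha) * n), 1)  # order statistic used as quantile
    partial_vars = []
    for start in range(big_n - n + 1):
        window = sorted(returns[start:start + n])
        partial_vars.append(window[k - 1])  # k-th smallest return
    return sum(partial_vars) / len(partial_vars)

# toy example: one large loss in an otherwise flat series (illustrative)
sample = [-0.10] + [0.0] * 10
print(historical_var(sample, alpha=0.90, n=10))  # mean of -0.10 and 0.0
```

The example makes the weakness discussed above visible: the estimate jumps as soon as the single large loss leaves the window.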

Beyond the traditional purely parametric and purely nonparametric methodologies, some so-called semi-parametric models have been proposed in the last decade. The Conditional Autoregressive Value at Risk by Regression Quantiles (CAViaR), a semi-parametric method that uses quantile regression to estimate model parameters, proposed by Manganelli and Engle (2001) and Engle and Manganelli (2004), shows promising results, and has been found by other studies to perform well with heavy-tailed data. While traditional parametric VaR methodologies attempt to model the entire distribution, CAViaR models the VaR quantile directly, which can be an advantage of this method.

A vast literature comparing different VaR estimation methods has been published over the last 20 years. Hendricks (1996) published a comprehensive report evaluating 12 different VaR methodologies, concluding that several methods produce suitable results. van den Goorbergh et al. (1999) present a comparison of many different VaR methods using Dutch stock market index and Dow Jones Industrial Average data. The authors emphasize the importance of volatility clusters, pointing out that they are successfully estimated by GARCH models even in extreme quantiles (up to 0.01%). Abad et al. (2014) present a broad overview of VaR techniques, review papers that compare different methodologies, and present several benchmarking measures.

It is important to mention a limitation of using VaR as a risk measure: VaR satisfies the subadditivity property in the case of Gaussian returns, but this is not guaranteed when considering other estimation strategies like the GARCH-VaR mentioned above (Artzner et al., 1999). In this sense, Danielsson et al. (2013) suggest that VaR violates subadditivity only when the tail index estimate (see, e.g., Hill (1975), Pickands III et al. (1975), Dekkers et al. (1989)) exceeds 2, which describes a super-fat-tailed behavior.

* VaR Backtesting: Value-at-Risk estimation depends on the following features: the data set, the confidence level ($\alpha$), the estimation window (WE) and the distribution parameters (if any). The backtesting procedure in a Value-at-Risk analysis is a statistical technique based on comparing the predicted losses from the calculated Value-at-Risk with the realized losses over a specified time horizon. Through this comparison we can investigate the quality of the VaR estimates. If the backtesting results are not accurate, we can recalculate the VaR predictions with more appropriate inputs, which may improve the risk estimation.

In backtesting VaR we can choose WE by considering a violation ratio and a conditional coverage diagnosis. The violation ratio (VR) is the proportion of actual VaR violations (i.e., occasions when the observed return is, in absolute value, higher than the VaR estimate for the corresponding period $t$) relative to the expected number of violations (Danielsson, 2011, p. 145) over the whole period of one-step-ahead VaR prediction; e.g., for $VaR_{95\%}$ and 100 VaR predictions, we expect 5 VaR violations, so to compute the VR we compare the violations observed in the prediction window with those five expected. The ideal violation ratio is 1, which means that the data exceeded the VaR threshold exactly as many times as expected. Values lower than 1 indicate fewer violations than expected, and thus a more conservative VaR estimate. In the opposite direction, VR > 1 indicates more violations than expected, and thus a VaR model that understates the level of risk.

It is difficult to get a VR exactly equal to one, but it is possible to define an acceptable region for this indicator. We defined this region following Danielsson (2011), who highlights that models yielding VR < 0.5 or VR > 1.5 may be considered imprecise.
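The violation ratio and Danielsson's rule of thumb can be computed directly; the return and VaR series below are illustrative, not the paper's data:

```python
def violation_ratio(returns, var_series, alpha=0.99):
    """VR = observed violations / expected violations, where a violation
    occurs when the realized return falls below the (left-tail) VaR."""
    observed = sum(1 for r, v in zip(returns, var_series) if r < v)
    expected = (1.0 - alpha) * len(returns)
    return observed / expected

def var_model_acceptable(vr):
    """Danielsson (2011): VR < 0.5 or VR > 1.5 flags an imprecise model."""
    return 0.5 <= vr <= 1.5

# 100 one-step VaR_99% forecasts: we expect one violation
rets = [0.0] * 99 + [-0.06]
vars_99 = [-0.05] * 100
vr = violation_ratio(rets, vars_99, alpha=0.99)
print(vr, var_model_acceptable(vr))  # VR close to 1: as many hits as expected
```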

To guarantee statistical confidence for the estimated VR, we used the unconditional coverage (UC), conditional coverage (CC) and independence (IND) tests proposed by Christoffersen (1998) (the $LR_{UC}$, $LR_{IND}$ and $LR_{CC}$ statistics) and Candelon et al. (2010) (the $J_{UC}$ and $J_{IND}$ statistics). The $J_{UC}$ and $J_{IND}$ statistics are used in a duration-based backtesting procedure which considers a geometric distribution assumption.

Denoting by $I_\alpha$ the indicator function which equals 1 when there is a $VaR_\alpha$ violation, the $LR_{UC}$ statistic is used to test $H_0: E(I_\alpha) = 1 - \alpha$, where $E(\cdot)$ denotes the expected value. The $J_{UC}$ statistic is used to test $H_0: E(d) = 1/(1 - \alpha)$, where $d$ is the duration between two VaR violations (assumed to be geometrically distributed under the null hypothesis). Suppose that the violation process $I^t_\alpha$ can be represented as a two-state Markov chain with transition probabilities $\pi_{ij} = \Pr(I^t_\alpha = j \mid I^{t-1}_\alpha = i)$. The null hypothesis of independence of the VaR violations, $H_0: \pi_{01} = \pi_{11}$ (i.e., the probability of a violation does not depend on whether a violation occurred in the previous period), can be tested using the $LR_{IND}$ statistic (Christoffersen, 1998).

In the literature, other statistics can be used to evaluate VaR violations. For example, the $J_{IND}$ statistic can be used to test the null hypothesis of a geometric distribution for the VaR violation durations (considering absence of dependence), $H_0: E(d) = 1/(1 - \beta)$, with $(1 - \beta)$ not necessarily equal to the coverage rate $(1 - \alpha)$, and the $LR_{CC}$ statistic is used to test both the unconditional coverage and the independence in a joint test on the VaR violations (Christoffersen, 1998).
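As a sketch, the unconditional coverage statistic $LR_{UC}$ compares the likelihood of the violation sequence under the nominal rate $1-\alpha$ with that under the observed rate. This implementation is an illustrative reading of Christoffersen (1998), not the authors' code:

```python
import math

def lr_uc(violations, alpha=0.99):
    """Christoffersen's unconditional coverage LR statistic.

    violations : sequence of 0/1 indicators of VaR_alpha violations.
    Under H0 the violation probability is p = 1 - alpha and the
    statistic is asymptotically chi-squared with 1 degree of freedom.
    """
    n = len(violations)
    n1 = sum(violations)   # number of violations
    n0 = n - n1
    p = 1.0 - alpha        # nominal violation probability
    pi_hat = n1 / n        # observed violation frequency

    def loglik(prob):
        # guard the logs when a count is zero
        a = n0 * math.log(1.0 - prob) if n0 else 0.0
        b = n1 * math.log(prob) if n1 else 0.0
        return a + b

    return -2.0 * (loglik(p) - loglik(pi_hat))

# 1000 forecasts with 10 violations: exactly the nominal 1% rate
hits = [1] * 10 + [0] * 990
print(lr_uc(hits, alpha=0.99))  # close to 0: coverage matches the nominal rate
```

Values above the $\chi^2(1)$ critical point (3.84 at 5%) lead to rejection of correct unconditional coverage.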

In large samples and under the corresponding null hypotheses, the $LR$ statistics converge in distribution to chi-squared laws ($LR_{UC} \xrightarrow{D} \chi^2(1)$, $LR_{IND} \xrightarrow{D} \chi^2(1)$ and $LR_{CC} \xrightarrow{D} \chi^2(2)$), and the $J$ statistics converge to a chi-squared law whose degrees of freedom depend on $p$, the number of orthonormal polynomials used as moment conditions (see Candelon et al. (2010)); here $\xrightarrow{D}$ denotes convergence in distribution. We used $p = p_1 = 3$ ($J_{IND1}$ statistic) and $p = p_2 = 5$ ($J_{IND2}$ statistic). This statistical approach helped us to evaluate the quality of the VaR backtesting. Estimated VaR series with a good VR performance and without rejection of the null hypotheses were considered the best choices.

3. Empirical Results and Discussion

3.1 Exploratory Data Analysis for BOVESPA Index

Tukey (1977) set the basis for Exploratory Data Analysis (EDA), the art of seeking relevant information within the data with the fewest possible distributional assumptions about the underlying process; it is frequently based on graphical representations. We carried out an EDA of the return series of Brazil's benchmark stock index (IBOV) and ran a goodness-of-fit analysis using the Normal (non-standard), Student-t, and skew Student-t (Azzalini and Capitanio, 2003, Lima and Neri, 2007, Mokni et al., 2009) distributions to model the returns. Here, we considered two volatility scenarios, namely low historical volatility (Range 1) and high historical volatility (Range 2). Range 1 covers three years of IBOV returns before the 2008 financial crisis (a less volatile period) and Range 2 refers to the two years of returns during the crisis (a more volatile period). The two volatility cases are described in Table 1. We found empirical signals consistent with the literature, in the sense that heavy-tailed distributions are useful for modeling our data. A similar EDA was performed for the other Brazilian sectoral stock indices and the results are available at https://goo.gl/0VwYXZ.

In our investigation, all parameters of the considered distributions were estimated from the data sample via the maximum likelihood method. Here, we are referring to a non-standard Normal distribution, i.e., one with mean and standard deviation estimated from the data sample.

Figure 1 shows the histogram of the IBOV returns during Range 1, along with a standard Normal and a non-standard Normal density, both overlaid with a fitted Student-t density. In general, we note that the Student-t distribution fits the data better.

Figure 2 shows the (non-standard) Normal, Student-t and Asymmetric Student-t (AST) fitted curves with the corresponding QQ plots for the IBOV sample data during Range 1. As we can see in Figure 3a, the (non-standard) Normal distribution did not adjust properly to the data. The Student-t (Figures 3c and 3d) is a clear improvement over the Normal, but it is unable to deal with the skewness of the data, which can be subtly noted by looking at the histogram and QQ plots: specifically, in Figure 3b we can identify a slightly left-skewed data distribution from a few extreme observations in the left tail of the QQ plot curve. This problem is only slightly mitigated by the AST distribution (Figures 3e and 3f). The skewness can also be visually identified by comparing the AST curve with the superimposed Student-t curve in Figure 3e. The IBOV graphical analysis using Range 2 can be seen in Figure 3. Again, the Student-t and AST distributions present the best fits. The Normal QQ plot (Figure 4b) shows some evidence of the high kurtosis of our sample, which presents outliers at the extreme returns. In this case, similarly to what was observed in Range 1, the Student-t and AST distributions present similar performances (Figure 4e).

In this paper, our interest lies in analyzing the VaR time series of the main Brazilian sectoral stock indices. The analysis made in this subsection gave us a strategic modeling motivation: we must deal with heavy-tailed distributions to improve the estimation process. We can also conclude that, in general, the return distributions present a symmetric shape, which helps in choosing the better modeling approach. Another point we must consider is the computational implementation of the modeling, which can pose several difficulties depending on the backtesting approach used in the analysis. In the next subsection we perform VaR backtesting using a GARCH(1,1) model. The GARCH(1,1) is consistently found to outperform other more complex models (Hansen and Lunde, 2005) and is also the model most widely used by specialists to model the volatility of daily returns; perhaps excessively so (Francq and Zakoian, 2011). In the analysis, we used the Normal-GARCH (NGARCH), Student-t-GARCH (tGARCH), and exponential Student-t-GARCH (teGARCH) approaches to model the VaR, considering $\alpha = 99\%$ (left tail) and $\alpha = 1\%$ (right tail). We can justify our choice by the observed heavy tails of the empirical return distributions and also by the computational difficulties we faced when trying to analyze the VaR with other more complex methods, like the GJR GARCH models (Glosten et al., 1993).

3.2 Backtesting

In this subsection we chose the NGARCH(1,1), tGARCH(1,1) and teGARCH(1,1) VaRs as our Value-at-Risk estimation methods. As mentioned throughout this paper, there are many other VaR methodologies that might be adequate (e.g., Nelson (1991a), Glosten et al. (1993), Chernozhukov (2005), Paolella and Polak (2015)); however, depending on the backtesting strategy used to test the VaR adequacy, computational difficulties can distort the estimated VaRs. Furthermore, the NGARCH-VaR, tGARCH-VaR and teGARCH-VaR proved to be good methods for our goal of discriminating the Brazilian sectoral stock indices.

We computed VaR estimates from the Bovespa (IBOV) index returns as well as from the Brazilian sectoral index series, namely consumption (ICON), industrial (INDX), realty and construction (IMOB), financial (IFNC), basic materials (IMAT), electrical (IEEX) and utilities (IUTIL). The sample period is from 2012-02-27 to 2017-06-07 (slightly more than five years: 1306 trading days). Table 2 shows some summary statistics of the data. As we can see, the sample means of the returns are close to zero; the IUTIL and IEEX present the highest sample skewness coefficients (slight negative asymmetry), followed by IMOB and IMAT (slight positive asymmetry). We applied the Jarque and Bera (1987) test and the null hypothesis of normality was rejected for all series. In the sixth column, the positive excess kurtosis shows that the series present heavy tails. The tail index (seventh column) was estimated for each stock index according to Hill (1975). According to Danielsson et al. (2013), VaR violates subadditivity only when the tail index estimate exceeds 2, which describes super-fat-tailed behavior. As we can see, all the estimated values are less than 2, which favors the use of VaR as a coherent measure in our case.
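The Hill (1975) tail-index check used for Table 2 can be sketched as below. The choice of $k$ (the number of tail losses used) and the synthetic series are illustrative assumptions; the paper does not report its tuning:

```python
import math

def hill_estimator(returns, k):
    """Hill (1975) estimator applied to the left tail of a return series.

    Takes the losses (negated negative returns), sorts them in decreasing
    order X_(1) >= X_(2) >= ..., and returns
    xi_hat = (1/k) * sum_{i=1..k} ln(X_(i) / X_(k+1)).
    """
    losses = sorted((-r for r in returns if r < 0), reverse=True)
    if k + 1 > len(losses):
        raise ValueError("k too large for the number of tail observations")
    x_k1 = losses[k]  # X_(k+1), the (k+1)-th largest loss
    return sum(math.log(losses[i] / x_k1) for i in range(k)) / k

# synthetic Pareto-tailed losses with true shape xi = 0.5 (illustrative)
rets = [-((i + 1) / 1001.0) ** (-0.5) for i in range(1000)]
print(hill_estimator(rets, k=50))  # close to 0.5
```

In practice the estimate is sensitive to $k$, so a range of values is usually inspected before reading off a single tail index.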

Table 3 shows the autocorrelation function (ACF), $\rho_i$, $i = 1, \dots, 5$, and the partial autocorrelation function (PACF), $a_i$, $i = 1, \dots, 5$, for the returns and their squares. The ACF and PACF coefficients are not significant in almost all cases when considering each sectoral index return. This fact agrees with the principle that returns are not predictable (Issler, 1999). The corresponding coefficients for the squared returns show a different scenario, in which many coefficients are significant. This is a signal in favor of the predictability of the conditional variance of the returns.

The analyzed period allowed us to investigate the VaR series focusing on three recent critical events that happened in Brazil: (1) the electoral period in the second semester of 2014; (2) the impeachment in the first semester of 2016; and (3) the recent political event in the second quarter of 2017 (2017-05-18), which became known as the 'Joesley day', when Brazil's president Michel Temer was cited in a criminal investigation involving the CEOs of the Brazilian company JBS. In our study, we focused on the 1% VaRs ($VaR_{99\%}$, left-tail values) and the 99% VaRs ($VaR_{1\%}$, right-tail values). In this sense, the VaRs located on the left tail may be used in trading long positions and, in the opposite direction, the values on the right tail can be used in short positions. We implemented an event study analysis based on the cumulative returns of the eight Brazilian stock indices. Columns 1, 2 and 3 of Table 4 show the dates when each stock index registered its minimal return in the years corresponding to events 1 to 3 (2014, 2016 and 2017, respectively). The fourth column refers to the event study corresponding to period (2) considering the maximum returns; it is important to note that during the impeachment process in Brazil there was a general increase in the prices of all Brazilian assets and, to carry out the event study for this case, we found it interesting to add an investigation of the extreme positive returns. In event (1), the IBOV and IFNC, and the ICON, INDX, IMOB and IMAT, fell to their minimal returns on the same days of 2014 (the first two on September 29th and the other four on December 1st). In event (2), considering the negative returns, ICON and INDX, and IMOB, IEEX and IUTIL, registered the same dates (January 12th and November 10th, respectively).
In the other case (positive returns), the IBOV, IMOB and IFNC, and the ICON, INDX, IEEX and IUTIL, presented the same dates (March 17th and January 29th, respectively). In event (3), almost all indices registered their minimum return on the same date (the 'Joesley day'). Figure 4 shows the results of the event study following the bootstrap inference described in Patnaik et al. (2013). In the Figure, each chart shows the estimates for the expected cumulative change in the average of the cumulative returns (considering all the sectoral indices) within a 10-period event window. The bootstrap samples were constructed with replacement and the sampling procedure was repeated 1000 times, so that it was possible to obtain the full distribution of the average of the cumulative returns and thus to estimate the expected (cumulative) change in the cumulative return of an equally weighted portfolio composed of all Brazilian sectoral indices. The two bounds shown in the graphs are the 95% bootstrap confidence intervals of the estimated values. Figures 4a to 4c describe events (1) to (3). Figure 4d corresponds to the analysis of positive returns in event (2). Figure 4a shows that, in 2014, the significant changes occurred from five days after the event (on average). Table 4 shows that almost all sectoral indices revealed minimum returns in the second semester of 2014; more precisely, 5 of the 8 indices presented the critical return a few days after the presidential election. Figures 4b and 4d show the results for event (2). As we can see, there are no significant changes for the negative returns. However, the analysis with positive values revealed significant changes before (a few days) and after the event (on average). In Table 4 we can see that all stock indices reached their maximum return in the first half of the year. Event (3) is described by Figure 4c. As we can see, significant changes were registered before and after the event, which occurred on May 18th, 2017.
Table 4 (column 5) shows that almost all of the indices reached their minimum return on the 'Joesley day'. Since events (1) to (3) proved to be critical, they drew our attention to the [VaR.sub.1%] and [VaR.sub.99%] estimates.
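The bootstrap inference used in this event study can be reproduced along the following lines. This is a schematic Python version under our own assumptions (simulated paths; the `bootstrap_event_ci` helper is hypothetical): it resamples the indices with replacement and repeats the averaging 1000 times, as described above:

```python
import numpy as np

def bootstrap_event_ci(cum_returns, n_boot=1000, level=0.95, seed=0):
    """Percentile-bootstrap confidence band for the cross-sectional average
    cumulative-return path inside an event window.

    cum_returns: array of shape (n_indices, window_length), one row per
    sectoral index, columns indexed by event time.
    """
    rng = np.random.default_rng(seed)
    n = cum_returns.shape[0]
    boot_means = np.empty((n_boot, cum_returns.shape[1]))
    for b in range(n_boot):
        draw = rng.integers(0, n, size=n)               # resample indices with replacement
        boot_means[b] = cum_returns[draw].mean(axis=0)  # equal-weighted portfolio path
    tail = 100 * (1 - level) / 2
    lo, hi = np.percentile(boot_means, [tail, 100 - tail], axis=0)
    return cum_returns.mean(axis=0), lo, hi

# toy example: 8 indices, a 21-day window around the event date
rng = np.random.default_rng(1)
paths = np.cumsum(rng.normal(0.0, 0.01, size=(8, 21)), axis=1)
mean_path, lo, hi = bootstrap_event_ci(paths)  # point estimate and 95% band
```

A change at some event time is deemed significant when the band excludes the pre-event level, which is how the shaded regions in Figure 4 are read.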

In the VaR estimation we chose the time window (WE) according to backtests based on the VaR violation ratios (VR) and the coverage tests proposed by Christoffersen (1998) and Candelon et al. (2010). In this sense, we decided not to fix the same estimation window for all analyzed indices. As an example, for the INDX index we chose WE = 420 trading days and, for the IEEX, we chose WE = 360. We highlight that this flexibility in choosing the estimation windows allowed us to adjust the VaR better than when considering the same time window for all analyzed stocks. By accounting for the different volatility patterns across the sectoral indices, we could improve the quality of the backtests, which favored the estimation of more realistic VaR levels.

The backtesting procedure consists of choosing the time window (WE) and forecasting the one-step-ahead VaRs from WE+1 to the end of the sample. If we consider one stock index and, for example, a GARCH-VaR(1,1) model following equation (2), several estimates of the parameters [[gamma].sub.0], [[gamma].sub.1] and [[beta].sub.1] are obtained in a backtesting analysis. Table 5 shows the mean and standard deviation of the estimated parameters considering the GARCH-VaR(1,1) backtesting procedures (NGARCH, tGARCH and teGARCH) for all sectoral indices. The values in the upper part refer to the IBOV, ICON, INDX and IMOB. The values in the lower part correspond to the IFNC, IMAT, IEEX and IUTIL. As we can see, the means corresponding to [[alpha].sub.1] (see equation (3)) are very close to zero in almost all cases. This fact can justify the results in Table 6, which show that, in general, the tGARCH estimates perform equivalently to or better than the teGARCH approach.
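The rolling scheme can be sketched as follows. For brevity this sketch keeps the GARCH(1,1) parameters fixed (illustrative values of our own choosing); in the actual procedure they are re-estimated by maximum likelihood on every window before each one-step-ahead forecast:

```python
import numpy as np

Z01 = -2.326  # 1% quantile of the standard normal, for the NGARCH case

def rolling_garch_var(returns, we, omega, alpha, beta, z=Z01):
    """One-step-ahead VaR forecasts from WE+1 to the end of the sample,
    using the GARCH(1,1) recursion
        sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}.
    Parameters are held fixed here only for illustration."""
    r = np.asarray(returns, dtype=float)
    forecasts = []
    for t in range(we, len(r)):
        window = r[t - we:t]
        sigma2 = window.var()                  # initialize at the window variance
        for x in window:                       # filter the variance through the window
            sigma2 = omega + alpha * x**2 + beta * sigma2
        forecasts.append(z * np.sqrt(sigma2))  # VaR forecast for day t
    return np.array(forecasts)

rng = np.random.default_rng(2)
r = rng.normal(0.0, 0.014, size=600)  # simulated daily returns
var_1pct = rolling_garch_var(r, we=420, omega=1e-6, alpha=0.05, beta=0.90)
hits = r[420:] < var_1pct             # VaR violations
```

For the tGARCH and teGARCH cases, the quantile z would come from the fitted Student-t (scaled to unit variance) rather than from the normal.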

In the following charts (Figures 5 to 12), we present a detailed analysis of the eight Brazilian sectoral indices (see Table 2). Each chart brings the analysis for one stock index. The bottom lines represent the VaR estimates corresponding to a long position. The graphs show the returns and the [VaR.sub.99%] ([alpha] = 99%) estimates according to the NGARCH(1,1), tGARCH(1,1) and teGARCH(1,1) specifications. A full horizontal black line at -5% represents a daily supported loss limit in an investment that uses a long-position strategy. In Figures 5 to 12, in order to show the same horizontal axis for all sectoral stocks, we start the time horizon after a period of 480 trading days from February 2nd, 2012. This period equals the maximal time window (WE), which was chosen after the IMOB's VaR backtesting. Table 6 contains the summary statistics of the analysis in Figures 5 to 12 considering the 1% and 99% VaRs ([alpha] = 99% and 1%, respectively). The table indicates the method (first column), the sample-VaR means (second column), and the violation ratios (third column). The p-values corresponding to the coverage tests (based on the [LR.sub.UC], [LR.sub.IND], [LR.sub.CC], [J.sub.UC], [J.sub.IND1] and [J.sub.IND2] statistics, respectively) are shown in the fourth to ninth columns. The estimation window (WE) is indicated to the right of the corresponding stock's symbol. The three periods corresponding to events (1) to (3) are highlighted in the charts (lower part of each figure).

We highlight that our analysis focused on 1% and 99% VaR estimation, since we believe these risk levels are the most adequate for our investigation. As we used the LR and J statistics in our backtesting procedure, it is important to mention that, according to Candelon et al. (2010), the duration-based backtesting procedure built on the J statistics outperforms traditional duration-based tests such as the one proposed in Christoffersen (1998). In addition, Gaglianone et al. (2011) highlight that the power of Christoffersen's tests for the 1% VaR is better than when considering 5% VaR levels.

In general, the results in Table 6 suggest that the tGARCH-VaR tends to present lower violation ratios than those obtained from the NGARCH-VaR. In the cases where VR > 1 for both GARCH specifications, the tGARCH-VaR understated the level of risk less than the NGARCH-VaR. When both VRs were less than one, the tGARCH-VaR estimated more conservative risk levels. As expected, the tGARCH-VaR performed better than the NGARCH-VaR. The tGARCH-VaR and teGARCH-VaR seem to be competitive. However, there were a few cases where the teGARCH-VaR did not perform well according to the coverage test results (e.g., for the ICON 1% VaR and for the IFNC and IMAT 99% VaRs).
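The violation ratio (VR) and the unconditional-coverage backtest behind Table 6 can be illustrated as follows. The sketch implements Kupiec's likelihood-ratio form of the [LR.sub.UC] statistic (the hit sequence and day counts are made up for the example):

```python
import numpy as np

def lr_uc(violations, p=0.01):
    """Unconditional-coverage LR statistic (Kupiec form) for a VaR hit
    sequence; asymptotically chi-square with 1 d.o.f. under correct
    coverage, so values above 3.84 reject at the 5% level."""
    v = np.asarray(violations, dtype=bool)
    T, x = v.size, int(v.sum())
    if x == 0:
        return -2 * T * np.log(1 - p)
    if x == T:
        return -2 * T * np.log(p)
    pi = x / T                                       # observed violation rate
    ll0 = (T - x) * np.log(1 - p) + x * np.log(p)    # log-likelihood at nominal coverage
    ll1 = (T - x) * np.log(1 - pi) + x * np.log(pi)  # log-likelihood at observed coverage
    return -2 * (ll0 - ll1)

hits = np.zeros(1250, dtype=bool)
hits[[50, 230, 800, 1100]] = True   # 4 violations over 1250 days
vr = hits.mean() / 0.01             # violation ratio: 0.32, conservative
reject = lr_uc(hits) > 3.84         # 5% critical value of chi2(1)
```

Note that VR < 1 does not by itself guarantee passing the test: in this toy case the hit rate is so far below 1% that the statistic still rejects, which is why the VRs in Table 6 are read jointly with the coverage-test p-values.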

According to the results in Table 6, regarding the IBOV, the 1% NGARCH-VaR and tGARCH-VaR estimates are conservative (VRs < 1), while the 1% teGARCH-VaR estimate understated the risk. Based on the VR criterion, the 99% VaR series understated the risk, more expressively for the NGARCH-VaR estimates. Considering a 5% nominal level, the coverage tests did not reject their null hypotheses in any case.

Figure 5 shows the chart of the estimated VaR series for the IBOV. As we can see, the VaR estimates were very close in events (1) and (3). In event (2), the teGARCH-VaR estimates presented a few different values. According to Table 6, in the IBOV case, the 1% teGARCH-VaRs presented the highest p-values for almost all coverage tests. The VaR violation at spike (3) corresponded to a return close to -9%. As described in the event study of Table 4 and Figure 4, this can be identified as an extreme event. For this period the minimal VaR values were estimated at about -6.18%, -6.23%, and -6.44% (1% NGARCH, tGARCH and teGARCH, respectively). In terms of daily loss risk, the IBOV presented VaR estimates beyond the 5% level over almost the whole estimation period.

The ICON's returns and its estimated VaRs are plotted in Figure 6. According to Table 6, except for the 99% tGARCH-VaR, the estimates understated the true VaR. Except for the 1% teGARCH-VaR, the coverage tests did not reject their null hypotheses. In this case, we can highlight that the eGARCH modeling was not able to improve the VaR estimates. At least in terms of daily risk, the results suggest that this stock index is less aggressive than the IBOV. To see this, notice that only in the most severe period presented by the IBOV (event (3)) did the ICON's 1% GARCH-VaR levels cross the -5% level. The VaR violation in period (3) seems to be related to a more critical period; in fact, all indices presented a VaR violation at this point. In this spike, for the ICON, the estimated 1% GARCH-VaRs were approximately -5.64%, -5.87% and -5.83% (NGARCH, tGARCH and teGARCH, respectively). These facts lead us to conclude that the Brazilian consumption companies have presented less aggressive daily risk levels when compared with the Brazilian global index (IBOV). This also holds when we compare the ICON with the IMOB, IFNC, IMAT, IEEX and IUTIL indices (see Figures 5 to 12). The next paragraph reaches a similar conclusion for the industrial index (INDX).

Figure 7 shows the chart for the INDX. As we can see, this index presents a good value-at-risk performance over the whole observed time window. We can highlight a particular behavior when comparing the INDX with the ICON: in periods (1) to (3), the INDX VaR only touched the -5% line once (in event (3)). At this critical point, the VaR estimates were -4.85%, -4.49% and -4.04% (NGARCH, tGARCH and teGARCH, respectively). Despite the VaR violation in period (3), the minimum observed return did not exceed the -5% line. According to the results in Table 6, for the INDX case, the null hypotheses of the coverage tests were not rejected. For almost all coverage tests, the highest p-values occurred when using the tGARCH-VaR. In addition, from a risk point of view, our value-at-risk investigation suggests the INDX as a good investment choice. Comparing the INDX with the other indices (Figures 5 to 12), we can highlight that, from a VaR point of view, the INDX was not heavily affected by the three critical political scenarios in Brazil (events (1) to (3)). In this sense, the INDX and ICON have presented reasonable 1% VaR levels.

The IMOB's chart is shown in Figure 8. As we can see in Table 6, no null hypothesis was rejected when using the tGARCH-VaR. The teGARCH-VaR seems to be competitive with the tGARCH. All violation ratio values suggest understated risk measures. For this index, in event (3), the minimal 1% VaR estimates were close to -7.70%, -12.34% and -14.65% (NGARCH, tGARCH and teGARCH, respectively). This sector can be considered an aggressive investment option when compared with the other indices. In terms of VaR, the realty and construction sector was heavily affected by event (3), in which the maximal loss corresponded to a return below -10%.

The IFNC's returns and its estimated VaRs are plotted in Figure 9. According to Table 6, for this index the teGARCH-VaR and tGARCH-VaR proved to be competitive methods when considering the 1% VaR estimates. In this case, the VR values are conservative when using the NGARCH and tGARCH approaches and very close to one for the teGARCH-VaRs. Considering the 99% VaRs, the [J.sub.IND1] statistic suggests rejection of the null hypothesis for the tGARCH-VaR and teGARCH-VaR. Furthermore, the same happens for the [LR.sub.IND] statistic in the teGARCH-VaR estimation. When considering the [J.sub.IND2] statistic, the null hypothesis was not rejected for the NGARCH-VaR and tGARCH-VaR estimates. As we can see, this stock index was heavily affected by the volatility spikes in periods (1) to (3), which is reflected in the 1% VaRs associated with the IFNC. Before spikes (1) to (3), the VaR estimates had remained above the -5% threshold for a long time. In events (1) to (3), the minimal 1% VaRs were close to -9.07%, -9.44% and -8.32% (NGARCH, tGARCH and teGARCH, respectively). In event (3) a VaR violation happened. As can also be seen for the IMAT and IMOB (Figures 10 and 8, respectively), we can conclude that, beyond these sectors, the financial sector has contributed to the volatility increase in the Brazilian stock market. This fact is reflected by our VaR study.

It is important to note that when market volatility increases, it can be good for short-horizon trades, but it may not be comfortable from a long-term point of view, in which investors look for more stable investments. Because of that, based on long-period trades, the IFNC (and also the IMAT and IMOB) can be classified as more aggressive choices in a portfolio design.

Figure 10 shows the results for the IMAT. According to our VaR estimates, we can classify the IMAT's behavior in a way similar to the IFNC and IMOB indices; these indices presented the most aggressive daily risks. The 1% NGARCH-VaR, tGARCH-VaR and teGARCH-VaR reached minimal levels in period (2) (-8.17%, -8.51% and -9.61%, respectively). According to Table 6, the coverage tests did not reject the null hypotheses when considering the 1% VaR estimates. In this case, for almost all coverage tests, the teGARCH-VaR led to the highest p-values. The tGARCH-VaR performed better for the 99% VaR estimation.

Figure 11 shows the IEEX's chart. The volatility spikes in periods (1) and (2) presented 1% VaR levels around -6% and -7%, respectively. An intermediate VaR spike can be seen in the fourth quarter of 2016 (1% VaR [approximately equal to] -7%). For the IEEX, similar to the other indices, event (3) shows a VaR violation. In this period, the IEEX's 1% VaRs reached the -6.55%, -8.72% and -6.97% levels (NGARCH-VaR, tGARCH-VaR and teGARCH-VaR, respectively). The coverage tests did not reject the null hypotheses for any GARCH-VaR model. In particular, the tGARCH-VaR seems to perform better than the others for the 1% and 99% VaR estimates. From a VaR point of view, this index can be considered aggressive, but not as much as the IMAT, IMOB and IFNC. On the other hand, we cannot classify the IEEX as we did the ICON and INDX.

Figure 12 shows the results for the IUTIL. Several financial assets in this index also belong to the IEEX, which justifies the similarities between the VaR analyses of these two securities. For the IUTIL, the VaR spikes in periods (1), (2) and (3) presented 1% VaRs close to -7%, -6% and -7.5%, respectively. It is important to mention the minimal return in period (3) ([approximately equal to] -10%), when a VaR violation happened. For this event, the 1% NGARCH-VaR, tGARCH-VaR and teGARCH-VaR reached minimal levels of -6.67%, -7.46% and -6.41%, respectively. This fact reinforces the severity of period (3) across all the Brazilian sectors. As with the IEEX, the IUTIL can be considered an aggressive sectoral index, but not as aggressive as the IFNC, IMOB and IMAT. Table 6 shows that the NGARCH, tGARCH and teGARCH models seem to perform similarly in the coverage tests (no null hypothesis was rejected). The VRs from the tGARCH presented the best values in the 1% VaR analysis.

4. Conclusion

In this paper we studied the market risk and the statistical properties of the returns of eight Brazilian sectoral stock indices. We applied EDA to the IBOV as a reference market index. In two different volatility scenarios, the EDA suggested that heavy-tailed probability distributions are more appropriate to model the empirical returns of this stock index. After several EDA results shown in the supplementary material, we suggested using the Student-t distribution as the conditional error distribution in fitting the tGARCH and teGARCH models, which performed very well in estimating the VaR. The backtesting analysis helped us to successfully identify specific sectors of the Brazilian economy that presented better risk prospects according to the VaR. In particular, we were able to identify the industrial and consumption sectors as the less risk-aggressive, and the financial, realty and construction, and basic materials sectors as the most risk-aggressive.

We must note that our work is limited to the realm of risk estimation. Despite our success in identifying low-risk stocks in the Brazilian market, we have not made any attempt to correlate the risk estimates with projected returns or prospects of profitability. This topic can be explored in future works.

Acknowledgements

We gratefully acknowledge financial support from CNPq and FACEPE Brazil.

References

Abad, P., Benito, S., and Lopez, C. (2014). A comprehensive review of value at risk methodologies. The Spanish Review of Financial Economics, 12(1):15-32.

Angelidis, T., Benos, A., and Degiannakis, S. (2004). The use of GARCH models in VaR estimation. Statistical Methodology, 1(1-2):105-128.

Artzner, P., Delbaen, F., Eber, J.-M., and Heath, D. (1999). Coherent measures of risk. Mathematical Finance, 9(3):203-228.

Asfaha, T., Desmond, A. F., Hailu, G., Singh, R., et al. (2014). Statistical evaluation of value at risk models for estimating agricultural risk. Journal of Statistical and Econometric Methods, 3(1):13-34.

Azzalini, A. and Capitanio, A. (2003). Distributions generated by perturbation of symmetry with emphasis on a multivariate skew t-distribution. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 65(2):367-389.

Banco Central do Brasil (2009). CIRCULAR No 3.478. Technical report, Banco Central do Brasil. http://www.bcb.gov.br/pre/normativos/circ/2009/pdf/circ_3478_v3_l.pdf.

Beder, T. S. (1996). Report card on value at risk: high potential but slow starter. Bank Accounting and Finance, 10:14-25.

Bi, G. and Giles, D. E. (2009). Modelling the financial risk associated with U.S. movie box office earnings. Mathematics and Computers in Simulation, 79(9):2759-2766.

Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31:307-327.

Bollerslev, T., Engle, R. F., and Nelson, D. B. (1994). ARCH models. Handbook of Econometrics, 4:2959-3038.

Brockwell, P. J. and Davis, R. A. (2002). Introduction to Time Series and Forecasting. Springer.

Candelon, B., Colletaz, G., Hurlin, C., and Tokpavi, S. (2010). Backtesting value-at-risk: A GMM duration-based test. Journal of Financial Econometrics, 9(2):314-343.

Chernozhukov, V. (2005). Extremal quantile regression. Annals of Statistics, pages 806-839.

Christoffersen, P., Hahn, J., and Inoue, A. (2001). Testing and comparing value-at-risk measures. Journal of Empirical Finance, 8(3):325-342.

Christoffersen, P. F. (1998). Evaluating interval forecasts. International Economic Review, pages 841-862.

Cont, R. (2001). Empirical properties of asset returns: stylized facts and statistical issues. Quantitative Finance, 1(2):223-236.

Danielsson, J. (2011). Financial risk forecasting: the theory and practice of forecasting market risk with implementation in R and Matlab, volume 588. John Wiley & Sons.

Danielsson, J. and De Vries, C. G. (2000). Value-at-risk and extreme returns. Annales d'Economie et de Statistique, pages 239-270.

Danielsson, J., Jorgensen, B. N., Samorodnitsky, G., Sarma, M., and de Vries, C. G. (2013). Fat tails, VaR and subadditivity. Journal of Econometrics, 172(2):283-291.

Dekkers, A. L., Einmahl, J. H., and De Haan, L. (1989). A moment estimator for the index of an extreme-value distribution. The Annals of Statistics, pages 1833-1855.

Engle, R. and Patton, A. (2001). What good is a volatility model? Quantitative Finance, 1:237-245.

Engle, R. F. (1982). Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation. Econometrica, 50(4):987-1007.

Engle, R. F. and Manganelli, S. (2004). CAViaR: Conditional autoregressive value at risk by regression quantiles. Journal of Business & Economic Statistics, 22(4):367-381.

Francq, C. and Zakoian, J.-M. (2011). GARCH models: structure, statistical inference and financial applications. John Wiley & Sons.

Gaglianone, W. P., Lima, L. R., Linton, O., and Smith, D. R. (2011). Evaluating value-at-risk models via quantile regression. Journal of Business & Economic Statistics, 29(1):150-160.

Ghysels, E., Harvey, A. C., and Renault, E. (1996). Stochastic volatility. Handbook of Statistics, 14:119-191.

Giot, P. and Laurent, S. (2004). Modelling daily value-at-risk using realized volatility and ARCH type models. Journal of Empirical Finance, 11(3):379-398.

Glosten, L. R., Jagannathan, R., and Runkle, D. E. (1993). On the relation between the expected value and the volatility of the nominal excess return on stocks. The Journal of Finance, 48(5):1779-1801.

Hansen, P. R. and Lunde, A. (2005). A forecast comparison of volatility models: does anything beat a GARCH(1,1)? Journal of Applied Econometrics, 20(7):873-889.

Hendricks, D. (1996). Evaluation of value-at-risk models using historical data. FRBNY Economic Policy Review, pages 39-70.

Hill, B. M. (1975). A simple general approach to inference about the tail of a distribution. The Annals of Statistics, pages 1163-1174.

Holton, G. A. (2002). History of Value-at-Risk. Citeseer.

Hung, J. C., Lee, M. C., and Liu, H. C. (2008). Estimation of value-at-risk for energy commodities via fat-tailed GARCH models. Energy Economics, 30:1173-1191.

Issler, J. V. (1999). Estimating and forecasting the volatility of Brazilian finance series using ARCH models. Brazilian Review of Econometrics, 19(1):5-56.

Jarque, C. M. and Bera, A. K. (1987). A test for normality of observations and regression residuals. International Statistical Review/Revue Internationale de Statistique, pages 163-172.

Jorion, P. (2001). Value At Risk: The New Benchmark For Managing Financial Risk. McGraw-Hill.

Khindanova, I. and Rachev, S. T. (2000). Value at risk: Recent advances. In Handbook on Analytic-Computational Methods in Applied Mathematics. CRC Press LLC.

Kou, S., Peng, X., and Heyde, C. C. (2013). External risk measures and Basel accords. Mathematics of Operations Research, 38(3):393-417.

Lima, L. R. and Neri, B. P. (2007). Comparing Value-at-Risk Methodologies. Brazilian Review of Econometrics, 27:1-25.

Manganelli, S. and Engle, R. F. (2001). Value at Risk Models in Finance. Technical report, European Central Bank.

Mokni, K., Mighri, Z., and Mansouri, F. (2009). On the Effect of Subprime Crisis on Value-at-Risk Estimation: GARCH Family Models Approach. International Journal of Economics and Finance, 1(2):p88.

J.P. Morgan/Reuters (1996). RiskMetrics Technical Document. Retrieved from the World Wide Web: www.jpmorgan.com.

Nelson, D. B. (1991a). Conditional heteroskedasticity in asset returns: A new approach. Econometrica: Journal of the Econometric Society, pages 347-370.

Basel Committee on Banking Supervision (2009). Revisions to the Basel II market risk framework. Consultative Document, Bank for International Settlements.

Basel Committee on Banking Supervision (2013). Fundamental review of the trading book: A revised market risk framework. Consultative Document, Bank for International Settlements.

Paolella, M. S. and Polak, P. (2015). Comfort: A common market factor non-gaussian returns model. Journal of Econometrics, 187(2):593-605.

Patnaik, I., Shah, A., and Singh, N. (2013). Foreign investors under stress: Evidence from India. International Finance, 16(2):213-244.

Pickands III, J. (1975). Statistical inference using extreme order statistics. The Annals of Statistics, 3(1):119-131.

Schwert, G. W. (2003). Anomalies and market efficiency. In Financial Markets and Asset Pricing, volume 1, Part B of Handbook of the Economics of Finance, pages 939-974. Elsevier.

Tsay, R. S. (2005). Analysis of financial time series. Wiley series in probability and statistics. Wiley-Interscience, Hoboken (N.J.).

Tukey, J. W. (1977). Exploratory Data Analysis. Addison-Wesley.

van den Goorbergh, R. W., Vlaar, P. J., et al. (1999). Value-at-Risk analysis of stock returns historical simulation, variance techniques or tail index estimation? De Nederlandsche Bank NV.

Webby, R. B., Adamson, P. T., Boland, J., Howlett, P. G., Metcalfe, A. V., and Piantadosi, J. (2007). The Mekong: applications of value at risk (VaR) and conditional value at risk (CVaR) simulation to the benefits, costs and consequences of water resources development in a large river basin. Ecological Modelling, 201(1):89-96.

Zakoian, J.-M. (1994). Threshold heteroskedastic models. Journal of Economic Dynamics and Control, 18:931-955.

Wilton Bernardino *

Leonardo Brito **

Raydonal Ospina ([dagger])

Silvio Melo ([double dagger])

Submitted on April 19, 2018. Revised on October 16, 2018. Accepted on November 7, 2018. Published online on January 18, 2019. The article was evaluated under a double-blind review process, in addition to being evaluated by the editor. Responsible editor: Marcio Laurini.

* Departamento de Ciencias Contabeis e Atuariais, Universidade Federal de Pernambuco, Recife, PE, Brazil. E-mail: wiltonrecc@gmail.com

** Centro de Informatica, Universidade Federal de Pernambuco, Recife, PE, Brazil. E-mail: lmpb@cin.ufpe.br

([dagger]) Departamento de Estatistica, CAST - Computational Agriculture Statistics Laboratory, Universidade Federal de Pernambuco, Recife, PE, Brazil. E-mail: raydonal@de.ufpe.br

([double dagger]) Centro de Informatica, Universidade Federal de Pernambuco, Recife, PE, Brazil. E-mail: sbm@cin.ufpe.br

Caption: Figure 1 Histogram with Standard Normal density (a) and Non-standard Normal density (b) plotted (solid curves) of IBOV returns during Range 1. A fitted Student-t density (dotted curves) is overlayed in both panels.

Caption: Figure 2 IBOV data during range 1. Histogram of returns with fitted Normal, Student-t and AST PDF curves respectively in (a), (c) and (e). Quantile-Quantile plots using the same distributions in (b), (d) and (f).

Caption: Figure 3 IBOV data during range 2. Histogram of returns with fitted Normal, Student-t and AST PDF curves respectively in (a), (c) and (e). Quantile-Quantile plots using the same distributions in (b), (d) and (f).

Caption: Figure 4 Event study analysis.

Caption: Figure 5 IBOV VaR backtest, from 2014-02-05 to 2017-06-07.

Caption: Figure 6 ICON VaR backtest, from 2014-02-05 to 2017-06-07

Caption: Figure 7 INDX VaR backtest, from 2014-02-05 to 2017-06-07

Caption: Figure 8 IMOB VaR backtest, from 2014-02-05 to 2017-06-07

Caption: Figure 9 IFNC VaR backtest, from 2014-02-05 to 2017-06-07.

Caption: Figure 10 IMAT VaR backtest, from 2014-02-05 to 2017-06-07.

Caption: Figure 11 IEEX VaR backtest, from 2014-02-05 to 2017-06-07.

Caption: Figure 12 IUTIL VaR backtest, from 2014-02-05 to 2017-06-07.

Table 1 Date ranges used in the experiments. Standard deviation (S.D.) is used to measure volatility.

Range  Date range               Characteristics            S.D.
1      01/01/2004-31/12/2007    Relatively low volatility  1.675
2      01/01/2008-31/12/2009    High volatility            2.684

Table 2 Summarized statistics of the sectoral indices.

index                           mean        uncond. sd  min     max    skew    excess kurt.  tail index
IBOV (Ample index)              -2.108e-16  0.014       -0.088  0.065  0.056   1.458         0.324
ICON (Consumption)              -1.667e-16  0.011       -0.072  0.052  -0.122  2.170         0.361
INDX (Industrial)               2.829e-16   0.011       -0.050  0.050  0.086   1.158         0.315
IMOB (Realty and construction)  4.042e-16   0.015       -0.134  0.057  -0.368  4.188         0.361
IFNC (Financial)                3.059e-17   0.0165      -0.116  0.096  0.060   3.521         0.355
IMAT (Commodities)              -3.110e-16  0.017       -0.073  0.070  0.236   0.830         0.337
IEEX (Electrical sector)        2.894e-16   0.013       -0.10   0.051  -0.643  4.181         0.422
IUTIL (Utilities)               1.012e-16   0.014       -0.104  0.050  -0.681  4.143         0.414

Table 3 ACF and PACF of returns and squared returns.
Returns, [[rho].sub.i]([a.sub.i]):

lag  IBOV            ICON            INDX            IMOB
1    0.000(0.000)    0.004(0.004)    -0.015(-0.015)  0.033(0.033)
2    -0.006(-0.006)  0.017(0.017)    -0.016(-0.017)  0.023(0.021)
3    -0.029(-0.029)  -0.066(-0.066)  -0.062(-0.062)  -0.037(-0.038)
4    -0.038(-0.038)  -0.031(-0.031)  -0.032(-0.035)  -0.051(-0.049)
5    0.040(0.040)    0.033(0.036)    0.038(0.034)    0.023(0.028)

lag  IFNC            IMAT            IEEX            IUTIL
1    0.010(0.010)    0.038(0.038)    0.080(0.080)    0.057(0.057)
2    0.005(0.005)    -0.011(-0.012)  0.021(0.014)    -0.005(-0.008)
3    -0.053(-0.053)  -0.015(-0.014)  -0.038(-0.041)  -0.054(-0.053)
4    -0.036(-0.035)  -0.017(-0.016)  -0.072(-0.067)  -0.084(-0.078)
5    0.022(0.024)    0.024(0.025)    0.042(0.055)    0.047(0.056)

Squared returns, [[rho].sub.i]([a.sub.i]):

lag  IBOV           ICON           INDX           IMOB
1    0.031(0.031)   0.048(0.048)   0.031(0.031)   0.056(0.056)
2    0.101(0.100)   0.104(0.102)   0.073(0.072)   0.123(0.120)
3    0.046(0.041)   0.118(0.110)   0.072(0.068)   0.035(0.022)
4    0.052(0.040)   0.013(-0.005)  0.024(0.015)   0.057(0.040)
5    0.048(0.038)   0.027(0.004)   0.069(0.059)   0.034(0.023)

lag  IFNC           IMAT           IEEX            IUTIL
1    0.054(0.054)   0.103(0.103)   0.127(0.127)    0.145(0.145)
2    0.126(0.123)   0.106(0.096)   0.084(0.069)    0.078(0.058)
3    0.049(0.037)   0.060(0.041)   0.022(0.003)    0.018(-0.000)
4    0.038(0.019)   0.053(0.035)   0.043(0.035)    0.044(0.038)
5    0.076(0.064)   0.118(0.102)   -0.003(-0.014)  0.002(-0.009)

5% significance band: 2/[square root of (T)] = 0.0553 for all indices.

Table 4 Event study date analysis.

index  event 1 (2014)  event [2.sup.-] (2016)  event [2.sup.+] (2016)  event 3 (2017)
IBOV   2014-09-29      2016-02-02              2016-03-17              2017-05-18
ICON   2014-12-01      2016-12-01              2016-01-29              2017-05-18
INDX   2014-12-01      2016-12-01              2016-01-29              2017-05-18
IMOB   2014-12-01      2016-11-10              2016-03-17              2017-05-18
IFNC   2014-09-29      2016-03-15              2016-03-17              2017-05-18
IMAT   2014-12-01      2016-03-08              2016-04-12              2017-03-21
IEEX   2014-02-17      2016-11-10              2016-01-29              2017-05-18
IUTIL  2014-10-23      2016-11-10              2016-01-29              2017-05-18

Table 5 Parameters descriptive statistics. Cells show mean (standard deviation) of the estimates over the backtesting windows.

Par              IBOV             ICON            INDX            IMOB
NGARCH
[[gamma].sub.0]  0.010 (0.027)    0.005 (0.019)   0.011 (0.038)   0.033 (0.139)
[[gamma].sub.1]  0.008 (0.019)    0.008 (0.0195)  0.010 (0.026)   0.014 (0.034)
[[beta].sub.1]   0.187 (0.374)    0.188 (0.377)   0.164 (0.347)   0.145 (0.322)
tGARCH
[[gamma].sub.0]  0.009 (0.023)    0.005 (0.016)   0.013 (0.042)   0.035 (0.152)
[[gamma].sub.1]  0.008 (0.018)    0.008 (0.018)   0.010 (0.024)   0.015 (0.035)
[[beta].sub.1]   0.188 (0.375)    0.188 (0.376)   0.163 (0.345)   0.144 (0.320)
shape            9.400 (25.001)   5.946 (18.83)   2.421 (5.319)   2.811 (7.518)
teGARCH
[[gamma].sub.0]  0.005 (0.037)    0.000 (0.008)   0.000 (0.001)   0.017 (0.071)
[[gamma].sub.1]  -0.014 (0.031)   -0.016 (0.043)  -0.016 (0.036)  -0.007 (0.018)
[[beta].sub.1]   0.191 (0.386)    0.197 (0.393)   0.180 (0.378)   0.154 (0.343)
[[alpha].sub.1]  0.001 (0.041)    -0.004 (0.109)  0.000 (0.033)   0.026 (0.067)
shape            11.889 (29.389)  7.958 (21.562)  4.523 (12.450)  2.641 (6.901)

Par              IFNC            IMAT            IEEX            IUTIL
NGARCH
[[gamma].sub.0]  0.011 (0.035)   0.037 (0.113)   0.043 (0.138)   0.042 (0.132)
[[gamma].sub.1]  0.009 (0.023)   0.011 (0.027)   0.023 (0.066)   0.022 (0.069)
[[beta].sub.1]   0.183 (0.371)   0.182 (0.362)   0.144 (0.318)   0.146 (0.319)
tGARCH
[[gamma].sub.0]  0.012 (0.032)   0.033 (0.098)   0.028 (0.101)   0.033 (0.108)
[[gamma].sub.1]  0.008 (0.022)   0.010 (0.025)   0.015 (0.042)   0.018 (0.055)
[[beta].sub.1]   0.183 (0.372)   0.184 (0.366)   0.162 (0.341)   0.155 (0.332)
shape            7.552 (22.611)  4.694 (14.091)  2.651 (6.162)   3.791 (9.915)
teGARCH
[[gamma].sub.0]  0.017 (0.098)   0.012 (0.054)   0.013 (0.048)   0.017 (0.064)
[[gamma].sub.1]  -0.006 (0.025)  -0.001 (0.030)  -0.010 (0.043)  -0.009 (0.043)
[[beta].sub.1]   0.160 (0.385)   0.193 (0.385)   0.163 (0.342)   0.162 (0.341)
[[alpha].sub.1]  0.007 (0.060)   0.008 (0.050)   0.027 (0.089)   0.027 (0.085)
shape            9.413 (26.595)  5.290 (15.248)  4.419 (13.271)  5.326 (15.222)

Table 6 Backtesting results for the sectoral indices. The p-values of the Christoffersen (1998) tests (LR statistics) and the Candelon et al. (2010) tests (J statistics) are indicated in the corresponding columns. The tests were made considering a 5% nominal level.

IBOV (WE = 320)
VaR method   Mean    VR     [LR.sub.UC]  [LR.sub.IND]  [LR.sub.CC]  [J.sub.UC]  [J.sub.IND1]  [J.sub.IND2]
1% NGARCH    -0.035  0.811  0.129        0.753         0.301        0.975       0.289         0.436
1% tGARCH    -0.036  0.811  0.129        0.753         0.301        0.975       0.289         0.436
1% teGARCH   -0.036  1.419  0.796        0.581         0.830        0.967       0.834         0.801
99% NGARCH   0.035   1.419  0.796        0.581         0.830        0.762       0.831         0.974
99% tGARCH   0.036   1.217  0.765        0.636         0.855        0.701       0.737         0.889
99% teGARCH  0.036   1.318  0.986        0.609         0.877        0.689       0.545         0.773

ICON (WE = 320)
1% NGARCH    -0.027  1.825  0.193        0.477         0.334        0.963       0.917         0.990
1% tGARCH    -0.028  1.521  0.598        0.554         0.731        0.966       0.784         0.946
1% teGARCH   -0.026  2.231  0.023        0.053         0.011        0.959       0.0126        0.059
99% NGARCH   0.027   1.115  0.555        0.665         0.765        0.947       0.994         0.992
99% tGARCH   0.028   0.912  0.231        0.723         0.459        0.952       0.792         0.922
99% teGARCH  0.026   1.521  0.598        0.554         0.731        0.952       0.269         0.608

INDX (WE = 420)
1% NGARCH    -0.025  1.354  0.765        0.636         0.855        0.970       0.523         0.825
1% tGARCH    -0.026  1.015  0.231        0.723         0.459        0.974       0.977         0.998
1% teGARCH   -0.025  1.580  0.796        0.581         0.830        0.967       0.267         0.488
99% NGARCH   0.025   1.128  0.374        0.694         0.624        0.949       0.904         0.959
99% tGARCH   0.026   0.902  0.129        0.753         0.301        0.819       0.973         0.993
99% teGARCH  0.025   0.790  0.064        0.783         0.174        0.511       0.647         0.882

IMOB (WE = 480)
1% NGARCH    -0.036  1.694  0.796        0.138         0.322        0.972       0.004         0.003
1% tGARCH    -0.039  1.210  0.374        0.694         0.624        0.972       0.315         0.467
1% teGARCH   -0.039  1.331  0.555        0.665         0.765        0.971       0.214         0.161
99% NGARCH   0.036   1.815  0.598        0.554         0.731        0.770       0.074         0.073
99% tGARCH   0.039   1.694  0.796        0.581         0.830        0.777       0.138         0.196
99% teGARCH  0.039   1.573  0.986        0.609         0.877        0.785       0.022         0.054

IFNC (WE = 350)
1% NGARCH    -0.040  0.941  0.231        0.723         0.459        0.974       0.491         0.614
1% tGARCH    -0.042  0.732  0.064        0.783         0.174        0.977       0.931         0.990
1% teGARCH   -0.042  1.046  0.374        0.694         0.624        0.972       0.224         0.361
99% NGARCH   0.040   1.464  0.796        0.581         0.830        0.919       0.051         0.128
99% tGARCH   0.042   1.359  0.986        0.609         0.877        0.922       0.029         0.093
99% teGARCH  0.042   1.673  0.429        0.013         0.034        0.914       0.000         0.005

IMAT (WE = 300)
1% NGARCH    -0.042  1.093  0.555        0.665         0.765        0.892       0.781         0.971
1% tGARCH    -0.044  0.894  0.231        0.723         0.459        0.902       0.407         0.719
1% teGARCH   -0.043  0.994  0.374        0.694         0.624        0.897       0.857         0.972
99% NGARCH   0.042   1.988  0.073        0.310         0.120        0.626       0.231         0.549
99% tGARCH   0.044   1.590  0.429        0.528         0.600        0.663       0.332         0.622
99% teGARCH  0.043   2.186  0.023        0.385         0.052        0.609       0.034         0.143

IEEX (WE = 360)
1% NGARCH    -0.032  1.797  0.295        0.215         0.268        0.964       0.093         0.261
1% tGARCH    -0.034  1.479  0.796        0.138         0.322        0.967       0.283         0.570
1% teGARCH   -0.033  1.691  0.429        0.528         0.600        0.965       0.129         0.134
99% NGARCH   0.032   1.403  0.986        0.609         0.877        0.785       0.540         0.854
99% tGARCH   0.034   0.863  0.129        0.753         0.301        0.831       0.953         0.997
99% teGARCH  0.033   1.295  0.765        0.636         0.855        0.793       0.789         0.966

IUTIL (WE = 380)
1% NGARCH    -0.034  1.727  0.429        0.528         0.600        0.965       0.114         0.137
1% tGARCH    -0.036  1.619  0.598        0.554         0.731        0.966       0.176         0.331
1% teGARCH   -0.036  1.727  0.429        0.528         0.600        0.965       0.101         0.154
99% NGARCH   0.034   1.403  0.986        0.609         0.877        0.785       0.568         0.857
99% tGARCH   0.036   1.295  0.765        0.636         0.855        0.793       0.757         0.954
99% teGARCH  0.036   1.295  0.765        0.636         0.855        0.793
0.760 0.954
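The VR and LR_UC columns above come from counting VaR violations over the backtesting window. The sketch below (not the authors' code; function names, initialization choice, and parameter values are illustrative) shows a Normal-GARCH(1,1) one-step VaR recursion using the γ₀/γ₁/β₁ parameterization of the table, followed by the unconditional-coverage LR backtest reported in the LR_UC column:

```python
# Minimal sketch: Normal-GARCH(1,1) VaR and unconditional-coverage backtest.
# Assumptions: sigma2_t = gamma0 + gamma1*r_{t-1}^2 + beta1*sigma2_{t-1};
# VaR_t = z_alpha * sigma_t under conditional normality.
import numpy as np
from scipy import stats

def garch_var_forecast(returns, gamma0, gamma1, beta1, alpha=0.01):
    """Roll the GARCH(1,1) variance recursion and return the alpha-quantile
    (e.g. 1%) VaR series under a conditional normal distribution."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = gamma0 + gamma1 * returns[t - 1] ** 2 + beta1 * sigma2[t - 1]
    return stats.norm.ppf(alpha) * np.sqrt(sigma2)

def backtest_uc(returns, var_series, alpha=0.01):
    """Violation ratio (VR) and the unconditional-coverage LR p-value:
    compares the observed hit rate with the nominal level alpha."""
    hits = (returns < var_series).astype(int)  # VaR exceedances
    n, x = len(hits), int(hits.sum())
    pi_hat = x / n
    ll_null = x * np.log(alpha) + (n - x) * np.log(1 - alpha)
    ll_alt = (x * np.log(pi_hat) + (n - x) * np.log(1 - pi_hat)
              if 0 < x < n else 0.0)
    lr = -2 * (ll_null - ll_alt)               # asymptotically chi2(1)
    return pi_hat / alpha, 1 - stats.chi2.cdf(lr, df=1)

# Illustrative run on simulated returns (parameter values are made up):
rng = np.random.default_rng(0)
r = rng.standard_normal(1000) * 0.01
var1 = garch_var_forecast(r, gamma0=1e-5, gamma1=0.05, beta1=0.90)
vr, p_uc = backtest_uc(r, var1)
```

A VR near 1 and a large LR_UC p-value indicate that the observed violation frequency is consistent with the nominal level, which is how the table entries are read.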

Author: Bernardino, Wilton; Brito, Leonardo; Ospina, Raydonal; Melo, Silvio

Publication: Revista Brasileira de Financas

Date: Oct 1, 2018
