
Price forecasting through multivariate spectral analysis: evidence from commodities of the BM&FBOVESPA.

ABSTRACT

This study aimed to forecast the prices of a group of commodities using the multivariate spectral analysis model and to compare the results with those obtained by classical forecasting and neural network models. The commodities ethanol, cattle, corn, coffee and soybeans were chosen because of their prominence in Brazilian exports in 2013. The multivariate spectral model proved suitable when compared with the others, delivering better predictive performance, as confirmed by the error measures and the statistical test applied to the out-of-sample period. This research may help market professionals in formulating and implementing policies targeted at the agricultural sector, given the relevance of price forecasting as a planning instrument and as a tool for analyzing the behavior of the financial market for those who need protection against price fluctuations.

Keywords: Spectrum analysis. Forecast. Commodities.

1 INTRODUCTION

Over recent years, Brazilian agriculture has developed and stood out as an activity of high economic and social value with a strong growth trend (CAMPOS, 2007). The development of domestic and foreign markets contributed to the dynamic character of agriculture, which incorporated productive technologies to meet the requirements of these markets. For Oranje (2003), productivity in agriculture may be observed through price competitiveness, which explains why importers, when choosing among different exporters, favor those whose prices are not high.

Thus, price analysis is of singular importance for the participants of the agricultural market, whether they are buyers, sellers or agents who need protection from price fluctuations. Decisions taken by producers, even before the actual harvest, assume knowledge of the behavior of prices (RIBEIRO; SOSNOSKI; OLIVEIRA, 2010). Additionally, decisions relating to production and to the adoption of funding policies as an alternative to guarantee prices are based on trends.

Agricultural activities are characterized by cyclical movements, are influenced by various market factors and show high fluctuations, as described by Oliveira and Aguiar (2003), which hinders their predictability. Price forecasting therefore becomes one of the main obstacles to the planning and evaluation of these activities. It has increasingly been the object of interest of practitioners and scholars because it makes it possible to reduce uncertainty in decision making for those who trade in the market.

Schwager (1995) explains that forecasting can assist those involved in agricultural commodity (standardized goods with low added value) markets, both hedgers, who deal with the physical goods and seek protection from future price fluctuations, and speculators, who take on risk in pursuit of possible gains. Thus, all participants in the commodity market need information on prices, which is fundamental to the profitability of their activities.

Research on agricultural commodities (Bressan, 2004; Lima, Gois and Ulises, 2007; Sobreiro et al., 2008; Aredes and Pereira, 2008; Lima et al., 2010; Ceretta, Righi and Schlender, 2010; Ferreira et al., 2011; Miranda, Coronel and Vieira, 2013; Tibulo and Carli, 2014) makes use of prediction models to evaluate the behavior of prices using only the data of the commodity being analyzed. In other words, these studies build models to forecast a commodity's price based solely on the time series being studied.

In the study by Aguiar and Borestein (2012), the authors argue for the importance of using other data that can influence the price of the commodity in question in order to monitor its price fluctuations. From this perspective, the present research forecasts the prices of a group of agricultural commodities through the Multivariate Singular Spectrum Analysis (M-SSA) model, given the relevance of information on the behavior of prices for those who trade in the agricultural market. To evaluate predictive performance, the results obtained are then compared with those of models already used in price forecasting for agricultural commodities.

The choice of the M-SSA model is due to two reasons: i) the model captures time series structures representing the comprehensive behavior of the series, taking into account the effects of the analyzed set; and ii) empirical evidence (Patterson et al., 2011; Hassani and Mahmoudvand, 2013) suggests that projections based on M-SSA perform better than those from models that consider only the individual time series under analysis.

The article is organized into five sections. In the next section we present the theoretical framework of models applied to time series with seasonality. Section 3 describes the methodology and the forecasting models used in the research. In Section 4, the data and the results of the tests for normality, multivariate normality, linearity and stationarity are presented, along with the predictive tests and empirical results. The conclusions and suggestions for future research are set out in Section 5.

2 THEORETICAL FRAMEWORK

In the literature on time series it is possible to distinguish two classic modeling strategies. The first refers to the exponential smoothing models, while the second comprises the Box-Jenkins methodology. Exponential smoothing models, also called smoothing or damping models, are techniques developed for a specific purpose and do not require a probabilistic foundation. They distribute weights over past observations, so that the weighting varies in time. Among the exponential smoothing models, the seasonal Holt-Winters (HW) algorithm is recommended for time series with more complex behavior that, in addition to presenting seasonality, exhibit trend and noise (MORETTIN; TOLOI, 2006).

Considering that a pure moving average smoothing process may be insufficient to represent the behavior of a particular time series, and given that autoregressive models are commonly applied in different fields of knowledge, autoregressive and moving average terms can be used simultaneously in order to improve the fit. This combination characterizes the model defined in the literature as the Autoregressive Moving Average (ARMA) model.

Another possibility is to make the time series stationary through differencing, that is, by taking successive differences of the original time series. This gives rise to the Autoregressive Integrated Moving Average (ARIMA) model, whose construction is based on methods adjusted to the probabilistic properties of the series.

In some situations, time series may exhibit periodic fluctuations, like meteorological phenomena that, when evaluated on a quarterly basis, often show higher correlations at lags that are multiples of four, in accordance with the seasons of the year, or economic data that require lags that are multiples of twelve, in accordance with the months of the year (ESQUIVEL, 2012). Thus, it is appropriate to consider a stochastic periodicity to evaluate the behavior of the time series. When the ARIMA model takes this periodicity into account it becomes known as the Seasonal Autoregressive Integrated Moving Average (SARIMA) model.

Given the ARIMA model's restriction of keeping the error variance constant over time, Engle (1982) suggested a forecasting model defined as Autoregressive Conditional Heteroskedasticity (ARCH), which makes the error's conditional variance depend on the lagged squared errors. The idea is to measure the persistence of shocks to the variance by a coefficient: the closer this coefficient is to one, the longer shocks to prices take to dissipate. Another possibility is given by the generalized ARCH model, or Generalized Autoregressive Conditional Heteroskedasticity (GARCH), proposed by Bollerslev (1987).

While in the classical models described above the signal (trend and periodicity) of the time series is studied in units of time, in spectral models the extraction of time series information is performed in units of frequency. The basis of spectral models lies in the fact that any function of time can be defined by the superposition of sine waves of different frequencies. In the literature, spectral models, when compared with classical models, decompose time series into components with simpler periodic features, offering advantages in the elimination of noise from the original series, according to Marques and Antunes (2009), who, like Vityazev, Miller and Prudnikova (2010), investigate the structure of time series in more detail than the Fourier and wavelet transforms.

In addition to these predictive models, another that does not require parametric assumptions about the time series is the Artificial Neural Network (ANN) model, which automatically approximates the underlying relationships without the need to specify them explicitly. Besides not requiring series parameters, the model differs from the classical and exponential smoothing forecasting models in that it operates with a learning algorithm. Such an algorithm seeks to imitate the interconnection structure of the human brain, with the purpose of incorporating the pattern of behavior of a time series in order to forecast future values efficiently (TURBAN, 1993).

The construction of the ANN model ranges from the appropriate neural network modeling to the transformations used to feed data into the network and the methods used to interpret the results. These aspects, modeling, processing and interpretation, are critical when using the model to produce forecasts.

2.1 FORECASTING PERFORMANCE OF CLASSIC AND SPECTRAL MODELS

The research conducted by Hassani (2007), comparing forecasting results between the univariate spectral analysis model and some classic models, verified that the spectral model presented better performance. The author, in addition to the spectral method, used the moving average model, the ARIMA model and the HW seasonal algorithm, which had also been employed by Brockwell and Davis (2002) to forecast the time series of accidental deaths in the United States in the seventies. The research revealed that the spectral model generated more accurate predictions than those obtained by the classical models.

Still on the spectral model, Menezes et al. (2014), comparing forecast results for the electricity consumption of a distributor that serves part of the state of Rio de Janeiro, confirmed the improved performance of spectral analysis in relation to ARMA and the HW seasonal algorithm. Esquivel (2012), using meteorological and financial time series with different characteristics, concluded that the spectral model with univariate analysis produced forecast results as good as or superior to those obtained by the SARIMA model and the HW seasonal algorithm.

In another study, Hassani, Heravi and Zhigljavsky (2009) used the ARIMA model and the HW seasonal algorithm to predict eight indices of industrial production in Germany, France and the United Kingdom, and demonstrated the better performance of the forecasts obtained by the M-SSA model when compared with those obtained by the classic models. Patterson et al. (2011), using data on monthly indices of industrial production in the UK, found that both the univariate spectral model and the M-SSA model showed better prediction performance than ARMA.

The same conclusion on the forecasting performance of the univariate and multivariate spectral models in relation to the ARIMA model and the HW seasonal algorithm was obtained by Hassani, Heravi and Zhigljavsky (2009) when investigating time series of electricity and gas consumption in Germany, France and the United Kingdom. Using the same series, Hassani and Mahmoudvand (2013) demonstrate that the M-SSA model showed better performance when compared with the univariate spectral model.

Regarding research on agricultural commodities, Bressan (2004) used the classical ARIMA model and the ANN model to forecast prices for cattle, coffee and soybeans. The results presented by the author indicate gains in most of the examined contracts, demonstrating the potential use of the models as a decision tool in negotiations, notably for operations based on the forecasts obtained by the ARIMA model.

Lima, Gois and Ulises (2007) conducted forecasts for the prices of sugar, coffee, live cattle, corn and soybeans using the ARMA and ARIMA models. The predictive power of each model was compared and, according to the authors, the ARIMA model showed better predictive power in most situations.

To evaluate the potential use of predictive models for the time series of wheat prices in the state of Parana, Aredes and Pereira (2008) use the ARIMA, SARIMA and ARCH models and assess their forecasting capabilities. According to the results obtained by the authors, all models were effective in wheat price forecasting, as the predicted prices were similar to the observed prices.

For the soybean commodity, Lima et al. (2010) investigated the behavior of prices based on the ARIMA-GARCH and ANN models, reporting that the price forecast results were particularly satisfactory. Ceretta, Righi and Schlender (2010) compared the ARIMA model with the ANN model, applied to soybean price time series, concluding that there is no performance superiority between the predictive models. To evaluate the application of the ANN model, Sobreiro et al. (2008) use sugar commodity prices. The results show that the application obtained a significant approximation to the actual prices, which for the authors highlights the importance of the model as an alternative for estimating prices.

Ferreira et al. (2011) conduct research to forecast the prices of the commodities soybeans, live cattle, corn and wheat based on the ANN model. The results obtained by the authors show the possibility of using neural networks as a pricing strategy. Also using the ANN model for coffee price forecasting, Miranda, Coronel and Vieira (2013), when evaluating the predictions performed, concluded that the model, compared with ARMA, proved effective in forecasting coffee prices, since the predicted prices were close to those observed.

For the corn price forecast, Tibulo and Carli (2014) make use of the ARIMA model and the HW seasonal algorithm. The authors concluded that the additive HW seasonal algorithm presented better results for corn price forecasting than the ARIMA model.

These studies on price forecasting for agricultural commodities employ different methodologies to identify time series patterns and to perform forecasting. However, when investigating the behavior of the time series being analyzed, the mentioned models do not take into account the price series of other commodities, failing to evaluate the comprehensive behavior of the series and the effects of the set. This justifies the application of the M-SSA model, which is designed to represent the comprehensive behavior of the set of time series under analysis.

3 METHODOLOGY

In order to assess the contribution of the M-SSA model to price forecasting, in addition to exploring its application to time series of agricultural commodities, the research uses the HW seasonal algorithm, the SARIMA model, the ARIMA-GARCH model and the ANN model. To identify the characteristics of the time series, the Anderson-Darling (AD) and Shapiro-Wilk (SW) normality tests are applied, in addition to the Doornik-Hansen Omnibus (DHO) test for multivariate normality. The research also uses the tests of McLeod and Li (1983) and Tsay (1986) for linearity, and the Augmented Dickey-Fuller (ADF) and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests to analyze the stationarity of the series.

An important aspect of the research is comparing the models' predictions in the out-of-sample period and evaluating their performance through the Mean Square Error (MSE) and the Cumulative Root Mean Square (CRMS) measures, in addition to verifying the significance of the predictions made. This verification is obtained through the statistical test proposed by Diebold and Mariano (1995), referred to in this study as the DM test. Below, we describe the forecasting models used, in addition to the error measures.

3.1 M-SSA MODEL

Early research was conducted using atmospheric data; for this purpose, time series were associated with the climate and represented by localities or regions on a map (KEPPENNE; GHIL, 1993; PLAUT; VAUTARD, 1994). Similar to the spectral model with univariate analysis, the M-SSA model is defined in two stages: decomposition and reconstruction. The decomposition stage comprises two steps: embedding (incorporation) and singular value decomposition. The embedding can be considered as a mapping that transfers a set of one-dimensional time series [??], with i=1,...,M, into a multidimensional matrix [??] with vectors [??], where [??]. The vectors [??] are called lagged vectors.

Similar to the spectral model with univariate analysis, the matrix [X.sup.(i)] is a Hankel matrix, characterized by constant entries along the diagonals parallel to the secondary diagonal. In this step, given a set of M time series, with t=1,...,N, the trajectory matrices [X.sup.(i)], for i = 1,...,M, are defined for each time series [??], considering that each trajectory matrix is formed by the lagged vectors. The result of the embedding, as described by Hassani and Mahmoudvand (2013), is the formation of a block trajectory matrix [X.sub.v], according to:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1)
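As an illustration of the embedding step, the sketch below builds the Hankel trajectory matrix of each series and stacks the M matrices vertically into one block, assuming equal-length series and the vertical (stacked) form of the block matrix described by Hassani and Mahmoudvand (2013). The paper's computations were carried out in R; this Python/NumPy version is only illustrative and the function names are ours.

```python
import numpy as np
from scipy.linalg import hankel

def trajectory_matrix(y, L):
    """L x K Hankel trajectory matrix of one series (K = N - L + 1);
    entries are constant along the anti-diagonals."""
    y = np.asarray(y, dtype=float)
    return hankel(y[:L], y[L - 1:])

def block_trajectory_matrix(series_list, L):
    """Embedding step of (vertical) M-SSA: stack the M individual
    trajectory matrices into a single block matrix."""
    return np.vstack([trajectory_matrix(y, L) for y in series_list])

# Example with M = 2 toy series of length N = 267 and window L = 16.
t = np.arange(267)
s1 = 50 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 52)
s2 = 30 + 0.02 * t + 3 * np.cos(2 * np.pi * t / 52)
X_v = block_trajectory_matrix([s1, s2], L=16)
print(X_v.shape)   # (2 * 16, 252)
```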

In the second step, defined as singular value decomposition, the matrix [??] is decomposed into a sum of elementary matrices. Denote by [??] the eigenvalues of [??], taken in descending order of magnitude [??], and by [??] the corresponding orthogonal eigenvectors. The matrix [??] is given according to:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (2)

The structure in (2) is similar to the variance-covariance matrix obtained in the classical literature on multivariate statistical analysis. The matrix [X.sup.(i)] [X.sup.(i)(T)] is the same used by the univariate model for a single time series [??]. Similar to what is obtained in the spectral model with univariate analysis, the decomposition is represented by:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (3)

where [??] represents the elementary matrix block, the set [??] is referred to as an eigentriple, and [V.sub.D] indicates the position of the matrix block corresponding to the number of non-zero eigenvalues.
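A minimal sketch of this decomposition step, using the singular value decomposition of the block trajectory matrix built above; the eigentriple and elementary-matrix names are ours, and the code illustrates the general technique rather than the exact routine used in the paper.

```python
import numpy as np

def ssa_decompose(X):
    """Decompose a (block) trajectory matrix into eigentriples and the
    corresponding rank-one elementary matrices, X = sum_i s_i * U_i V_i^T,
    with singular values s_i in descending order."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigentriples = [(s[i], U[:, i], Vt[i]) for i in range(len(s))]
    elementary = [s[i] * np.outer(U[:, i], Vt[i]) for i in range(len(s))]
    return eigentriples, elementary

# Example: decompose the block matrix X_v from the embedding sketch.
# eigentriples, elementary = ssa_decompose(X_v)
```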

In the reconstruction stage, the grouping step of the M-SSA model consists of dividing the blocks of elementary matrices [??] into distinct groups and summing them within each group. Thus, the partition of the set of indexes J = {1,...,D} into disjoint subgroups [I.sub.1],...,[I.sub.M] corresponds to the representation:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (4)

where [??] are defined as the resulting block matrices (HASSANI; MAHMOUDVAND, 2013).

Thus, in the simple case in which the time series is split into signal and noise components, two groups of indices are used, [I.sub.1] = {1,...,a} and [I.sub.2] = {a + 1,...,D}, the first associated with the signal component and the second with the noise, where a is an integer greater than 1.

The tool that assists in the separation of the components is the cumulative w-correlation graph. Its methodology considers the definition of the cumulative w-correlation values C(f), as explained by Patterson et al. (2011). Thus, the w-correlation C(1) is defined with the first eigentriple group as part of the signal subseries [??] and the remaining eigentriple groups forming the noise subseries [??]. The w-correlation C(2) is defined with the first and second eigentriple groups as part of the signal subseries [??] and the remaining groups forming the noise subseries [??], and so forth.

These cumulative w-correlations are plotted on a graph, as shown in Figure 1, adapted from Patterson et al. (2011). The existence of time series structure is indicated by local maxima and minima. A typical pattern is a decline in the cumulative w-correlations, which corresponds to the separation of the signal and noise components. We may observe in Figure 1 that the signal subseries [??] is given by groups 1-5 and the noise subseries [??] by groups 6-12, since C(6) indicates a change in this decline.
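Once the signal and noise groups are chosen, each group of elementary matrices is summed and mapped back to a time series by averaging the anti-diagonals (hankelization). The sketch below shows this reconstruction for a single L x K trajectory matrix; in M-SSA the same operation would be applied block by block, one block per series. This is a generic SSA illustration, not the paper's R routine.

```python
import numpy as np

def diagonal_average(X):
    """Hankelization: average the anti-diagonals of a reconstructed
    trajectory matrix to recover a series of length L + K - 1."""
    L, K = X.shape
    flipped = np.fliplr(X)   # anti-diagonals of X become diagonals of flipped
    return np.array([np.diag(flipped, K - 1 - n).mean() for n in range(L + K - 1)])

def reconstruct(elementary, indices):
    """Group the elementary matrices with the given indices (e.g. the signal
    group) and map their sum back to a time series."""
    return diagonal_average(sum(elementary[i] for i in indices))

# Example: signal from the first 5 eigentriples, noise from the remainder.
# signal = reconstruct(elementary, range(5))
# noise  = reconstruct(elementary, range(5, len(elementary)))
```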

The forecast obtained from a group of M time series, h steps ahead, is given by (HASSANI; MAHMOUDVAND, 2013):

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (5)

with [??] representing the first [L.sub.i] - 1 components of the vector [??] and [??] the last components of the vector [??], with i = 1,...,M. The matrix [??] is given according to:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (6)

And the matrix W represented by:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (7)

In addition, [??].
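Since equations (5)-(7) could not be reproduced here, the sketch below illustrates the idea behind this recurrent forecasting in its simpler univariate form: the coefficients of a linear recurrence are obtained from the leading eigenvectors and used to continue the reconstructed signal h steps ahead. It is a simplified stand-in for the multivariate recurrence of Hassani and Mahmoudvand (2013), written in Python for illustration only.

```python
import numpy as np
from scipy.linalg import hankel

def ssa_recurrent_forecast(y, L, r, h):
    """Univariate SSA recurrent forecasting: reconstruct the signal from the
    r leading eigentriples and extend it h steps with a linear recurrence
    whose coefficients come from the eigenvectors."""
    y = np.asarray(y, dtype=float)
    X = hankel(y[:L], y[L - 1:])                         # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U_sig = U[:, :r]
    pi = U_sig[-1, :]                                    # last components of eigenvectors
    R = U_sig[:-1, :] @ pi / (1.0 - np.sum(pi ** 2))     # recurrence coefficients
    X_sig = U_sig @ np.diag(s[:r]) @ Vt[:r]              # rank-r signal approximation
    K = X_sig.shape[1]
    flipped = np.fliplr(X_sig)
    sig = [np.diag(flipped, K - 1 - n).mean() for n in range(L + K - 1)]
    for _ in range(h):                                   # continue the signal h steps
        sig.append(float(np.dot(R, sig[-(L - 1):])))
    return np.array(sig[-h:])

# Example: 16-week window (as in the paper) and 12 steps ahead on a toy series.
t = np.arange(267)
toy = 50 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 52)
print(ssa_recurrent_forecast(toy, L=16, r=3, h=12))
```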

3.2 HW SEASONAL ALGORITHM

The incorporation of seasonality in the HW seasonal algorithm can be performed through two different approaches, depending on the seasonality pattern identified in the series: multiplicative or additive seasonality. When considering multiplicative seasonality, Morettin and Toloi (2006) explain that the time series can be defined by:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (8)

with [N.sub.t] the level of the series, [S.sub.t] the seasonal factor, [m.sub.t] the trend component and [[epsilon].sub.t] the random error in period t, for t=1,...,N.

The recurrence form for the multiplicative approach, denoted in this research by [HW.sub.m], is given, with the multiplicative seasonal factor, by the equations involving the three smoothing constants [alpha], [beta] and [gamma], according to:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (9)

in which 0 < [alpha] < 1, 0 < [beta] < 1 and 0 < [gamma] < 1 are the conditions on the model's smoothing constants and [??] represents the number of observations.

The forecasts of future values take into account the h steps ahead; thus, in each equation the seasonal factor considers the corresponding period, according to the following equations:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (10)

For the multiplicative seasonal approach, the error correction [e.sub.t] is given by:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (11)

The other form of the method, denoted in this research by HWa, is applied when the series features additive seasonality. Thus, for Morettin and Toloi (2006), taking the additive seasonal factor, the time series is represented by the sum of all the components according to:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (12)

For additive seasonality, the recurrence form is given by the equations:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (13)

with the same conditions on the model's smoothing constants as in the multiplicative approach. The future values are predicted through the equations:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (14)

The error correction procedure for this type of seasonality is now given by:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (15)
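The parameter estimation and forecasting described above were carried out in R; purely as an illustration, a similar additive and multiplicative Holt-Winters fit can be sketched in Python with statsmodels, here on a synthetic weekly series (the real series have 267 weekly observations).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
t = np.arange(267)
y = pd.Series(50 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 52)
              + rng.normal(0, 1, 267))          # synthetic weekly "price" series

# HWa: additive seasonality; HWm: multiplicative seasonality.
# The smoothing constants alpha, beta, gamma are optimized by minimizing
# the sum of squared one-step-ahead forecast errors.
hw_add = ExponentialSmoothing(y, trend="add", seasonal="add",
                              seasonal_periods=52).fit()
hw_mul = ExponentialSmoothing(y, trend="add", seasonal="mul",
                              seasonal_periods=52).fit()

print(hw_add.forecast(12))   # h = 12 steps (weeks) ahead
print(hw_mul.forecast(12))
```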

3.3 SARIMA MODEL

In some situations it is important to consider stochastic seasonality to explain the seasonal behavior of the time series. In this case, the recommendation is to use one of the variations of the ARIMA model: the seasonal ARIMA model, or multiplicative ARIMA model. For Box and Jenkins (1976), the general model, represented by ARIMA (p,d,q)x(P,D,Q), can be defined as:

$\phi(B)\,\Phi_P(B^s)\,\Delta^d\,\Delta_s^D\,Z_t = \mu + \theta(B)\,\Theta_Q(B^s)\,\varepsilon_t$ (16)

with [phi](B) the autoregressive operator, [[PHI].sub.P] the stationary seasonal autoregressive polynomial of order P, [DELTA] the difference operator, [mu] the expected value of the series, [theta](B) the moving average operator, [[THETA].sub.Q] the invertible polynomial of seasonal moving averages of order Q and [[epsilon].sub.t] a random error.

The stationary seasonal autoregressive polynomial of order P is given by:

$\Phi_P(B^s) = 1 - \Phi_1 B^s - \Phi_2 B^{2s} - \cdots - \Phi_P B^{Ps}$ (17)

The invertible polynomial of seasonal moving averages of order Q is given by:

$\Theta_Q(B^s) = 1 - \Theta_1 B^s - \Theta_2 B^{2s} - \cdots - \Theta_Q B^{Qs}$ (18)

with the operator seasonal difference of order D represented by:

$\Delta_s^D = (1 - B^s)^D$ (19)

In general, the first seasonal difference [??] can remove the seasonality of the time series (ESQUIVEL, 2012).
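As an illustration of how such a model can be estimated, the sketch below fits a SARIMA specification with statsmodels' SARIMAX on a synthetic weekly series; the orders shown are only examples, whereas in the paper they were selected by AIC minimization in R.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
t = np.arange(267)
y = 50 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 1, 267)

# Example orders (p,d,q)x(P,D,Q)_s = (1,1,0)x(1,0,0)_52 for weekly data.
model = sm.tsa.SARIMAX(y, order=(1, 1, 0), seasonal_order=(1, 0, 0, 52))
res = model.fit(disp=False)

print(res.aic)                    # criterion used for order selection
print(res.forecast(steps=12))     # out-of-sample forecast, 12 weeks ahead
```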

3.4 GARCH MODEL

The estimation of a model to represent a time series and its forecast may receive a treatment different from that given by classical time series models, such as the ARMA model, since those models do not reproduce stylized facts such as conditional and unconditional non-normality and non-constant variance over time (MORETTIN; TOLOI, 2006).

For the modeling of volatility, the AR(p)-ARCH(q) model can be represented by the equation [??], considering [[epsilon].sub.t] normally and identically distributed. The alternative to the ARCH model is the GARCH model, proposed by Bollerslev (1987), in which additional dependencies on lagged conditional variances are allowed.

GARCH is a generalization of the ARCH model in which the conditional variance at instant t depends not only on past squared disturbances but also on past conditional variances (GUJARATI, 2005). Considering the GARCH(p,q) model, the first term, p, refers to the number of lags of the autoregressive terms and the second, q, to the number of lags in the moving average component.

For this model, the time dependence of the conditional variance is evaluated through an ARMA(p,q) model applied to the squared returns. Thus, the parameters of the model that explains the conditional volatility are estimated with the traditional econometric mechanisms associated with ARMA-class models. In this research, following the justification in Lima et al. (2010), we use the ARIMA-GARCH model, since the GARCH component specifies the expression for the conditional variance, models persistent movements in volatility parsimoniously and requires fewer parameters than the ARCH model.
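A minimal two-step sketch of this ARIMA-GARCH combination, assuming the third-party Python packages statsmodels and arch (the paper used routines in R): the ARIMA model handles the conditional mean, and a GARCH(1,1) is then fitted to its residuals to model the conditional variance.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(3)
y = 100 + np.cumsum(rng.normal(0.1, 1.0, 267))           # hypothetical price level

arima_res = ARIMA(y, order=(1, 1, 0)).fit()              # conditional mean (example order)
resid = arima_res.resid

garch_res = arch_model(resid, mean="Zero", vol="GARCH",
                       p=1, q=1).fit(disp="off")          # conditional variance

print(arima_res.forecast(steps=12))                       # 12-week price forecast
print(garch_res.forecast(horizon=12).variance.iloc[-1])   # forecast conditional variances
```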

3.5 ANN MODEL

The ANN model adapts to the time series and differs from classical forecasting models in that it is nonparametric and involves learning algorithms (LIMA et al., 2010). Simply put, a neural network is a computational structure inspired by the biological architecture of the human brain. For Pasquotto (2010), each artificial neuron functions as an autonomous unit whose goal is to convert an input signal into an output signal. As the signals travel through the network, their intensity is amplified or damped according to the parameters assigned to the synapses, also called synaptic weights or simply weights.

3.5.1 The artificial neuron model

Artificial neurons are grouped into three types of layers: the input layer, the intermediate or hidden layer and the output layer. For Haykin (2001), the neurons of different layers are connected by synapses which, in turn, are associated with weights, that is, the relative importance of each neuron of a layer for a neuron of the subsequent layer. The artificial neuron model shown in Figure 1, adapted from Haykin (2001), comprises several elements.

The elements of the artificial neuron described in Figure 1 are: m, the number of neuron input signals; [x.sub.j], the j-th input signal; [w.sub.gj], the weight associated with the j-th input signal in neuron g; b, the threshold of each neuron, also referred to as bias; [v.sub.g], a weighted combination of the input signals and the bias of the g-th neuron; and [phi](.), the activation function of the g-th neuron.

The bias has the effect of increasing or reducing the net input of the activation function, depending on whether it is positive or negative. With a small adjustment, Pasquotto (2010) explains, it is possible to replace the bias [b.sub.g] by a fixed input [x.sub.0]=1, so that the bias becomes a new synaptic weight [w.sub.g0] = [b.sub.g].

Thus, the neuron g can be described mathematically by:

$v_g = \sum_{j=0}^{m} w_{gj}\, x_j$ (20)

and

$y_g(t) = \varphi(v_g)$ (21)

with [v.sub.g] defined as the induced local field or activation potential and the activation function defining the output [y.sub.g](t) of the g-th neuron in the period t.

Amongst the functions most commonly used in the literature, Pasquotto (2010) cites: i) the threshold function, which is discontinuous and binary; ii) the sigmoid function, a continuous S-shaped function varying from 0 to 1; and iii) the hyperbolic tangent function, which is continuous and differentiable at all points.
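A tiny numerical illustration of equations (20) and (21), using the sigmoid as the activation function; the function names and values are ours, chosen only to show how the induced local field and the neuron output are computed.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def neuron_output(x, w, b):
    """Single artificial neuron: induced local field v_g (Eq. 20) passed
    through the activation function phi (Eq. 21)."""
    v = np.dot(w, x) + b      # equivalently, weight w_g0 = b on a fixed input x_0 = 1
    return sigmoid(v)

print(neuron_output(x=np.array([0.5, -1.2, 0.3]),
                    w=np.array([0.8, 0.1, -0.4]),
                    b=0.2))
```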

3.5.2 Architecture of neural networks

The architecture of the ANN model varies according to its purpose. The way neurons are distributed in the network is related to the learning algorithm. The classification given in the literature considers how processing occurs in the neural network as well as how the neurons are arranged in layers. For Haykin (2001), the single-layer, multilayer, feedforward and recurrent classifications are the ways of describing the architecture of a neural network.

In the single-layer architecture, neurons are arranged in parallel in a single layer. At the inputs of this type of network there are nodes that are not neurons and perform no computation, so that processing takes place in only one layer, from which the network outputs emerge. In the multilayer architecture there are layers positioned between the input nodes and the layer responsible for generating the network outputs; these layers, defined as hidden or intermediate layers, propagate signals until they reach the output of the neural network. A feedforward network processes signals in one direction, that is, from input to output, with no feedback. Finally, in the recurrent type there is at least one feedback loop, so that the output of at least one of the neurons is reintroduced at some earlier point of the network, configuring processing recurrence. When feedback occurs on the same neuron that gave it origin, this type of operation is called self-feedback (HAYKIN, 2001).

3.5.3 Types of training

Training an ANN consists in adjusting the parameters of the network iteratively, through a sequence of events: i) stimulation of the neural network by the environment; ii) changes in the weights due to the stimuli; and iii) the network responding to the environment in a new way as a result of these changes. For neural networks, there are two learning patterns (PASQUOTTO, 2010).

3.5.3.1 Supervised learning

This type of learning works by indicating at the network output the correct answer for each situation. A set of input data is presented to the neural network as examples, generating a network output that is compared with the expected output, thus obtaining the corresponding error.

Considering the neuron g at the output of a network at instant t, the corresponding error [e.sub.g] is defined by:

$e_g(t) = d_g(t) - y_g(t)$ (22)

with [d.sub.g](t) the desired response signal of neuron g at instant t and [y.sub.g](t) the output signal of neuron g at instant t. The error is used as an iterative parameter for weight adjustment, with the intention of gradually reducing the error to a minimum acceptable value. The back-propagation algorithm is widely used for supervised learning (LIMA et al., 2010).

3.5.3.1.1 Basic back-propagation algorithm

The back-propagation algorithm traverses the error function at the network output looking for a minimum point. The synaptic weights are altered after two stages: i) forward propagation and ii) back-propagation. In the first stage, the signal is propagated along the network, starting from the first layer, until the error is generated at the last layer. In the second stage, the error is corrected layer by layer, by changing the weights in the reverse direction.

In the back-propagation algorithm, the input-output pairs are presented one by one to the neural network, and there are two ways to apply the error correction. In the first, defined as incremental updating, the weights are changed whenever a new input-output pair is presented to the network, with the resulting error corrected individually right after each pair is submitted to the network.

In batch updating, by contrast, all n pairs are presented and only then are the weights updated, a procedure that may involve presenting the same group of pairs repeatedly.
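A minimal sketch of the two stages described above for a one-hidden-layer network trained with incremental (pattern-by-pattern) updating on lagged values of a toy series. The network size, learning rate and data are ours and do not correspond to the network configured later in the paper; the code only illustrates forward propagation followed by error back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Toy supervised set: predict y[t] from its previous 3 values (lag inputs).
series = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
lags = 3
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
d = series[lags:]                                       # desired outputs d_g

n_hidden, lr = 7, 0.05
W1 = rng.normal(scale=0.5, size=(n_hidden, lags + 1))   # +1 column: bias as weight w_g0
W2 = rng.normal(scale=0.5, size=n_hidden + 1)

for epoch in range(200):                                # incremental (per-pair) updating
    for x, target in zip(X, d):
        x1 = np.concatenate(([1.0], x))                 # fixed input x_0 = 1 carries the bias
        h = sigmoid(W1 @ x1)                            # forward propagation, hidden layer
        h1 = np.concatenate(([1.0], h))
        y = W2 @ h1                                     # linear output neuron
        e = target - y                                  # error e_g = d_g - y_g
        # Backward stage: propagate the error and correct the weights layer by layer.
        W2 += lr * e * h1
        delta_h = e * W2[1:] * h * (1.0 - h)            # sigmoid derivative term
        W1 += lr * np.outer(delta_h, x1)

# One-step-ahead prediction from the last three observed values.
x1 = np.concatenate(([1.0], series[-lags:]))
print(W2 @ np.concatenate(([1.0], sigmoid(W1 @ x1))))
```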

3.5.3.2 Unsupervised learning

It is characterized by the absence of a correct answer at the output of the neural network. That is, as there are no vectors of desired responses, there are no comparisons to provide errors. In this learning situation, the neural network is given the conditions to implement a measure that is independent of the task to be learned, and the free parameters of the network are optimized with respect to this measure. For this type of learning, Haykin (2001) explains that there are two ways of conducting it: by reinforcement and in a self-organized manner.

3.6 PERFORMANCE OF FORECASTING EVALUATION

Following the application of the M-SSA, SARIMA, ARIMA-GARCH and ANN models, in addition to the HW seasonal algorithm, it is necessary to evaluate the performance of the obtained forecasts. As forecasts may have errors regardless of the adopted model, it is common to evaluate the estimates by comparing them with the values of the original time series and to determine their performance with a particular measure. Thus, in this research, forecasts are compared with the 12 weeks following the final week of the sample. For this purpose, the performance evaluation uses the MSE measure, defined by:

$\mathrm{MSE} = \frac{1}{h}\sum_{j=1}^{h}\left(Y_j - \hat{Y}_j\right)^2$ (23)

with [Y.sub.j] representing the value of the original series, [??] the forecast value and h the number of observations reserved for evaluation. In addition to this measure, the research uses the methodology proposed by Goyal and Welch (2003), given by the difference between the accumulated squared prediction errors of the best-performing model and of the best subsequent performance model, with the CRMS given by:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (24)

Whenever such difference is positive, the best subsequent performance model surpasses the best performance one.
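A small sketch of these two evaluation measures as they are read here: the MSE of equation (23) over the h reserved observations, and a cumulative squared-error difference in the spirit of Goyal and Welch (2003), computed as best model minus best subsequent model so that a positive value favors the latter. Function names are ours.

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error over the h out-of-sample observations (Eq. 23)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.mean((y - y_hat) ** 2)

def crms_difference(y, y_hat_best, y_hat_second):
    """Cumulative squared prediction errors of the best model minus those of
    the best subsequent model; positive values favor the second model."""
    e_best = np.cumsum((np.asarray(y, float) - np.asarray(y_hat_best, float)) ** 2)
    e_second = np.cumsum((np.asarray(y, float) - np.asarray(y_hat_second, float)) ** 2)
    return e_best - e_second
```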

Considering two forecasts of a time series [Y.sub.t], and defining [e.sub.it] and [e.sub.jt] as the respective prediction errors, the losses associated with each of these forecasts can be analyzed through the DM test, which uses a loss function to measure the forecast error, that is, the loss is calculated from the actual and predicted values of the variable in question. The test then verifies whether the loss differential between the two sets of predictions is statistically significant.
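A minimal implementation sketch of the DM test with squared-error loss, using a simple long-run variance estimate for the loss differential and a normal approximation for the p-value; this is a generic textbook version, not the exact procedure of Diebold and Mariano (1995) as configured in the paper.

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """DM test with squared-error loss.

    e1, e2 : forecast errors of the two competing models over the same period
    h      : forecast horizon, used as the truncation lag of the variance estimate
    Returns the DM statistic and a two-sided p-value (normal approximation).
    """
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    d = e1 ** 2 - e2 ** 2                      # loss differential
    n = len(d)
    d_bar = d.mean()
    # Autocovariances of the loss differential up to lag h-1.
    gamma = [np.sum((d[k:] - d_bar) * (d[:n - k] - d_bar)) / n for k in range(h)]
    var_d = (gamma[0] + 2 * sum(gamma[1:])) / n
    dm = d_bar / np.sqrt(var_d)
    p_value = 2 * (1 - stats.norm.cdf(abs(dm)))
    return dm, p_value
```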

4 RESULTS AND DISCUSSIONS

4.1 STATISTICAL TESTS APPLIED ON DATA

The choice of the commodities ethanol, cattle, corn, coffee and soybeans is motivated by the increase in their export volumes over the last five years, according to data provided by the Ministry of Agriculture, Livestock and Supply (MAPA), and by their significant role in the Brazilian export agenda. In 2013, 42% of total exports corresponded to agribusiness products and, according to MAPA's reports, these commodities together accounted for over 70% of that exported volume. In addition, they all have contracts traded on the Securities, Commodities and Futures Exchange of São Paulo (BM&FBOVESPA).

The time series are identified in the research as follows: ETHA (ethanol), CATT (cattle), CORN (corn), COFF (coffee) and SOYB (soybeans). Prices were obtained from the Luiz de Queiroz College of Agriculture (ESALQ) database and correspond to the period from 14 November 2008 to 20 December 2013, with weekly periodicity.

In order to test whether the sample data originate from a population with a specific distribution, the AD and SW tests are applied. Using the two tests allows a more comprehensive overview of the results. As can be seen from the results shown in Table 1, at the 5% significance level the time series are not normally distributed.

Next, in order to evaluate the normality of the data set, we use the DHO test, a multivariate normality test applied to pairs formed by the time series. The results presented in Table 2 indicate that, at the 5% significance level, only the pair CATT/COFF shows evidence consistent with multivariate normality; for the other pairs, normality is rejected. In the research, the use of this test is justified by the need to know the characteristics of the analyzed time series, since the M-SSA model does not require the normality assumption.

The non-linearity tests of Tsay and of McLeod and Li are also applied to the time series. For the first test, the data are filtered by an AR model before its application. Table 3 shows the results for the sample data, with the Akaike Information Criterion (AIC) used to determine the model order in the Tsay test, and lags of 5 and 10 weeks used for the McLeod and Li test. Thus, at the 5% significance level, the time series can be considered linear.

Finally, we perform the ADF and KPSS tests to evaluate the stationarity of the time series. Table 4 shows the results of the two tests. For the first test, the null hypothesis is that the series has a unit root and, therefore, is not stationary. For the second test, the null hypothesis is that the series does not have a unit root and, therefore, is stationary. Both tests indicate that the time series are not stationary. In summary, the time series used in the research are not normally distributed, are not stationary, are linear and, in general, do not present multivariate normality.
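Several of these diagnostics have counterparts in Python's SciPy and statsmodels, sketched below on a synthetic series; the Shapiro-Wilk, Anderson-Darling, ADF and KPSS calls are standard, while the McLeod-Li style check is approximated here by a Ljung-Box test on the squared, mean-adjusted series (the Tsay and DHO tests are not shown). This is illustrative only; the paper's tests were run elsewhere.

```python
import numpy as np
from scipy.stats import shapiro, anderson
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(4)
y = 50 + np.cumsum(rng.normal(0, 1, 267))                  # hypothetical weekly price series

print(shapiro(y))                                          # Shapiro-Wilk normality test
print(anderson(y, dist="norm"))                            # Anderson-Darling normality test
print(adfuller(y)[:2])                                     # ADF statistic and p-value (H0: unit root)
print(kpss(y, regression="c")[:2])                         # KPSS statistic and p-value (H0: stationary)
print(acorr_ljungbox((y - y.mean()) ** 2, lags=[5, 10]))   # McLeod-Li style check on squared series
```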

In addition to the statistical tests, which indicate characteristics of the time series of agricultural commodities, the non-stationarity can also be seen in Figure 2, which shows the behavior of the time series from November 2008 to December 2013, covering 267 weeks.

4.2 EMPIRICAL APPLICATIONS

To obtain the results, we used a 16-week window length in the M-SSA model, since this value defines the optimal window size. For the HWa and HWm models, the smoothing parameter values were chosen by minimizing the sum of the squared one-step-ahead forecast errors; these values were estimated in the statistical software R and then used in the forecast routine. For the SARIMA model, the orders were chosen based on the criterion of AIC minimization. For the ARIMA-GARCH model, the ARIMA model is adjusted first, based on the same criterion, in a routine also belonging to the R program. After verifying that the residuals did not exhibit an autocorrelation structure, the conditional volatility modeling and forecasting for the time series were performed. Finally, the ANN model, defined in the same program, is structured with one layer of 7 neurons, a maximum number of iterations equal to 100,000, supervised learning and application of the back-propagation algorithm. As its forecasting scheme, the ANN model applied in the research makes use of the random walk.

In addition to showing the parameters used in the M-SSA, SARIMA and ARIMA-GARCH models, Table 5 indicates that, as the number of steps ahead h increases, the forecast performance evaluated by the MSE generally deteriorates, in accordance with the study by Esquivel (2012). In the same table, it is clear that there are no significant differences between the forecasts made with the HW algorithm with additive and with multiplicative seasonality. The forecasts obtained by the M-SSA model, compared with those obtained by the HWa and HWm seasonal algorithms and the SARIMA, ARIMA-GARCH and ANN models, correspond to the better performance (column MD in Table 5), due to the lower values of the error measure described in (23). The exceptions occur at the steps ahead h of 6, 9 and 12 weeks for the time series COFF.

Next, in order to assess whether the difference between the MSEs of the model with the best predictive performance and the model with the best subsequent performance is statistically significant, we applied the DM test. The results presented in Table 7 indicate that, for the models compared, the null hypothesis that the difference between the error measures is zero can be rejected for the time series ETHA, CATT, CORN and SOYB. For the COFF series, this null hypothesis cannot be rejected, indicating no superiority of the best-performing models.

We can conclude, based on the DM statistical test and on the performance of the forecasts carried out for the steps ahead h (3, 6, 9 and 12 weeks), that the M-SSA model gathers favorable evidence for its application in price forecasting for the investigated commodities.

In Figure 3 we present, through graphs of the weekly commodity prices, the behavior of the original time series (solid lines) and of the predicted time series (dashed lines) obtained with the M-SSA model, since it presented the best performance. The graphs cover the out-of-sample period from December 27, 2013 to March 14, 2014. We can observe that, in that period, the M-SSA model was able to detect the commodity price trends, except for the time series COFF.

5 CONCLUSIONS AND SUGGESTIONS

The analysis of agricultural commodity prices is of singular importance to market participants due to the relevance of information on the behavior of prices. However, research on commodity price forecasting typically models the behavior of prices using data only for the commodity being studied.

As the dynamics of agricultural commodity time series change over time, it is important that the forecast model not be overly sensitive to these changes. The motivation for using the M-SSA model in this research lies in its ability to capture structures representing the most comprehensive behavior, taking into account the effects of the set of time series.

In the research by Lima et al. (2010) and Ceretta, Righi and Schlender (2010) investigating the behavior of soybean prices--the former based on the ARIMA-GARCH and ANN models, and the latter comparing the ARIMA model with ANN--the forecast results were favorable to the ARIMA-GARCH and ANN models. In contrast, the present results indicate predictive superiority of the M-SSA model.

For the commodities soybeans, cattle and corn, the study by Ferreira et al. (2011) highlights the possibility of using neural networks as a pricing strategy due to its favorable results. Also in relation to these commodities, the study by Lima, Gois and Ulises (2007) indicates that the integrated autoregressive model showed better predictive power. These results are not confirmed here, since for the same commodities the predictive performance indicates superiority of the M-SSA model over autoregressive models and neural networks.

In the evaluation of forecasts for the coffee commodity, Miranda, Coronel and Vieira (2013) concluded that the ANN model, when compared with the ARMA model, was effective in forecasting coffee prices, since the predicted prices were close to those observed. For the coffee commodity, in the present research, the results of predictive performance were favorable to the exponential smoothing model.

Therefore, in the context of this research and with the exception of coffee, the empirical results demonstrate the superiority of the M-SSA model when compared with the HWa, HWm, SARIMA, ARIMA-GARCH and ANN models, as it achieved the best predictive performance in the largest number of cases. The results obtained in the out-of-sample period, through the MSE and CRMS error measures, in addition to the DM test for the steps ahead h (3, 6, 9 and 12 weeks), confirm this.

Overall, the M-SSA model surpassed, in terms of statistical loss, the model that showed the best subsequent performance. Thus, by using M-SSA, the research contributes to professionals of the agricultural market by adding favorable evidence for its use. From a practical point of view, the results may assist in the formulation and implementation of policies directed at the agricultural sector, on account of the relevance of price forecasting as an instrument for planning and for analyzing the trend behavior of commodity prices.

For future research, we suggest the use of other databases, such as prices in international markets, the inclusion of other commodities, the adoption of other periods of analysis and the use of other variables that may increase the explanatory power of the M-SSA model, given its multivariate character.

REFERENCES

AGUIAR, S. C. G. E.; BORESTEIN, D. Redes bayesianas: uma ferramenta na previsão de preço de commodity. Revista de Administração e Negócios da Amazônia, v. 4, p. 237-253, 2012.

AREDES, A. F.; PEREIRA, M. W. G. Potencialidade da utilização de modelos de séries temporais na previsão do preço do trigo no estado do Paraná. Revista de Economia Agrícola, v. 55, p. 63-76, 2008.

BOLLERSLEV, T. A conditionally heteroskedastic time series model for speculative prices and rates of return. Review of Economics and Statistics, v. 69, p. 542-547, 1987.

BOX, G. E. P.; JENKINS, G. M. Time series analysis: forecasting and control. San Francisco: Holden Day, 1976.

BRESSAN, A. A. Tomada de decisão em futuros agropecuários com modelos de previsão de séries temporais. Revista de Administração Eletrônica, v. 3, p. 1-20, 2004.

BROCKWELL, P. J.; DAVIS, R. A. Introduction to time series and forecasting. 2. ed. New York: Springer, 2002.

CAMPOS, K. C. Análise da volatilidade de preços de produtos agropecuários no Brasil. Revista de Economia e Agronegócio, v. 5, p. 303-328, 2007.

CERETTA, P. S.; RIGHI, P. B.; SCHLENDER, S. G. Previsão do preço da soja: uma comparação entre os modelos ARIMA e redes neurais artificiais. Informações Econômicas, v. 40, p. 15-27, 2010.

DIEBOLD, F.; MARIANO, R. Comparing predictive accuracy. Journal of Business and Economic Statistics, v. 13, p. 253-265, 1995.

ENGLE, R. Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation. Econometrica, v. 50, p. 987-1008, 1982.

ESQUIVEL, R. M. Análise espectral singular: modelagens de séries temporais através de estudos comparativos usando diferentes estratégias de previsão. 2012. 161 f. Dissertação (Mestrado em Modelagem Computacional e Tecnologia Industrial)--Faculdade de Tecnologia Senai Cimatec, Salvador (BA), 2012.

FERREIRA, L. et al. Utilização de redes neurais artificiais como estratégia de previsão de preços no contexto de agronegócio. Revista de Administração e Inovação, v. 8, p. 6-26, 2011.

GUJARATI, D. N. Econometria básica. São Paulo: Pearson Education do Brasil, 2005.

GOYAL, A.; WELCH, I. Predicting the equity premium with dividend ratios. Management Science, v. 49, p. 639-654, 2003.

HASSANI, H. Singular spectrum analysis: methodology and comparison. Journal of Data Science, v. 5, p. 239-257, 2007.

HASSANI, H.; HERAVI, S.; ZHIGLJAVSKY, A. Forecasting European industrial production with singular spectrum analysis. International Journal of Forecasting, v. 25, p. 103-118, 2009.

HASSANI, H.; MAHMOUDVAND, R. Multivariate singular spectrum analysis: a general view and new vector forecasting approach. International Journal of Energy and Statistics, v. 1, p. 55-83, 2013.

HAYKIN, S. Redes neurais: princípios e prática. 2. ed. Porto Alegre: Bookman, 2001.

KEPPENNE, C.; GHIL, M. Adaptive filtering and prediction of noisy multivariate signals: an application to subannual variability in atmospheric angular momentum. International Journal of Bifurcation and Chaos, v. 3, p. 625-634, 1993.

LIMA, R. C.; GOIS, M. R.; ULISES, C. Previsão de preços futuros de commodities agrícolas com diferenciações inteira e fracionária, e erros heteroscedásticos. Revista de Economia e Sociologia Rural, v. 45, p. 621-644, 2007.

LIMA, F. G. et al. Previsão de preços de commodities com modelos ARIMA-GARCH e redes neurais com ondaletas: velhas tecnologias, novos resultados. Revista de Administração, v. 45, p. 188-202, 2010.

MARQUES, J.; ANTUNES, S. A perigosidade natural da temperatura do ar em Portugal continental: a avaliação do risco na mortalidade. Territorium, v. 16, p. 49-61, 2009.

MCLEOD, A. I.; LI, W. K. Diagnostic checking ARMA time series models using squared residual autocorrelations. Journal of Time Series Analysis, v. 4, p. 169-176, 1983.

MENEZES, M. L. et al. Modelagem e previsão de demanda de energia com filtragem SSA. Revista de Estatística UFOP, v. 3, p. 170-187, 2014.

MINISTÉRIO DA AGRICULTURA, PECUÁRIA E ABASTECIMENTO. Projeções do agronegócio 2013/14 a 2025/25. Assessoria de Gestão Estratégica. Brasília, 2013.

MIRANDA, A. P.; CORONEL, D. A.; VIEIRA, K. M. Previsão do mercado futuro do café arábica utilizando redes neurais e métodos econométricos. Revista Estudos do CEPE, v. 38, p. 66-98, 2013.

MORETTIN, P. A.; TOLOI, C. M. C. Análise de séries temporais. São Paulo: Blucher, 2006.

OLIVEIRA, V. A.; AGUIAR, D. R. Determinantes do desempenho dos contratos futuros de commodities agropecuárias no Brasil. In: CONGRESSO INTERNACIONAL DE ECONOMIA E GESTÃO DE REDES AGROALIMENTARES, 2003, Ribeirão Preto (SP). Anais... São Paulo: Faculdade de Economia, Administração e Contabilidade de Ribeirão Preto, 2003.

ORANJE, M. Competitividade das frutas brasileiras no comércio internacional. 2003. 114 f. Dissertação (Mestrado em Economia Aplicada)--Universidade Federal de Viçosa, Viçosa (MG), 2003.

PASQUOTTO, J. L. D. Previsão de séries temporais no varejo brasileiro: uma investigação comparativa da aplicação de redes neurais recorrentes de Elman. 2010. 191 f. Dissertação (Mestrado em Administração)--Faculdade de Economia, Administração e Contabilidade de São Paulo, Universidade de São Paulo, São Paulo (SP), 2010.

PATTERSON, K. et al. Multivariate singular spectrum analysis for forecasting revisions to real-time data. Journal of Applied Statistics, v. 38, p. 2183-2211, 2011.

PLAUT, G.; VAUTARD, R. Spells of low-frequency oscillations and weather regimes in the Northern Hemisphere. Journal of the Atmospheric Sciences, v. 51, p. 210-236, 1994.

RIBEIRO, C. O.; SOSNOSKI, A. A. K.; OLIVEIRA, S. M. Um modelo hierárquico para previsão de preços de commodities agrícolas. Revista Produção On-line, v. 10, p. 719-733, 2010.

SCHWAGER, J. D. Fundamental analysis. New York: John Wiley & Sons, 1995.

SOBREIRO, V. A. et al. Uma estimação do valor da commodity de açúcar usando redes neurais artificiais. Revista P&D em Engenharia de Produção, p. 36-53, 2008.

TIBULO, C.; CARLI, V. Previsão do preço do milho através de séries temporais. Scientia Plena, v. 10, p. 2-10, 2014.

TSAY, R. Non-linearity tests for time series. Biometrika, v. 73, p. 461-466, 1986.

TURBAN, E. Decision support and expert systems: management support systems. New York: MacMillan, 1993.

VITYAZEV, V.; MILLER, N.; PRUDNIKOVA, E. J. Singular spectrum analysis in astrometry and geodynamics. AIP Conference Proceedings, v. 1283, p. 317-326, 2010.

Carlos Alberto Orge Pinheiro [dagger]

University of the State of Bahia - UNEB

Valter de Senna [ohm]

Integrated Campus of Manufacturing and Technology

Received on 05/26/2015; Reviewed on 08/21/2015; Accepted on 10/23/2015; Published on 09/05/2016.

*Author for correspondence:

[dagger]. Master in Computational Modeling and Industrial Technology from the Integrated Campus of Manufacturing and Technology and PhD student in the same program.

Affiliation: Professor (DE) at the University of the State of Bahia - UNEB.

Address: Rua Silveira Martins, 2555, Cabula, Salvador - BA - Brazil. CEP 41.150-000.

E-mail: carlos.orge@terra.com.br

[ohm] Post-doctorate in Probability and Statistics from the University of Southampton.

Affiliation: Professor in the Computational Modeling and Industrial Technology Program of the Senai Cimatec Technology College.

Address: Av. Orlando Gomes, 1845, Piatã, Salvador - BA - Brazil. CEP 41.650-010.

E-mail: vsenna@terra.com.br

Table 1--SW and AD Normality Tests and p-values

                        ETHA    CATT    CORN    COFF    SOYB

Number of observations  267     267     267     267     267
Shapiro-Wilk              0.96    0.94    0.91    0.92    0.95
p-value                   0.00    0.00    0.00    0.03    0.00
Anderson-Darling          1.05    2.02    2.95    0.83    1.87
p-value                   0.00    0.00    0.00    0.03    0.00

Source: Developed by authors

Table 2--DHO Multivariate Normality Test and p-value

         ETHA   CATT    CORN    COFF    SOYB

ETHA            19.06   34.27   22.02   19.49
p-value          0.00    0.00    0.00    0.00
CATT                    27.67    8.35   16.38
p-value                  0.00    0.07    0.00
CORN                            25.71   24.04
p-value                          0.00    0.00
COFF                                    10.99
p-value                                  0.03

Source: Developed by authors

Table 3--p-values for the Tsay and McLeod-Li Tests Applied to the Time Series

                        order
                        lags   ETHA    CATT    CORN    COFF    SOYB

Number of observations         267     267     267     267     267
Tsay                     2
p-value                          0.27    0.12    0.91    0.71    0.59
McLeod                   5
p-value                          0.84    0.32    0.17    0.35    0.71
McLeod                  10
p-value                          0.89    0.28    0.31    0.81    0.91

Source: Developed by authors

Table 4--ADF and KPSS Tests for the Time Series

                      Critical          Critical
Time series   ADF     Value 5%   KPSS   Value 5%

ETHA          -2.81   -3.43      0.22   0.15
CATT          -2.73   -3.43      0.23   0.15
CORN          -2.81   -3.43      0.25   0.15
COFF          -2.90   -3.43      0.26   0.15
SOYB          -2.81   -3.43      0.27   0.15

Source: Developed by authors

Table 5--Forecast Performance by MSE

            Parameters                    MSE
            (p,d,q) (P,D,Q)
Series  L   (p,d,q) (p,q)    h    M-SSA   HWa       HWm      SARIMA

ETHA    16  (1,1,0) (1,0,0)   3  1.4E-05  3.0E-04  3.1E-04  3.2E-04
            (1,1,0) (1,1)     6  9.9E-05  2.2E-04  2.2E-04  8.1E-04
                              9  9.0E-05  4.3E-04  4.4E-04  1.2E-03
                             12  1.3E-04  1.2E-03  1.2E-03  1.2E-03
CATT    16  (1,1,0) (1,0,0)
            (1,1,0) (1,1)     3  3.5E-05  2.5E-02  2.4E-02  2.7E-02
                              6  1.1E-04  3.1E-02  3.0E-02  3.5E-02
                              9  9.7E-05  3.3E-02  3.0E-02  3.8E-02
                             12  1.3E-04  3.3E-02  3.1E-02  3.9E-02
CORN    16  (2,1,0)(1,0,0)
            (2,1,0) (1,1)     3  3.6E-05  8.4E-04  8.1E-04  7.6E-04
                              6  8.5E-05  1.7E-03  1.6E-03  1.3E-04
                              9  8.2E-05  9.6E-03  3.8E-03  2.2E-04
                             12  9.9E-04  3.8E-03  4.0E-03  1.8E-03
COFF    16  (0,1,0)(1,0,0)
            (0,1,0)(1,1)      3  1.4E-04  1.9E-04  2.1E-04  4.8E-04
                              6  6.1E-04  2.4E-04  2.5E-04  2.0E-04
                              9  4.4E-03  4.1E-03  4.2E-03  1.8E-03
                             12  1.4E-02  1.4E-02  1.5E-02  7.9E-03
SOYB    16  (0,1,0) (1,0,0)
            (1,1,0) (1,1)     3  7.2E-05  1.6E-04  1.7E-04  8.7E-04
                              6  4.8E-04  1.7E-03  1.8E-03  3.8E-03
                              9  5.1E-04  1.9E-03  2.1E-03  5.1E-03
                             12  4.0E-04  1.6E-03  1.8E-03  4.9E-03

Series   ANN      ARIMA-GARCH  MD

ETHA     3.3E-04  3.8E-04      M-SSA
         7.1E-04  8.5E-04      M-SSA
         3.2E-03  8.2E-04      M-SSA
         4.5E-03  7.1E-03      M-SSA
CATT
         6.7E-02  2.9E-02      M-SSA
         5.5E-02  3.8E-02      M-SSA
         4.9E-02  4.3E-02      M-SSA
         6.7E-02  3.3E-02      M-SSA
CORN
         8.9E-04  9.6E-04      M-SSA
         5.5E-04  8.3E-04      M-SSA
         4.2E-04  4.2E-04      M-SSA
         9.3E-03  8.9E-03      M-SSA
COFF
         6.8E-04  5.1E-04      M-SSA
         3.1E-04  7.0E-04      SARIMA
         5.7E-03  8.8E-03      SARIMA
         8.7E-03  9.3E-02      SARIMA
SOYB
         8.8E-04  7.2E-04      M-SSA
         3.9E-03  3.9E-03      M-SSA
         5.9E-03  5.7E-03      M-SSA
         6.1E-03  6.9E-03      M-SSA

Source: Developed by authors.

Table 6--Forecast Performance by the CRMS Difference

             CRMS
Series  h    M-SSA    HWa      Difference

ETHA
         3   4.2E-05   9.0E-04  -8.6E-04
         6   5.9E-04   1.3E-03  -7.1E-04
         9   8.1E-04   3.9E-03  -3.1E-03
        12   1.5E-03   1.4E-02  -1.3E-02
             M-SSA    HWm      Difference
CATT
         3   1.0E-04   7.4E-01  -7.4E-01
         6   6.6E-04   1.9E+00  -1.9E+00
         9   8.8E-04   2.9E+00  -2.9E+00
        12   1.6E-03   3.9E+00  -3.9E+00
             M-SSA    SARIMA   Difference
CORN
         3   1.1E-04   2.6E-03  -2.5E-03
         6   5.1E-04   7.7E-03  -7.2E-03
         9   7.3E-04   1.9E-02  -1.9E-02
        12   1.0E-02   2.2E-02  -1.0E-02
             SARIMA    HWa      Difference
COFF
         6   1.2E-03   1.5E-03  -2.6E-04
         9   1.6E-02   3.7E-02  -2.1E-02
        12   9.4E-02   1.7E-01  -7.3E-02
             M-SSA    HWa      Difference
SOYB
         3   2.2E-04   4.7E-04  -2.5E-04
         6   2.9E-03   1.0E-02  -7.0E-03
         9   4.6E-03   1.7E-02  -1.3E-02
        12   4.8E-03   1.9E-02  -1.4E-02

Source: Developed by authors

Table 7 - Diebold-Mariano's Test and Compared Models

Series   h    D-M     p-value  Models

ETHA
         3     8.68   0.00     M-SSA, HWa
         6     8.44   0.00     M-SSA, HWa
         9     6.37   0.00     M-SSA, HWa
        12     5.07   0.00     M-SSA, HWa
CATT
         3     4.07   0.00     M-SSA, HWm
         6     5.14   0.00     M-SSA, HWm
         9     8.21   0.00     M-SSA, HWm
        12    11.24   0.00     M-SSA, HWm
CORN
         3     5.20   0.00     M-SSA, SARIMA
         6     8.13   0.00     M-SSA, SARIMA
         9     7.80   0.00     M-SSA, SARIMA
        12     6.34   0.00     M-SSA, SARIMA
COFF
         3     1.15   0.36     M-SSA, HWa
         6     1.22   0.82     SARIMA, HWa
         9     1.85   0.10     SARIMA, HWa
        12     2.72   0.03     SARIMA, HWa
SOYB
         3     8.89   0.00     M-SSA, HWa
         6     4.53   0.00     M-SSA, HWa
         9     4.08   0.00     M-SSA, HWa
        12     4.75   0.00     M-SSA, HWa

Source: Developed by authors