# Lack of Credibility, Inflation Persistence and Disinflation in Colombia

Introduction

Inflation is a persistent process. This fact has long attracted the attention of researchers and policy makers because it has important economic and policy implications. For instance, the central bank's policy response can vary depending on the degree of persistence of inflation shocks. If inflation shocks have short-lived effects, the monetary authority could react mildly to them, and inflation would soon stabilize around a given target, without much impact on credibility and macroeconomic volatility. However, if persistence is erroneously underestimated, delays in responding to inflationary shocks could create relatively large deviations from the central bank's objectives, undermine the credibility of the central bank and create additional instability.

Fuhrer (2009), in a comprehensive survey on the persistence of inflation, shows that even though it constitutes a key feature of inflation dynamics, its definition and measurement are controversial. The study provides a "taxonomy" of the body of research on inflation persistence and distinguishes "reduced-form persistence" from "structural persistence". The first refers to an empirical property of inflation without economic interpretation. The second refers to persistence that arises from identified economic structures that produce it. Fuhrer surveys a variety of methodologies to measure inflation persistence, from the estimation of simple autocorrelation functions to sophisticated filtering techniques, and concludes that regardless of how inflation persistence is defined and measured, it has declined somewhat in recent years. However, it is still a subject of debate how much it has declined.

Colombia has not been immune to this debate. The majority of local studies are econometric studies that characterize inflation persistence as some measure of the degree of "mean-reversion" of the inflation rate. To our knowledge, Birchenall (1999) is the first effort in Colombia to characterize inflation dynamics (see Table 1). Using data for the period 1965-1996, he finds that the estimate of the autoregressive component of consumer price inflation is 0.6. Thus, the study characterized inflation as a persistent stationary process. More than ten years later, Capistran and Ramos-Francia (2009) use a similar approach (the sum of the estimated coefficients of an autoregressive process) and find similar results for Colombia (0.67), using a larger sample covering the 10 largest Latin American economies, albeit for a shorter period (2000-2006).

More recently, Echavarria, Lopez, and Misas (2010a) and Echavarria, Rodriguez, and Rojas (2010b), following the recent international literature, measure inflation persistence with the relative contributions of the permanent and transitory components of inflation; they argue that the adoption of inflation targeting in 1999 caused an important reduction in mean and variance, but has not significantly modified the persistence of inflation.

Indeed, Colombia used to be a country of high and volatile inflation. At the beginning of the nineties the newly independent central bank (Banco de la Republica) began announcing end-of-year inflation targets along with explicit targets for other macroeconomic variables, like the nominal exchange rate (1). These targets were meant to guide monetary policy in order to meet the Constitutional mandate of achieving price stability. Although no long-run inflation target was set, central bank officials publicly claimed that their goal was to reduce inflation to a "single digit". In 1991, in line with other central banks around the world, Banco de la Republica established its first quantitative inflation target of 22%.

The disinflation process was long and gradual. The bank missed its inflation target for 6 years in a row. It was not until 1999, in the midst of a major financial and economic crisis, that inflation reached a single digit. Later, inflation declined steadily (but slowly) from about 9% in 1999 to about 5% in 2005. Nowadays, after the 2007-2008 global financial crisis, Colombian inflation is under control and within a 2-4% inflation target range. With this record at hand over such a prolonged period, one can easily guess that credibility has not been one of the main assets of the central bank (see Table 2). Thus, it is natural to think that lack of credibility may be one factor behind the persistence of inflation.

The hypothesis that lack of credibility of monetary policy is a source of inflation persistence which, in turn, determines the sacrifice ratio is not new. For instance, Ball (1994), in a seminal contribution, showed that imperfect credibility can raise the output costs of disinflation. Also, Sargent (1999) argued that the decline of US inflation persistence during the 90's has been associated with an increase in the credibility of monetary policy, in the sense that inflation expectations have been anchored at a low level and so they are unlikely to adjust to temporary increases in the inflation rate. Erceg and Levin (2003), in another influential paper, studied the episode of the Volcker disinflation in the US using a model in which agents learn about the ultimate intentions of the central bank. The paper calibrates a standard staggered contracts Neokeynesian model to the US economy and finds that the cost of the Volcker disinflation was 1.6 percentage points for each percentage point reduction in the inflation rate (2). This number is similar to other results found in the literature for the US. The authors show that their results are consistent with the idea that most of the inflation persistence found in the US inflation data is attributable to lack of credibility rather than adaptive expectations. Later, Kozicki and Tinsley (2005) examined the financial market implications of shifts in the inflation target. Using a time-series model of the term structure, they showed that failure to account for imperfect policy credibility may explain empirical rejections of the expectations hypothesis of the term structure of interest rates.

This paper asks whether lack of credibility in the central bank's inflation target could have played an important role in explaining the persistence of inflation observed in the Colombian data at business cycle frequencies. To that end, we define the concept of persistence and use an econometric model to measure it. The econometric model is able to capture the low frequency fluctuations of the inflation rate and confirms the results found by Echavarria et al. (2010a) and Echavarria et al. (2010b): the conventional measure of inflation persistence has not changed significantly in the last decade. Despite this, our results allow us to infer (heuristically) that the importance of persistent shocks to inflation relative to transitory shocks has diminished.

Our main contribution is that we are able to identify, through an economic model, the sources behind inflation persistence. We follow Erceg and Levin (2003) and use their imperfect information model to understand them. We compare it against the conventional Neokeynesian model with inflation indexation. Following Schorfheide (2000), we use Bayesian analysis to discriminate between them. We compute the posterior odds and find that the Colombian data supports the lack of credibility model.

We also use the model to estimate the speed at which agents learn about the ultimate intentions of the Colombian central bank. We find that credibility has been higher since the central bank implemented the inflation targeting strategy at the end of 1999. In addition, we estimate how conventional estimates of the monetary policy rule change when the central bank lacks full credibility in its commitment to reduce inflation. To our knowledge, this joint estimation of the monetary policy rule under imperfect credibility is another new result for Colombia.

In light of these results, we proceed to calculate the sacrifice ratio implied by the imperfect credibility model. The estimated sacrifice ratio (0.83%) for the full sample is in line with those estimated previously in the literature by Gomez and Julio (2000), Reyes (2003), and Sarmiento and Ramirez (2005), but higher than those obtained by Hamann, Julio, Restrepo, and Riascos (2005) for Colombia, and Hofstetter (2007) for the average of 18 Latin American countries in a 30-year sample.

The rest of the paper proceeds as follows: in the next section we describe the main facts about inflation persistence in Colombia for the period 1990-2010. Section II describes the model, Section III briefly presents the estimation procedure and Section IV reports the results. In Section V we compare the two models, and then we compute the sacrifice ratios under imperfect credibility. The last section concludes.

I. Our measure of inflation persistence

Standard measures of persistence such as the sum of the autoregressive coefficients, the spectrum at zero frequency and the half life are all concepts that assume convergence to a constant mean (3). Marques (2004) argues that measures of inflation persistence should be based on a time-varying mean, as it may reflect exogenous factors such as inflation drivers and/or the inflation target. In fact, Levin and Piger (2004), Corvoisier and Mojon (2005) and Ciccarelli and Mojon (2005) show that persistence in some European countries has been stable when computed over small samples or when the mean of inflation is allowed to change. Furthermore, Echavarria et al. (2010a) find similar evidence for Colombia. In these papers inflation persistence is measured as the sum of the autoregressive coefficients in a linear model that allows for breaks in mean. The number of breaks and the break dates are estimated using either Bai and Perron (1998) or Altissimo and Corradi (2003) (4).
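As a concrete illustration of the conventional measure, the sum of the autoregressive coefficients can be computed by OLS. The sketch below is an illustration under assumed simulated data, not the code used in any of the cited studies; it recovers the persistence of a simulated AR(1) series with autoregressive coefficient 0.6, the value Birchenall reports for Colombian CPI inflation.

```python
import numpy as np

def ar_persistence(pi, p=4):
    """Sum of AR coefficients from an OLS fit of
    pi_t = c + a_1*pi_{t-1} + ... + a_p*pi_{t-p} + e_t."""
    y = pi[p:]
    X = np.column_stack([np.ones(len(y))] +
                        [pi[p - k:-k] for k in range(1, p + 1)])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1:].sum()   # persistence measure = a_1 + ... + a_p

# Simulated AR(1) inflation deviations with true coefficient 0.6
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.6 * x[t - 1] + rng.normal()
print(round(ar_persistence(x), 2))   # close to 0.6
```

An AR(4) is fitted here for illustration; in practice the lag length would be chosen by an information criterion.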

In line with this strand of the empirical literature, to characterize the evolving changes in the mean of inflation, we use the following state-space model:

$$\begin{aligned}
\pi_t &= \mu_t + x_t \\
\mu_t &= \mu_{t-1} + \beta_{t-1} + v_{1t} \\
\beta_t &= \beta_{t-1} + v_{2t} \\
x_t &= \rho x_{t-1} + v_{3t}
\end{aligned} \qquad (1)$$

with $v_{it} \sim N(0, \sigma_i^2)$, $i = 1,2,3$, and $E[v_{it} v_{js}] = 0$ for $i \neq j$ and $t \neq s$. Equation (1) decomposes the inflation process into an evolving mean component and the fluctuation around it. The latter component is defined by the stationary AR(1) process $x_t = \rho x_{t-1} + v_{3t}$, where $\rho \in (-1,1)$ serves as a persistence measure. The trend component is given by $\mu_t$ and its specification resembles the standard local linear trend model. Model (1) can be easily estimated using the Kalman filter and standard error decompositions. See Harvey (1990), West and Harrison (1999) and Durbin and Koopman (2001) for details. The advantage of (1) compared to other approaches is that trend and persistence are modeled simultaneously rather than sequentially (5).
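To make the state-space estimation concrete, the following minimal sketch (not the authors' code; the innovation variances and the simulated data are assumptions for illustration) evaluates the Gaussian likelihood of model (1) with a hand-rolled Kalman filter and recovers $\rho$ by a grid search.

```python
import numpy as np

def kf_loglik(y, rho, q=(0.01, 0.001, 1.0)):
    """Gaussian log-likelihood of model (1) via the Kalman filter.
    State: (mu_t, beta_t, x_t); observation: pi_t = mu_t + x_t.
    The innovation variances in q are illustrative, not estimates."""
    T = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, rho]])
    Z = np.array([[1.0, 0.0, 1.0]])
    Q = np.diag(q)
    a, P = np.zeros(3), np.eye(3) * 1e4   # vague initial state
    ll = 0.0
    for obs in y:
        v = obs - (Z @ a)[0]              # one-step-ahead forecast error
        F = (Z @ P @ Z.T)[0, 0]           # forecast error variance
        K = (P @ Z.T)[:, 0] / F           # filtering gain
        ll += -0.5 * (np.log(2 * np.pi * F) + v * v / F)
        a, P = a + K * v, P - np.outer(K, (Z @ P)[0])
        a, P = T @ a, T @ P @ T.T + Q     # predict next state
    return ll

# Simulate a shifting mean plus an AR(1) cycle with true rho = 0.7,
# then recover rho by maximizing the likelihood over a grid.
rng = np.random.default_rng(1)
mu, x, y = 0.0, 0.0, []
for _ in range(400):
    mu += rng.normal(scale=0.1)           # slowly evolving mean
    x = 0.7 * x + rng.normal()            # transitory component
    y.append(mu + x)
grid = np.linspace(0.0, 0.95, 20)
rho_hat = grid[np.argmax([kf_loglik(np.array(y), r) for r in grid])]
print(rho_hat)                            # near 0.7
```

Because the trend and the cycle are filtered jointly, the estimated $\rho$ refers only to deviations around the moving mean, which is the point of specification (1).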

We use alternative indexes to measure inflation. One is the percentage change of the seasonally adjusted quarterly Consumer Price Index (CPI). In addition to CPI inflation, we present results for the following inflation rates: $\pi_{CPI\text{-}NF}$, which excludes food from the CPI; $\pi_{CPI\text{-}T}$ and $\pi_{CPI\text{-}NT}$, which include only traded and non-traded goods, respectively; $\pi_{CPI\text{-}R}$, with only regulated goods; and $\pi_{CPI\text{-}B}$, which excludes food and regulated prices from the CPI. The sample consists of quarterly data for the period 1988:1 to 2010:4.

We report the results in Table 4 and Figures 2 to 3. These results confirm that most measures of inflation display an important amount of persistence at the business cycle frequency. The only exceptions are the inflation of regulated goods, for which the estimated trend component follows observed inflation closely, so deviations from trend quickly revert to the mean, and the non-traded sector, which displays low persistence compared with the other measures (6). Notwithstanding this, most measures of inflation display high persistence.

To have an indication of how much inflation persistence has changed in the last years, we perform a one-quarter rolling-window estimation of the parameter $\rho$ in the stationary AR(1) process $x_t = \rho x_{t-1} + v_{3t}$ of $\pi_{CPI}$, starting in the third quarter of 1999 and ending in the fourth quarter of 2010. This period covers the inflation targeting regime. Figure 1 shows the result.
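A rolling-window version of the AR(1) estimation can be sketched as follows. This is an illustration on simulated data, and the 40-observation window length is an assumption, since the text does not specify one.

```python
import numpy as np

def rolling_rho(x, window=40):
    """AR(1) coefficient of x re-estimated by OLS over a moving window,
    advancing one observation (one quarter) at a time."""
    out = []
    for s in range(len(x) - window):
        seg = x[s:s + window]
        y, lag = seg[1:], seg[:-1]
        lag_c = lag - lag.mean()
        out.append(float(lag_c @ (y - y.mean()) / (lag_c @ lag_c)))
    return np.array(out)

# Simulated stationary deviations with true rho = 0.5
rng = np.random.default_rng(2)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.5 * x[t - 1] + rng.normal()
estimates = rolling_rho(x)
print(round(float(estimates.mean()), 2))  # a bit below 0.5 (small-sample bias)
```

In short windows the OLS estimate of $\rho$ is biased downward, which is one reason rolling estimates are read as indicative rather than as precise measures.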

[FIGURE 1 OMITTED]

We confirm the findings in Echavarria et al. (2010a) and Echavarria et al. (2010b), which show that inflation persistence, measured as the component defined by the autoregressive coefficient, has remained roughly constant. However, it is worth taking a look at the right panels of Figures 1 to 2. A visual inspection of these graphs shows that the relative importance of permanent shocks to CPI inflation has declined (relative to transitory shocks), in particular for the period 2001-2007. Measuring this relative volatility accurately is very difficult. Thus, if we take this graphical analysis as a heuristic measure of persistence, we can cautiously say that inflation persistence has somewhat declined.

In the next section we use a model in which lack of credibility plays a key role in explaining inflation dynamics at the business cycle frequency.

[FIGURE 2 OMITTED]

[FIGURE 3 OMITTED]

II. The model

Following Rabanal and Rubio-Ramirez (2005), we consider a standard closed economy Neokeynesian model commonly used in many central banks. The main elements of the model are: a Phillips curve, an IS curve and a monetary policy rule, described through equations (2) to (4):

$$\pi_t = \beta E_t \pi_{t+1} + \lambda\, mc_t + u_t \qquad (2)$$

$$y_t = E_t y_{t+1} - \sigma \left(i_t - E_t \pi_{t+1} + E_t g_{t+1} - g_t\right) \qquad (3)$$

$$i_t = \gamma_i i_{t-1} + (1 - \gamma_i)\left[\gamma_\pi \left(\pi_t - \bar\pi_t\right) + \gamma_y y_t\right] + z_t \qquad (4)$$

where $\pi_t$ is the inflation rate, $\bar\pi_t$ is the inflation target, $mc_t$ is the real marginal cost and $i_t$ is the nominal interest rate. The variables $g_t$ and $z_t$ are a preference and a policy shock, respectively, which we describe later. An important element of the model is the variable $u_t$, which represents the present value of private agents' error when forecasting future inflation and distinguishes this model from a standard Neokeynesian model. As we will see later, this variable captures the deviation of the Phillips curve under imperfect information from the one under perfect information. The expectation operator $E_t$ denotes the rational expectation of private agents if they use all available information at time $t$. The slope of the Phillips curve is

$$\lambda = \frac{(1-\alpha)(1-\beta\theta)(1-\theta)}{\theta\left(1+\alpha(\epsilon-1)\right)}$$

where $\alpha$ is the share of the labor factor in production, $\theta$ the probability of keeping prices fixed during the period, $\epsilon$ the elasticity of substitution between slightly differentiated types of goods and $\beta \in (0,1]$ the discount factor. $\sigma > 0$ is the elasticity of intertemporal substitution and $\gamma_\pi$, $\gamma_y$ and $\gamma_i$ measure the degree of responsiveness of the monetary authority to deviations from target, the output gap and the past interest rate, respectively.

The rest of equations of the model describe the technology, the marginal cost, the marginal rate of substitution and the real wage:

$$y_t = a_t + (1 - \alpha)\, n_t \qquad (5)$$

$$mc_t = w_t + n_t - y_t \qquad (6)$$

$$mrs_t = \frac{1}{\sigma}\, y_t + \eta\, n_t - g_t \qquad (7)$$

$$w_t = mrs_t \qquad (8)$$

where $y_t$ is output, $a_t$ is a productivity shock, $n_t$ is the number of hours worked, $w_t$ is the real wage per hour and $mrs_t$ is the marginal rate of substitution. Finally, $\eta$ is the inverse of the elasticity of labor supply to the real wage.

The exogenous variables of the model evolve according to the following stochastic processes:

$$a_t = \rho_a a_{t-1} + \epsilon^a_t$$

$$g_t = \rho_g g_{t-1} + \epsilon^g_t$$

$$z_t = \epsilon^i_t$$

where each of the innovations $\epsilon^j_t$ follows a normal distribution with zero mean and standard deviation $\sigma_j$ for $j = a, g, i$. We assume that the innovations are uncorrelated with each other.

When information about the inflation target is perfect, $\bar\pi_t$ is known for all $t$. The interesting case is when information is not perfect. As proposed in Erceg and Levin (2003), $\bar\pi_t$ varies over time due to a combination of a white noise shock, $\epsilon^q_t$, and a shock $\epsilon^p_t$ with persistent effects on the inflation target. The central bank's reaction function is observable to agents, but the underlying components of the inflation target are not. Therefore, private agents must solve a signal extraction problem to infer the components of $\bar\pi_t$.

In this model, the central bank's inflation target is the sum of a constant steady-state inflation rate $\bar\pi$ and two zero-mean stochastic autoregressive components, $\pi^p_t$ and $\pi^q_t$. The former is assumed to have an autoregressive root close to unity while the latter is assumed to have a much smaller autoregressive root. That is, the inflation target evolves according to the following process:

$$\bar\pi_t - \bar\pi = \left(\pi^p_t - \bar\pi\right) + \pi^q_t \qquad (9)$$

where the time-varying components follow the first order vector autoregression:

$$\begin{bmatrix} \pi^p_t - \bar\pi \\ \pi^q_t \end{bmatrix} = \begin{bmatrix} \rho_p & 0 \\ 0 & \rho_q \end{bmatrix}\begin{bmatrix} \pi^p_{t-1} - \bar\pi \\ \pi^q_{t-1} \end{bmatrix} + \begin{bmatrix} \epsilon^p_t \\ \epsilon^q_t \end{bmatrix} \qquad (10)$$

We can write equations (9) and (10) in state-space form by defining $Z_t = \left[\pi^p_t - \bar\pi,\ \pi^q_t\right]'$, $F = \mathrm{diag}(\rho_p, \rho_q)$ and $H = [1, 1]$. In particular, the state equation

$$Z_t = F Z_{t-1} + \epsilon_t$$

represents equation (10).

Households and firms are assumed to use optimal filtering to solve this signal-extraction problem; this requires the inflation-target innovations $\epsilon^p_t$ and $\epsilon^q_t$ to be mutually uncorrelated, with variances $v_p$ and $v_q$ respectively, and to be uncorrelated with any other shocks in the economy. With these assumptions, private agents can use the Kalman filter to obtain optimal estimates of the unobserved components through the following recursion:

$$E_t Z_t = F E_{t-1} Z_{t-1} + L_{gain}\left(\bar\pi_t - H F E_{t-1} Z_{t-1}\right) \qquad (11)$$

where $K_{gain} = F L_{gain}$ is the Kalman gain matrix and $L_{gain}$ determines how agents respond to the forecast error, $\bar\pi_t - H F E_{t-1} Z_{t-1}$, by updating their estimates of the underlying components of the inflation target. Therefore, given the current estimate $E_t Z_t$ of these components, the optimal forecast of the inflation target $j$ periods ahead is given by:

$$E_t \bar\pi_{t+j} = \bar\pi + H F^j E_t Z_t$$

For simplicity, Erceg and Levin (2003) assume $\rho_q = 0$ and $\rho_p = 1$. Thus, households' and firms' expectations of the future inflation target depend only on the constant $\bar\pi$ and the expectation of the highly persistent component of the target. Specifically, the expected persistent component of the inflation target evolves according to:

$$E_t \pi^p_t = E_{t-1} \pi^p_{t-1} + \kappa\left(\bar\pi_t - E_{t-1}\pi^p_{t-1}\right)$$

So, agents update their assessment of the persistent component of the inflation target by the product of the forecast error and the Kalman gain parameter, $\kappa$.

Returning to Equation (2), $u_t$ is the present value of the forecast error of private agents in the prediction of future inflation, that is:

$$u_t = \beta\left(\pi_{t+1} - E_t \pi_{t+1}\right) \qquad (12)$$

where $E_t \pi_{t+1}$ is the rational forecast of inflation at $t+1$ given all information available to private agents at time $t$, obtained by the optimal filtering process given by Equation (11). By replacing (12) in (2) we can see that in the case of $u_t = 0$ we obtain the standard Neokeynesian Phillips curve, representing the case of perfect information. If there are discrepancies between private agents' expectations under imperfect information and perfect information, then $u_t \neq 0$. So, $u_t$ will contribute to inflation persistence when private agents do not have perfect information about the evolution of the inflation target.

We can use Equations (12) and (2)-(4) to derive the following expression for the evolution of [u.sub.t]:

$$u_t = (1 - \kappa)\, u_{t-1} + (1 - \kappa)\, \epsilon^c_t$$

where $\kappa$ is the Kalman gain parameter from Equation (11) that determines the speed at which agents learn to distinguish between the two components of the inflation target, and $\epsilon^c_t$ is a normally distributed zero-mean shock with standard deviation $\sigma_c$, associated with the shocks to the permanent component of the inflation target. Notice that in the case of full information, $\kappa = 1$, agents learn at the highest possible rate, implying $u_t = 0$ and therefore $E_t \pi_{t+1} = \pi_{t+1}$.
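The updating rule for the perceived persistent target can be illustrated numerically. The sketch below uses illustrative numbers (the target path and the "fast" gain are assumptions; 0.19 anticipates the posterior-mean speed of learning reported later) to trace how slowly agents with a low Kalman gain absorb an unannounced disinflation.

```python
def perceived_target(target_path, kappa):
    """Agents' estimate of the persistent target component, updated as
    E_t pi^p = E_{t-1} pi^p + kappa * (pibar_t - E_{t-1} pi^p)."""
    est = target_path[0]          # assume agents start with the target known
    path = []
    for pibar in target_path:
        est += kappa * (pibar - est)
        path.append(est)
    return path

# A stylized disinflation: the (unobserved) persistent target drops 20% -> 5%
target = [20.0] * 4 + [5.0] * 40
slow = perceived_target(target, kappa=0.19)    # low speed of learning
fast = perceived_target(target, kappa=0.80)    # near-full credibility
print(round(slow[10], 1), round(fast[10], 1))  # -> 8.4 5.0
```

Seven quarters after the shift, the low-gain agents still perceive a target more than three points above the true one, which is exactly the mechanism through which $u_t$ generates inflation persistence.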

Of particular interest is the value of the learning parameter $\kappa$. We are interested in assessing the information contained in the data about the speed at which agents have learned during the disinflation period, using a standard Neokeynesian model augmented with learning about the inflation target. Unlike Erceg and Levin (2003), where the parameter $\kappa$ is calibrated from survey data, we estimate it using this simplified version of their model and Bayesian methods. The advantages of estimating models using a Bayesian approach are discussed formally in Lubik and Schorfheide (2003). A review of Bayesian tools for macroeconomists is presented in An and Schorfheide (2007).

III. Bayesian estimation

One of the advantages of estimating economic models using a Bayesian approach is that we can incorporate additional information on parameters through the use of priors. To perform the Bayesian estimation of each model we follow Schorfheide (2000) and proceed in five steps, which we summarize briefly. First, for a given set of parameters, we solve the model using Klein's (2000) method to find the state transition equation. The solution defines the way in which the system evolves around the deterministic steady state. The state-space representation is completed by adding a measurement equation to the model dynamics. The next step consists of computing the likelihood through Kalman filtering and combining it with the prior distribution of the parameters to obtain the posterior density. Draws from the posterior density are obtained using the random walk Metropolis-Hastings algorithm, as described in Schorfheide (2000). The algorithm is started at the posterior mode, or some nearby point with high probability density, found by numerical optimization. In this section we report the data used in the estimation, our priors and estimation results.
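The posterior simulator in the final step can be sketched in a few lines. The example below is a generic random-walk Metropolis-Hastings sampler applied to a toy one-parameter posterior (a normal density centered at 1.5, echoing the Taylor-rule prior mean used below); it is illustrative only and not the paper's estimation code.

```python
import math, random

def rw_metropolis(log_post, x0, scale, n):
    """Random-walk Metropolis: propose x' = x + scale*N(0,1) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    x, lp = x0, log_post(x0)
    draws, accepted = [], 0
    for _ in range(n):
        prop = x + scale * random.gauss(0, 1)
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
        draws.append(x)
    return draws, accepted / n

# Toy posterior: N(1.5, 0.25^2), e.g. a single policy-rule coefficient
random.seed(0)
log_post = lambda g: -0.5 * ((g - 1.5) / 0.25) ** 2
draws, acc_rate = rw_metropolis(log_post, x0=1.0, scale=0.3, n=50_000)
burned = draws[len(draws) // 2:]   # discard the first half as burn-in
print(round(sum(burned) / len(burned), 2), round(acc_rate, 2))
```

As in the paper, only half the draws are kept, and the proposal scale is tuned so that the acceptance rate falls in a moderate range.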

A. Data

We seek to explain the behavior of inflation, output, the nominal interest rate and real wages. We use quarterly HP-detrended data from 1990:1 to 2010:4. As a proxy for the nominal interest rate we use the interest rate on 90-day certificates of deposit. Our inflation measure is the quarterly (annualized) growth rate of the CPI. Output is measured as real GDP and real wages are measured using the manufacturing industry real wage index from Banco de la Republica.

B. Priors

Let $d_t = (\pi_t, y_t, i_t, w_t)$ denote the observed data and define the vector of parameters to be estimated as $\Theta$, which collects $\theta$, $\sigma$, $\eta$, the policy-rule coefficients, the shock parameters and $\kappa$. We impose strong priors on $\beta$, $\alpha$ and $\epsilon$. We set $\beta = 0.98$, which implies a real annual return close to 4%. To replicate the labor factor compensation share in real GDP we set $\alpha = 0.36$. The elasticity of substitution between the goods produced by intermediate firms, $\epsilon$, is set to 6, which is a standard value in the literature (7).

As explained in Rabanal and Rubio-Ramirez (2005), there is an identification problem in the model between the probability of adjusting prices and the elasticity of substitution. That is, $\theta$ and $\epsilon$ cannot be identified separately. To circumvent the identification problem we choose to estimate $\theta$ for a given markup, since the estimated parameter tells us about the implicit frequency at which firms adjust prices in Colombia. Several studies, including Misas, Lopez, and Parra (2009) and Bonaldi, Gonzalez, and Rodriguez (2010), suggest that Colombian firms set prices every one or two quarters; we choose a beta prior distribution for $\theta$ with mean 0.36, which is the value estimated in the baseline model of Bonaldi et al. (2010) and implies that firms set prices every 4 or 5 months, on average.

The prior distributions for the rest of the parameters in the vector $\Theta$ are reported in Table 4. For the Taylor rule coefficients we use the priors that are common in the literature: $\gamma_\pi = 1.5$ and $\gamma_y = 0.125$. We use a normal distribution for both, with standard deviations of 0.25 and 0.125, respectively. There is little evidence about the elasticity of labor supply to the real wage in Colombia; we use the estimates obtained in Prada (2010) as priors for the inverse of the labor supply elasticity ($\eta$) and the inverse of the intertemporal substitution elasticity ($\sigma$). Evidence for the US shows that the value of $\sigma$ is higher than 1 but not much larger than 2; evidence for emerging markets, however, suggests that it should be between 2 and 5. For all autoregressive parameters we use a uniform prior on $[0,1)$ and, for all standard deviations of shocks, an inverse gamma distribution with mean 0.01.

As for the credibility parameter, results for the US suggest that $\kappa$ is around 0.13. Nevertheless, we use a uniform distribution as a prior, which accounts for the lack of evidence regarding the value of this parameter in emerging countries, and in Colombia particularly.

IV. Results

We use a Random Walk Metropolis-Hastings algorithm to draw four chains of 200,000 draws from the posterior distribution of $\Theta$ and construct the estimates for each parameter using half the draws of each chain. The acceptance rates for each chain were between 0.3 and 0.4. We use the methods developed by Brooks and Gelman (1998) to monitor the convergence of the posterior draws (8). Estimation results are shown in Table 5.

The data supports the idea of lack of credibility, as $\kappa$, the speed at which agents learn in the economy, is different from one and closer to zero. So, we can reject the hypothesis of perfect credibility. Our findings show that the posterior mean of the speed of learning is $\kappa = 0.19$, while the estimated probability of keeping prices fixed during a quarter is 0.29, lower than the estimate obtained for the full-credibility model (0.37). We argue that this is due to imperfect credibility, which captures the persistence of inflation more closely than the standard model.

For the policy rule parameters, we find a posterior mean response to inflation of $\gamma_\pi = 1.97$, to the output gap of $\gamma_y = 0.14$ and a smoothing coefficient of $\gamma_i = 0.11$. The response to inflation differs from our priors, which were set according to the Taylor principle. Our results show an active central bank when responding to deviations from long-run inflation and a passive one when responding to output. This result is in line with recent theoretical developments of Schmitt-Grohe and Uribe (2004), who find that social welfare increases when the central bank only responds to inflation. However, Bernal (2002), using a classical approach to estimate Taylor rules in a partial equilibrium model for Colombia, finds that $\gamma_\pi = 1.34$, $\gamma_y = 0.19$ and $\gamma_i = 0.10$. We interpret our results as supporting the idea that, given the lack of credibility of monetary policy during a large part of the sample, the response of the central bank to inflation has to be stronger than in environments with higher credibility.

The 90% highest posterior density (HPD) interval for the intertemporal substitution coefficient ($\sigma$) is (4.27, 6.44). The point estimate is 5.38, which lies at the upper end of the estimates obtained for emerging market economies in the international macroeconomics literature. The high estimates of this coefficient reflect the higher variability of the macroeconomic time series typically found in emerging markets.

We also obtain an estimate of the labor supply elasticity. The posterior mean of $\eta$ is 3.4, implying a labor supply elasticity of 0.29, which is in line with the results of Prada and Rojas (2010), who estimate the Frisch elasticity in Colombia for the period 2001-2006, obtaining a value of 0.31.

The rest of the parameters are the autocorrelations and standard deviations of the shocks (productivity, preferences, monetary policy and the target shock). There is a significant amount of persistence in the productivity and preference shocks: the posterior means of their autocorrelation coefficients are 0.48 and 0.65, with standard deviations of 0.72 percent and 4.86 percent, respectively. The posterior means of the volatility of the interest rate rule shock and of the inflation target shock are 2.8 percent and one percent, respectively. We attribute the high value of the target shock to the period of high and volatile inflation that characterized the first part of our sample.

[FIGURE 4 OMITTED]

We perform an additional exercise to assess the impact of inflation targeting on the estimated speed at which agents learn about the inflation target. We split the sample into two periods: one from 1990 to 2000 and the other from 2001 to 2010. The break corresponds (approximately) to the date at which we think inflation targeting was implemented (the end of 1999).

[FIGURE 5 OMITTED]

The graph shows a faster speed of learning for the post-inflation-targeting sample. Also notice that the full-sample estimate of $\kappa$ is closer to the value obtained for the first sample, the years before inflation targeting. Most of the information in the data appears to be concentrated in that part of the sample. This suggests that there have been credibility gains in the last ten years.

The previous result is interesting and intuitive. We interpret the faster learning rate, measured by a higher Kalman gain, as increased credibility because agents "learn" to distinguish between permanent and transitory shocks at a faster pace. In an inflation targeting regime the central bank announces its inflation target to guide agents' inflation expectations. Under full credibility, observed inflation converges faster to the inflation target; under imperfect credibility, agents have to "discover" the movements of the inflation target. Nonetheless, the fact that inflation persistence has fallen, albeit not significantly, suggests that the impact of those gains on inflation has not been large.

[FIGURE 6 OMITTED]

Yet, the model reveals one important fact of the Colombian business cycle. A quick inspection of the smoothed shocks obtained in our estimation, displayed in figure 6, shows that the variability of all shocks is lower since the end of 2000, close to the date in which the inflation targeting regime was implemented (9). In this sense we could argue that the implementation of IT in Colombia is associated with a higher degree of macroeconomic stability. This result is consistent with our finding of the previous section that the relative importance of permanent shocks has diminished in the last few years.

In the next section we compare the lack of credibility model to a standard Neokeynesian model with ad-hoc inflation indexation.

V. Model comparison and inflation persistence

The Neokeynesian model is a standard model in many central banks. To induce inflation persistence, many modelers introduce ad-hoc indexation but keep the perfect information assumption. We estimate such a model by replacing equation 2 with:

[[pi].sub.t] = (v/(1 + [beta]v)) [[pi].sub.t-1] + ([beta]/(1 + [beta]v)) [E.sub.t][[pi].sub.t+1] + [lambda] [mc.sub.t]

where [lambda] = (1 - [theta])(1 - [beta][theta])/([theta](1 + [beta]v)) and v is the degree of price indexation to past inflation (10). We use the same priors for the estimated parameters in order to compare the performance of the imperfect credibility model against the conventional model. Table 5 reports the results of the estimation of the Neokeynesian model with price indexation.

To compare the two models we perform a posterior odds test. The posterior odds ratio is the ratio of the posterior model probabilities. Consider two models, [M.sub.p] and [M.sub.i], with associated parameter vectors [[theta].sub.p] and [[theta].sub.i], where p refers to the model with perfect information and i refers to the model with imperfect information. Both models were estimated using the sample [Y.sub.T]. The fit of each model m = p,i is given by its marginal density of the data, p([Y.sub.T]|[M.sub.m]). We compute the marginal density of the data conditioned on the model:

p([Y.sub.T]|[M.sub.m]) = [integral] p([Y.sub.T]|[[theta].sub.m], [M.sub.m]) p([[theta].sub.m]|[M.sub.m]) d[[theta].sub.m]

by integrating out the parameters [[theta].sub.m] from the posterior kernel. Using Bayes theorem, we can compute the posterior distribution over models as:

p([M.sub.m]|[Y.sub.T]) = p([Y.sub.T]|[M.sub.m]) p([M.sub.m])/[[summation].sub.m'] p([Y.sub.T]|[M.sub.m']) p([M.sub.m'])

where p ([M.sub.m]) is the prior that we have on each of the models. So, the posterior odds ratio is:

p([M.sub.i]|[Y.sub.T])/p([M.sub.p]|[Y.sub.T]) = [p([Y.sub.T]|[M.sub.i])/p([Y.sub.T]|[M.sub.p])] x [p([M.sub.i])/p([M.sub.p])]

If we assign the same prior probability to each model, the posterior odds ratio reduces to the ratio of the marginal likelihoods:

[F.sub.p,i] = p([Y.sub.T]|[M.sub.i])/p([Y.sub.T]|[M.sub.p])

also known as the Bayes factor. The larger the Bayes factor, the stronger the support for model [M.sub.i].

We find that p([Y.sub.T]|[M.sub.i]) = exp(940.1) while p([Y.sub.T]|[M.sub.p]) = exp(746.7), implying a log-Bayes factor of around 193, so the odds are strongly in favor of the imperfect credibility model (11).
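Under equal model priors, the computation above amounts to differencing the two log marginal data densities. A minimal sketch using the values reported in the text:

```python
import math

def posterior_odds(log_ml_i, log_ml_p, prior_i=0.5, prior_p=0.5):
    """Log posterior odds of M_i over M_p from log marginal data densities.
    With equal model priors this is just the log Bayes factor."""
    log_bf = log_ml_i - log_ml_p
    return log_bf, log_bf + math.log(prior_i / prior_p)

# Log marginal densities reported in the text for the two models:
log_bf, log_odds = posterior_odds(940.1, 746.7)
print(round(log_bf, 1))
```

Working in logs avoids overflow: exp(940.1) is far beyond floating-point range, but the log-Bayes factor of about 193.4 is computed directly.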

One interesting result of this exercise is the impact of the imperfect information assumption on the estimated degree of stickiness in the economy. According to the literature on inflation persistence, we can identify three sources of inflation persistence: extrinsic, intrinsic and expectations-driven. The first can be associated with the coefficient on the real marginal cost, [lambda], in the New Keynesian Phillips curve (NKPC), equation (2); the second with the lagged inflation term; and the third, in a full information model, with the inflation expectations term, while in an imperfect credibility model we can associate this degree of persistence with the parameter [kappa].

To analyze the degree of extrinsic persistence, the key parameter is the probability of keeping prices fixed during a quarter. Recall that in the imperfect information model this probability is 29 percent, implying price changes, on average, every 1.4 quarters. In the full information model with price indexation this parameter rises to 35 percent, implying price changes every 1.54 quarters. It seems that the assumption of price indexation coupled with full information tends to increase the degree of extrinsic persistence by lowering the responsiveness of inflation to the real marginal cost (12).
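The duration arithmetic follows from the Calvo assumption: if a price stays fixed each quarter with probability [theta], the expected time between price changes is 1/(1 - [theta]). A quick check (the 0.35 input for the full information model is inferred from the reported 1.54-quarter duration; Table 5 reports a posterior mean of 0.37, so take it as an approximation):

```python
def avg_price_duration(theta):
    """Expected quarters between price changes under Calvo pricing, where
    theta is the per-quarter probability that a price stays fixed."""
    return 1.0 / (1.0 - theta)

print(round(avg_price_duration(0.29), 2))  # imperfect credibility model
print(round(avg_price_duration(0.35), 2))  # full information model (inferred)
```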

The degree of intrinsic persistence is similar in both models. Although we did not estimate parameter [beta] in the imperfect credibility model, its value, obtained in the estimation of a full information model, is very similar.

Finally, expectations-driven persistence in the full information model is closely linked to the intrinsic one through the discount factor, while in the imperfect information model it is determined by the parameter [kappa]. Our results show that this is the main source of inflation persistence in Colombia: agents learn at a relatively slow pace. Compared to the US disinflation period, our estimated rate is little more than one-fourth of the rate at which agents learn in the US. We could attribute this result to the inflation targeting policy that the central bank follows in Colombia, which has helped anchor inflation expectations to a long-run inflation target.

The estimation of the policy rule is consistent with this idea. The responsiveness of the central bank to deviations of inflation from target is higher under imperfect credibility than under full information with price indexation. This means that the central bank has to exert more effort when it faces a lack-of-credibility problem than when it faces a price indexation problem. In the next section we use the model to estimate how costly it has been to disinflate without anchored expectations.

VI. Disinflation costs

How large have the disinflation costs been under imperfect credibility in Colombia? One way to answer this question is to use the estimated model and compute the "sacrifice" (in terms of the output gap) of reducing the inflation target by 100 basis points (bp). We also compute the effort of monetary policy in terms of the increase in the policy rate.

Under full credibility, a central bank can disinflate at little or no output cost. Once this assumption is relaxed, the disinflation cost depends on the degree of credibility of the monetary authority. In the model, agents learn gradually about the permanence of the target shock, so the speed at which agents learn provides a natural measure of the degree of credibility of the central bank. If agents learn quickly, the disinflation process resembles the perfect credibility case; the slower the speed of learning, the greater the output costs and the effort of monetary policy.
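Under constant-gain learning, the share of a permanent target shift that agents have internalized after t periods is 1 - (1 - [kappa])^t, so the estimated gain directly pins down how long a disinflation takes to be believed. A sketch using the gain of 0.19 estimated in Table 5 and, as an illustrative contrast, a hypothetical near-full-credibility gain of 0.80:

```python
def perceived_target_path(kappa, shift=-1.0, periods=12):
    """Belief about the inflation target after a permanent, unannounced
    shift, under constant-gain learning with gain kappa."""
    belief, path = 0.0, []
    for _ in range(periods):
        belief += kappa * (shift - belief)  # move a fraction kappa toward truth
        path.append(belief)
    return path

slow = perceived_target_path(kappa=0.19)  # gain estimated in Table 5
fast = perceived_target_path(kappa=0.80)  # hypothetical near-full credibility
# Share of the 100 bp target cut internalized after one year (four quarters):
print(round(slow[3] / -1.0, 2), round(fast[3] / -1.0, 2))
```

With the estimated gain, agents have internalized only a bit more than half of the target cut after a year, while a highly credible bank would have expectations almost fully realigned.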

Given the estimated speed of learning, we compute the macroeconomic effects of a 100 bp disinflation, focusing on the sacrifice ratio and the monetary policy effort. We measure the sacrifice ratio as the present value of the output gap during the disinflation period; to measure the monetary policy effort we compute the present value of the nominal interest rate gap during the same period. Both present values depend on the discount factor used. In the estimation of the model we used a discount factor of 0.98; we vary this parameter between 0.95 and 0.99 to check the sensitivity of our measure of the sacrifice ratio to the discount factor. Table 6 reports the sacrifice ratio and the monetary policy effort under alternative discount factors.
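Both summary statistics are plain discounted sums. The sketch below uses a made-up output gap path purely to illustrate the sensitivity to the discount factor; it is not the model's simulated path:

```python
def present_value(gaps, beta):
    """Discounted sum of a gap path with discount factor beta."""
    return sum(beta ** t * g for t, g in enumerate(gaps))

# Illustrative (made-up) output gap path during a disinflation, in percent:
output_gap = [-0.30, -0.25, -0.15, -0.08, -0.04, -0.02]
for beta in (0.95, 0.98, 0.99):
    print(beta, round(present_value(output_gap, beta), 3))
```

Since the gap path is negative throughout, a higher discount factor weighs the later losses more heavily and produces a larger measured sacrifice, which is the pattern reported in Table 6.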

In the benchmark estimation, a 100 bp shock to the inflation target requires the central bank to keep interest rates nearly 150 bp above average, generating a sacrifice of 83 bp in output. The effects of the disinflation shock take about three to five years to dissipate. In the model, a shock to the target of equal magnitude is not enough to reduce inflation by 100 bp; the central bank needs to exert additional effort to reduce inflation by 100 bp effectively.

This sacrifice ratio is in line with those estimated previously in the literature by Gomez and Julio (2000), Reyes (2003), and Sarmiento and Ramirez (2005), but higher than those obtained by Hamann et al. (2005) for Colombia and by Hofstetter (2007) for the average of 18 Latin American countries in a 30-year sample.

[FIGURE 7 OMITTED]

The differences with respect to these last two papers arise from the type of model and estimation techniques used. With respect to Hamann et al. (2005), it is well known that Neokeynesian models without strong ad-hoc persistence (such as the one used in that paper) exhibit small sacrifice ratios. With respect to Hofstetter (2007), as the author explains, the negative sacrifice ratios arise from a unique set of conditions that occurred in Latin America during the period 1990-2000.

The sensitivity of our result depends on the discount factor used to compute the present value of the cost in terms of output. Using a discount factor of [beta] = 0.95, the disinflation cost falls to 75 bp, and interest rates have to be 134 bp higher on average during the disinflation period.

More importantly, to gauge the impact of the degree of imperfect credibility on the sacrifice ratio, we can use the model to compute the disinflation cost under alternative values of [kappa], the speed-of-learning parameter. The larger the [kappa], the smaller the disinflation cost and the lower the effort of the central bank. Previous empirical evidence, such as Boschen and Weise (2001), finds that more credible disinflations are associated with smaller sacrifice ratios. Thus, the model used in this paper provides a rationale for why higher central bank credibility leads to lower output losses during disinflations.

[FIGURE 8 OMITTED]

VII. Final remarks

This paper asked whether lack of credibility in the central bank's inflation target could have played an important role in explaining the persistence of inflation observed in the Colombian data at business cycle frequencies. To answer that question we used a state-space model which decomposed the inflation process into a time-varying mean and the cyclical movement around it. The model was estimated using the Kalman filter; the results confirm the findings of Echavarria et al. (2010a) and Echavarria et al. (2010b) that inflation persistence has not changed significantly in the last decade. Despite this, we argue that our results allow us to infer (heuristically) that the importance of persistent shocks to inflation relative to transitory shocks has diminished. We take this as indirect evidence that inflation persistence has somewhat declined during the disinflation process.

More importantly, we were able to identify, using Erceg and Levin's (2003) model, the sources behind inflation persistence. We estimated and tested that model against the conventional Neokeynesian model with inflation indexation using Bayesian analysis and showed that the posterior odds support the lack of credibility model.

As a by-product of our estimation, we obtained estimates of the monetary policy rule of the central bank under imperfect credibility of the inflation target. We find that the central bank reacts more to inflation, reacts less to output, and smooths the policy rate less than in the standard Neokeynesian model under full credibility and ad-hoc inflation indexation.

We then obtained estimates of the speed at which agents learn about the ultimate intentions of the Colombian central bank. We found that credibility has been higher since the central bank implemented the inflation targeting strategy at the end of 1999. Thus, we interpret these results as evidence that the credibility of the central bank has increased and has reduced inflation persistence, but that those gains have been modest. Why this is so remains an important open question for future research.

Finally, we calculated the sacrifice ratio implied by the imperfect credibility model and found a value of 0.83%, in line with those previously estimated in the literature. The model presented here and our estimation techniques could also be used to perform counterfactual experiments for policy analysis, such as: what would the sacrifice ratio have been had the central bank's credibility been higher? If more transparency increases the central bank's credibility, policies that increase the bank's transparency could entail smaller losses in aggregate output during disinflation periods. The framework presented here sheds some light on how to answer and quantify this type of question. This is a topic for future research.

The model can also be extended and/or adapted along different avenues to answer different questions. One simple modification is to consider other non-observable variables, such as potential output. Usually, the economy is hit by productivity shocks that can be transitory or permanent; agents (as well as economic authorities) face the challenge of learning about these shocks and, consequently, of taking actions. Such an environment should display different outcomes and policy actions than those observed in economies in which agents can distinguish perfectly between transitory and permanent shocks. Another extension is to adapt the model to a small open economy and use it to answer a wide range of policy questions related, for instance, to the duration and/or observability of external shocks and to how their nature affects economic performance. We also leave these issues for future research.

This paper was received February 10, 2011, modified June 15, 2011 and finally accepted June 22, 2011.

References

1. ALTISSIMO, F., and CORRADI, V. (2003). "Strong rules for detecting the number of breaks in a time series", Journal of Econometrics, 117:207-244.

2. AN, S., and SCHORFHEIDE, F. (2007). "Bayesian analysis of DSGE models", Econometric Reviews, 26(2-4):113-172.

3. BAI, J., and PERRON, P. (1998). "Estimating and testing linear models with multiple structural changes", Econometrica, 66:47-78.

4. BALL, L. (1994). "What determines the sacrifice ratio?", in G. Mankiw (Ed.), Monetary policy (pp. 155-182). Chicago, Chicago University Press.

5. BERNAL, R. (2002). "Monetary policy rules in Colombia" (Documento cede 003251). Universidad de los Andes, Facultad de Economia.

6. BIRCHENALL, J. (1999). "La curva de Phillips, la critica de Lucas y la persistencia de la inflacion en Colombia," Archivos de Economia, 102, dnp.

7. BONALDI, P., GONZALEZ, A. y RODRIGUEZ, D. (2010). "Importancia de las rigideces nominales y reales en Colombia: un enfoque de equilibrio general dinamico y estocastico" (Borradores de Economia 591). Banco de la Republica.

8. BOSCHEN, J., and WEISE, C. (2001). "The ex-ante credibility of disinflation policy and the cost of reducing inflation", Journal of Macroeconomics, 23:323-347.

9. BROOKS, S., and GELMAN, A. (1998). "General methods for monitoring convergence of Iterative simulations", Journal of Computational and Graphical Statistics, 7:434-455.

10. BUITER, W. H., and MILLER, M. (1981). "Monetary policy and international competitiveness: The problems of adjustment", Oxford Economic Papers, 33(0):143-175.

11. CAPISTRAN, C., and RAMOS-FRANCIA, M. (2009). "Inflation dynamics in Latin America", Contemporary Economic Policy, 27:349-362.

12. CECCHETTI, S., and RICH, R. (1999). "Structural estimates of the U.S. sacrifice ratio", Research and Market Analysis Group. Federal Reserve Bank of New York.

13. CICCARELLI, M., and MOJON, B. (2010). "Global inflation", The Review of Economics and Statistics, 92(3):524-535.

14. CLARK, P. (1987). "The cyclical component of the U.S. economic activity", Quarterly Journal of Economics, 102:797-814.

15. CLARK, P. (1989). "Trend reversion in real output and unemployment", Journal of Econometrics, 40:15-32.

16. CORVOISIER, S., and MOJON, B. (2005). "Breaks in the mean of inflation: How they happen and what to do with them?" (Working Paper 451). European Central Bank.

17. DURBIN, J., and KOOPMAN, S. J. (2001). Time series analysis by state space methods, Oxford Statistical science series. Oxford, UK, Oxford University Press.

18. ECHAVARRIA, J. J., LOPEZ, E. y MISAS, M. (2010a). "La persistencia estadistica de la inflacion en Colombia" (Borradores de Economia 623). Banco de la Republica.

19. ECHAVARRIA, J. J., RODRIGUEZ, N. y ROJAS, L. E. (2010b). "La meta del Banco Central y la persistencia de la inflacion en Colombia" (Borradores de Economia 633). Banco de la Republica.

20. ERCEG, C., and LEVIN, A. (2003). "Imperfect credibility and inflation persistence", Journal of Monetary Economics, 50:915-944.

21. FUHRER, J. (2009). "Inflation persistence" (Working Paper 09-14). Federal Reserve Bank of Boston.

22. GOMEZ, J., and JULIO, J. M. (2000). "Transmission mechanisms and inflation targeting: The case of Colombia's disinflation" (Borradores de Economia 168). Banco de la Republica.

23. HAMANN, F., JULIO, J. M., RESTREPO, P., and RIASCOS, A. (2005). Costos de la desinflacion en un sistema de metas de inflacion en una economia pequena y abierta. Mexico, Cemla Publications.

24. HARVEY, A. C. (1990). Forecasting, structural time series models and the Kalman filter. Cambridge University Press, Cambridge, UK.

25. HOFSTETTER, M. (2007). "Disinflations in Latin America and the Caribbean: A free lunch?", Journal of Macroeconomics, 30:327-345.

26. KLEIN, P. (2000). "Using the generalized Schur form to solve a multivariate linear rational expectations model", Journal of Economic Dynamics and Control, 24:1405-1423.

27. KOZICKI, S., and TINSLEY, P. (2005). "What do you expect? Imperfect policy credibility and tests of the expectations hypothesis", Journal of Monetary Economics, 52(2):421-447.

28. LEVIN, A. T., and PIGER, J. M. (2004). "Is inflation persistence intrinsic in industrial economies?" (Working Paper 334). European Central Bank.

29. LUBIK, T., and SCHORFHEIDE, F. (2003). "Computing sunspot equilibria in linear rational expectations models", Journal of Economic Dynamics and Control, 28:273-285.

30. MARQUES, C. R. (2004). "Inflation persistence: Facts or arte-facts?" (Working Paper 371). European Central Bank.

31. MISAS, M., LOPEZ, E. y PARRA, J. C. (2009). "La formacion de precios en las empresas colombianas: evidencia a partir de una encuesta directa" (Borradores de Economia 569). Banco de la Republica.

32. PRADA, J. D. y ROJAS, L. E. (2010). La elasticidad de Frisch y la transmision de la politica monetaria en Colombia (chap. 13). Universidad Externado de Colombia, Bogota.

33. RABANAL, P., and RUBIO-RAMIREZ, J. (2005). "Comparing New Keynesian models in the Euro area: A Bayesian approach", Journal of Monetary Economics, 52:1151-1166.

34. REYES, J. D. (2003). "The cost of disinflation in Colombia-A sacrifice ratio approach-", Archivos de Economia, DNP, 243.

35. SARGENT, T. (1999). The conquest of American inflation. Princeton, Princeton University Press.

36. SARMIENTO, J. y RAMIREZ, A. (2005). "Los costos de la desinflacion en Colombia segun el modelo Buiter-Miller", Cuadernos de Economia, 24:129-159.

37. SCHMITT-GROHE, S., and URIBE, M. (2004). "Optimal simple and implementable monetary and fiscal rules", Journal of Monetary Economics, 54(6):1702-1725.

38. SCHORFHEIDE, F. (2000). "Loss function-based evaluation of DSGE models", Journal of Applied Econometrics, 15:645-670.

39. WEST, M., and HARRISON, J. (1999). Bayesian forecasting and dynamic models. New York, Springer-Verlag.

40. ZHANG, L. (2005). "Sacrifice ratio with long-lived effects", International Finance, 2:231-262.

(1) A crawling band was implemented during the early 1990s.

(2) Typically, staggered contracts models have been criticized for not being able to reproduce the inflation persistence observed in the data, and many modelers use tricks, like adding lags, to induce persistence. One of the main implications of Erceg and Levin's work is that inflation persistence is not only an inherent characteristic of the economy: it can also vary with the stability and transparency of the monetary policy regime.

(3) There are other approaches to measure inflation persistence, like the estimation of the autocorrelation function and unit root tests, to mention just two of them. Recently, in Colombia, Echavarria et al. (2010a) compare different measurements of statistical persistence and estimate the evolution of inflation and inflation gap persistence in Colombia for the period 1990-2010 using a regime switching model and a Kalman filter.

(4) Cogley, Primiceri, and Sargent (2007) and Cogley and Sbordone (2009) stress the importance, in econometric models of inflation, of recognizing the low-frequency movement of inflation, which they call "trend inflation".

(5) A similar model has been used by Clark (1987, 1989) to decompose US real GDP into trend and cycle.

(6) One explanation could be that price formation in the non-traded sector could be better anchored to the inflation target. The non-traded sector comprises mainly service-oriented businesses and construction firms. Real activity in this sector was depressed during the financial crisis and relative prices adjusted quickly. During the crisis the central bank tightened monetary policy to defend the exchange rate band, and so tighter monetary policy was associated with a significant real exchange rate depreciation (i.e., a collapse in the relative price of non-traded goods). We speculate that this event may have caused price setting behavior that puts an important weight on inflation expectations. This remains an open question.

(7) We use strong priors for p and a because the model does not have capital and so the likelihood does not have information for their estimation.

(8) The results of this exercise are available upon request.

(9) See figure 5. The inflation targeting regime starts at position 50 in the graph.

(10) We use an uninformative prior for the estimation of this parameter, as in Rabanal and Rubio-Ramirez (2005).

(11) We use the Laplace approximation to compute the posterior marginal density.

(12) To see this we draw from the distribution of the parameter [lambda] and find that its mode in the case of price indexation is 0.098, while in the case of imperfect credibility it is 0.274.

Andres Gonzalez G. **

Franz Hamann ***

* The views expressed in this document are those of the authors and not necessarily those of the Banco de la Republica. We would especially like to thank Angelo Gutierrez for his superb research assistance. We also thank two anonymous referees and the economists at the Second Monetary Policy Workshop in Latin America and the Caribbean on Monetary Policy, Uncertainty and the Business Cycle, in Lima, for their comments. Of course, any mistake in this paper is our responsibility.

** Head of the Department of Macroeconomic Modeling, Banco de la Republica, Colombia.

*** Adviser to the Governor, Banco de la Republica, Colombia. Corresponding author: fhamansa@banrep.gov.co.

Table 1. Recent estimates of inflation persistence in Colombia.

Birchenall (1999). Estimated persistence: 0.6. Sample: 1965-1996. Methodology: recursive estimation of the autoregressive component of CPI inflation using data from 1965 to 1996, as an additional exercise to analyze inflation dynamics in Colombia.

Capistran and Ramos-Francia (2009). Estimated persistence: 0.67. Sample: 2000-2006:06. Methodology: econometric estimation of inflation persistence for the 10 largest Latin American economies using univariate time-series methods and monthly data from 1980 to 2006; estimation is also carried out by sub-samples, selected by episodes of change in monetary policy regime. Reported persistence is the sum of autoregressive coefficients obtained using the 2000-2006 sub-sample.

Echavarria et al. (2010a). Estimated persistence: 0.34. Sample: 1990:1-2010:6. Methodology: estimation of inflation persistence in Colombia using monthly CPI inflation data between 1990 and 2010 and several econometric methods, including Markov-switching and state-space models. Reported persistence is the estimate from the MSIAH model.

Echavarria et al. (2010b). Estimated persistence: 0.31. Sample: 1979-2010. Methodology: an unobserved components model with regime switching is used to estimate persistence, as a second-order autoregressive process, and structural breaks for several inflation indexes using a quarterly sample from 1979 to 2010. Reported persistence is the mean of the sum of autoregressive coefficients for the 1989-1999 and 1999-2010 sub-samples.

Table 2. Credibility and inflation targeting in Colombia.

Year   Observed (1)   Expectation (2)   Target (3)   Mistake (1)-(3)   Surprise (1)-(2)   Anchoring (2)-(3)   Credibility
1997      17.68           18.45            18.0          -0.32             -0.77               0.45              --
1998      16.70           17.95            16.0           0.70             -1.25               1.95              --
1999       9.23           15.789           15.0          -5.77             -6.56               0.79              --
2000       8.75            9.89            10.0          -1.25             -1.14              -0.11             33.0
2001       7.65            8.85             8.0          -0.35             -1.20               0.85             46.9
2002       6.99            6.95             6.0           0.99              0.04               0.95             69.1
2003       6.49            6.58             5.5           0.99             -0.09               1.08             16.1
2004       5.50            6.13             5.5           0.00             -0.63               0.63             70.4
2005       4.86            5.78             5.0          -0.14             -0.92               0.78             79.0
2006       4.48            5.23             4.5          -0.02             -0.75               0.73             90.1
2007       5.69            4.50             4.0           1.69              1.19               0.50             25.9
2008       7.67            4.14             4.0           3.67              3.53               0.14              6.3
2009       2.00            4.65             5.0          -3.00             -2.65              -0.35             53.1
2010       3.17            5.22             3.0           0.17             -2.05               2.22             93.8

Note: "Expectation" refers to the end-of-year inflation forecast made by banks and brokers in January of each year, according to the Inflation Expectations Survey of the Banco de la Republica. "Credibility" refers to the percentage of surveyed people who believed (at the beginning of the year) that the inflation target would be met that year. Source: Banco de la Republica, Colombia.

Table 3. Sacrifice ratios for Colombia and Latin America.

Gomez and Julio (2000). SR estimate: 0.79%. Period: simulated scenario. Methodology: econometric estimation of a set of equations describing the transmission mechanisms of monetary policy for Colombia using quarterly data from 1990Q1-2000Q1; the sacrifice ratio is calculated as the accumulated loss in the output gap after a shock that permanently reduces annual inflation by 1%.

Reyes (2003). SR estimate: 0.89%. Period: 1991-2001. Methodology: identification of different disinflation periods in Colombia and estimation of the sacrifice ratio during each one using the standard methodology of Ball (1994) and the alternative methodologies of Zhang (2001) and Cecchetti and Rich (1999). The SR estimate corresponds to the result obtained using Zhang's method for the latest period analyzed.

Sarmiento and Ramirez (2005). SR estimate: 0.88%. Period: 1998-2003. Methodology: estimation of a monetary SVAR with short-run restrictions as in Buiter and Miller (1981). SR estimates are the average difference of the steady-state output gap with and without inflationary shocks for the period 1998-2003, obtained from the historical decomposition of estimated shocks.

Hamann et al. (2005). SR estimate: 0.04%. Period: simulated scenario. Methodology: simulation of a small open economy DSGE model calibrated for Colombia; the cost of reducing the steady-state inflation rate from 5.5% to 3% is calculated from the transition dynamics simulated after this permanent shock.

Hofstetter (2007). SR estimate: -0.57%. Period: 1990-2001. Methodology: identification of disinflation episodes between 1973 and 2000 for 18 Latin American countries and estimation of the average SR for each of the three decades in the sample using three alternative methodologies. Results show a negative disinflation cost for the 1990-2000 period under all methodologies; the result reported is the average estimate for this period under the LL&L methodology.

Note: The SR is the percentage by which current output has to fall from its long-run level due to a reduction of 1% in trend inflation.

Table 4. Estimated persistence for the different inflation rates.

        [[pi].sub.CPI]   [[pi].sub.CPI-T]   [[pi].sub.CPI-NT]   [[pi].sub.CPI-NF]   [[pi].sub.CPI-R]   [[pi].sub.CPI-B]
[??]         0.33              0.60               0.24                0.66               -0.02              0.84

Table 5. Prior and posterior distributions: Imperfect and full credibility.

                            Prior                           Imperfect credibility       Full credibility
Parameter            Distribution    Mean   Std dev      Mean     HPD-90 interval     Mean     HPD-90 interval
1/[sigma]            Gamma           2.3    0.5          5.38     (4.27-6.44)         5.05     (3.96-6.17)
[eta]                Gamma           3.2    0.5          3.40     (2.62-4.11)         4.15     (3.31-5.04)
[theta]              Beta            0.36   0.02         0.29     (0.26-0.32)         0.37     (0.34-0.40)
[kappa]              Uniform [0,1)   0.5    0.29         0.19     (0.07-0.32)         --       --
v                    Uniform [0,1)   0.5    0.29         --       --                  0.84     (0.66-1.00)
[[gamma].sub.i]      Uniform [0,1)   0.5    0.29         0.11     (0.00-0.22)         0.17     (0.04-0.29)
[[gamma].sub.[pi]]   Normal          1.5    0.25         1.97     (1.68-2.24)         1.60     (1.31-1.89)
[[gamma].sub.y]      Normal          0.13   0.13         0.14     (-0.07-0.34)        0.22     (0.02-0.42)
[[rho].sub.a]        Uniform [0,1)   0.5    0.29         0.48     (0.38-0.59)         0.47     (0.33-0.62)
[[rho].sub.g]        Uniform [0,1)   0.5    0.29         0.65     (0.58-0.74)         0.72     (0.64-0.80)
[[sigma].sub.a]      Inv. Gamma      0.01   Inf          0.008    (0.006-0.009)       0.009    (0.007-0.01)
[[sigma].sub.g]      Inv. Gamma      0.01   Inf          0.05     (0.04-0.06)         0.05     (0.04-0.06)
[[sigma].sub.i]      Inv. Gamma      0.01   Inf          0.028    (0.02-0.03)         0.026    (0.02-0.03)
[[sigma].sub.c]      Inv. Gamma      0.01   Inf          0.01     (0.008-0.01)        --       --
log([??])                                                940.06                       746.71

Table 6. Disinflation costs and monetary policy effort (basis points).

[beta]              0.95   0.96   0.97   0.98   0.99
Sacrifice ratio       75     77     80     83     86
Policy effort        134    138    144    149    155


Author: Andres Gonzalez G.; Franz Hamann

Publication: Revista Desarrollo y Sociedad

Article Type: Report

Date: Jan 1, 2011
