
Weather-adjusting economic data.

III.A. NIPA Weather Adjustment

Unfortunately, the SWA steps described in the previous section cannot be applied to NIPA data because there is no way for researchers to replicate the seasonal adjustment process in these data, let alone to add weather effects to it. (26)

As an alternative, we instead apply weather adjustments directly to seasonally adjusted NIPA aggregates. We consider the model

(3) \( y_t = \sum_{i=1}^{4} \alpha_i s_{it} + \sum_{j=1}^{4} \rho_j y_{t-j} + \sum_{i=1}^{4} \beta_i \left( w_{1t} s_{it} - w_{1,t-1} s_{i,t-1} \right) + \gamma \left( w_{2t} - w_{2,t-1} \right) + \varepsilon_t , \)

where y_t is the quarter-over-quarter growth rate of real GDP or some component thereof, s_1t, ..., s_4t are four quarterly dummies, (27) w_1t is the unusual temperature in quarter t (defined as the simple average of daily values in that quarter), and w_2t is the unusual snowfall in quarter t (using the RSI index). Equivalently, the weather terms load on the dummies d_it = s_it - s_{i,t-1} (i = 1, ..., 4), each of which takes on the value 1 in a particular quarter, -1 in the next quarter, and 0 otherwise: each weather shock adds to growth in the quarter in which it occurs and subtracts the same amount in the following quarter. The particular specification in equation 3 therefore has the property that no weather shock can ever have a permanent effect on the level of real GDP--any weather effect on growth has to be "paid back" eventually, although not necessarily in the subsequent quarter, given the lagged dependent variables. (28) Our sample period is 1990Q1-2015Q2, using September 2015 vintage data. Coefficient estimates are shown in table 7 for real GDP growth and selected components. For real GDP growth, unusual temperature is statistically significant in the first and second quarters.
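To make the mechanics concrete, the following sketch (not the authors' code; all parameter values are hypothetical) simulates quarterly data from a model with this payback structure and recovers the temperature coefficients by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400  # long simulated sample so the OLS estimates are tight

quarter = np.arange(T) % 4
s = np.eye(4)[quarter]                    # quarterly dummies s_1t..s_4t
w1 = rng.normal(0.0, 1.0, T)              # unusual temperature
w2 = rng.normal(0.0, 1.0, T)              # unusual snowfall (demeaned index)

# "Payback" regressors: a quarter-i temperature shock enters with +1 in
# quarter i and -1 (via the lagged dummy) in the following quarter.
w1_lag, w2_lag = np.r_[0.0, w1[:-1]], np.r_[0.0, w2[:-1]]
s_lag = np.vstack([np.zeros(4), s[:-1]])
temp = s * w1[:, None] - s_lag * w1_lag[:, None]
snow = (w2 - w2_lag)[:, None]

# Simulate y_t from the model, then re-estimate all 13 coefficients by OLS.
alpha = np.array([2.0, 2.5, 2.2, 2.1])    # quarterly intercepts
rho = np.array([0.2, 0.1, 0.05, 0.0])     # AR coefficients on four lags
beta = np.array([0.8, 0.5, 0.1, 0.0])     # temperature effects by quarter
gamma = -0.3                              # snowfall effect
y = np.zeros(T)
for t in range(T):
    ar = sum(rho[j] * y[t - 1 - j] for j in range(4) if t - 1 - j >= 0)
    y[t] = s[t] @ alpha + ar + temp[t] @ beta + gamma * snow[t, 0] \
        + rng.normal(0.0, 0.5)

ylags = np.column_stack([np.r_[np.zeros(j), y[:-j]] for j in range(1, 5)])
X = np.column_stack([s, ylags, temp, snow])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_hat = coef[8:12]                     # estimated temperature coefficients
print(np.round(beta_hat, 1))
```

With a sample this long the estimated temperature coefficients land close to the values used in the simulation; the actual estimation uses roughly 100 quarters of NIPA data, so its standard errors are considerably wider.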

We think that the assumption that no weather shock can have a permanent effect on the level of GDP is an important and reasonable restriction to impose. Nevertheless, we tested this restriction. We ran a regression of y_t on four quarterly dummies, four lags of y_t, unusual temperature interacted with quarterly dummies, lags of unusual temperature interacted with quarterly dummies, unusual snowfall, and lagged unusual snowfall. In this specification, there were 18 free parameters--equation 3 is a special case of this, imposing five constraints that can be tested by a likelihood ratio test. The restriction is not rejected at the 5-percent level for GDP growth or any of the components, except government spending, where the p-value is 0.04.
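The likelihood ratio comparison can be sketched as follows, assuming Gaussian errors. The sum-of-squared-residuals figures below are made up for illustration, and 11.07 is the 5-percent critical value of a chi-squared distribution with 5 degrees of freedom:

```python
import numpy as np

def lr_test(ssr_restricted, ssr_unrestricted, nobs, crit_5pct=11.07):
    """Gaussian likelihood-ratio statistic for nested regressions:
    LR = nobs * log(SSR_r / SSR_u), compared with the 5-percent critical
    value of a chi-squared distribution with 5 degrees of freedom."""
    stat = nobs * np.log(ssr_restricted / ssr_unrestricted)
    return stat, bool(stat > crit_5pct)

# Hypothetical SSRs from the restricted (eq. 3) and unrestricted regressions.
stat, reject = lr_test(10.5, 10.0, nobs=102)
print(round(stat, 2), reject)  # → 4.98 False
```

A restricted fit only modestly worse than the unrestricted one, as here, leaves the payback restriction unrejected.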

Having estimated equation 3, we then compute the dynamic weather effect by comparing the original series to a counterfactual series in which all unusual weather indicators are set to zero (w_1t = w_2t = 0) but the residuals are unchanged. The difference between the original and counterfactual series is our estimate of the weather effect.
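The counterfactual calculation amounts to re-running the fitted recursion with the weather terms zeroed out while keeping the same residuals. A stylized AR(1) sketch (hypothetical coefficients, not the estimated ones) shows that a growth shock that is paid back the next quarter has no permanent effect on the level:

```python
import numpy as np

def simulate_ar(intercept, rho, shocks):
    """Iterate y_t = intercept + sum_j rho_j * y_{t-j} + shocks_t
    from zero initial conditions."""
    y = np.zeros(len(shocks))
    for t in range(len(shocks)):
        y[t] = intercept + sum(r * y[t - 1 - j]
                               for j, r in enumerate(rho) if t - 1 - j >= 0) \
            + shocks[t]
    return y

# Weather effect: actual path minus a counterfactual that zeroes the
# weather terms but keeps the same residuals (both folded into `shocks`).
T = 60
resid = np.zeros(T)
weather = np.zeros(T)
weather[5], weather[6] = 1.0, -1.0   # a shock that is paid back next quarter
effect = simulate_ar(2.0, [0.5], resid + weather) - simulate_ar(2.0, [0.5], resid)
print(round(float(effect[5]), 2), round(float(effect.sum()), 6))  # → 1.0 0.0
```

The lagged dependent variable spreads the payback over several quarters, but the cumulative effect on growth--and hence the effect on the level--sums to zero.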

Table 8 shows the quarter-over-quarter growth rates of real GDP and components in 2015Q1 and 2015Q2 both in the data as reported and after our proposed weather adjustment. Weather adjustment raises the estimate of growth in the first quarter from 0.6 percentage point at an annualized rate to 1.4 percentage points. However, the estimate of growth in the second quarter is lowered from 3.7 to 2.8 percentage points. Weather adjustment makes the acceleration from the first quarter to the second quarter less marked.

III.B. Residual Seasonality

Our paper is about the effects of weather on economic data, not seasonal adjustment. But an unusual pattern has prevailed for some time in which first-quarter real GDP growth is generally lower than growth later in the year, raising the possibility of "residual seasonality": the reported data from the Bureau of Economic Analysis (BEA) may not adequately correct for regular calendar-based patterns. This is a factor, separate from weather, that might have lowered reported growth in 2015Q1. Glenn Rudebusch, Daniel Wilson, and Tim Mahedy (2015) apply the X-12 seasonal filter to reported seasonally adjusted aggregate real GDP and find that their "double adjustment" of GDP makes a substantial difference. (29)

The BEA has subsequently revisited its seasonal adjustment and made changes in the July 2015 annual revision. The changes might have mitigated residual seasonality, but it is important to note that the BEA has not published a complete historical revision to GDP and its components, instead only reporting improved seasonally adjusted data starting in 2012. We did an exercise in the spirit of Rudebusch, Wilson, and Mahedy (2015) by taking our weather-adjusted aggregate real GDP (and components) data and putting them through the X-13 filter. This double seasonal adjustment is admittedly an ad hoc procedure, especially given that the BEA uses a different seasonal adjustment method for data after 2012 than for data before 2012; consequently, we treat our procedure's results with particular caution. Nonetheless, the resulting growth rates in the first two quarters of 2015 are also shown in table 8. After these two adjustments, growth was quite strong in the first quarter, but weaker in the second quarter, which is the opposite of the picture one obtains using published data. It is interesting to note that the "double seasonal adjustment" has an especially large effect on investment and exports, suggesting that these are two areas in which seasonal adjustment procedures might benefit from further investigation.
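A much-simplified diagnostic in the spirit of this exercise (the actual procedure reruns the X-13 filter, which is far more elaborate) is to compare average growth by calendar quarter in an already seasonally adjusted series: a persistent first-quarter shortfall is symptomatic of residual seasonality. The series below is invented:

```python
import numpy as np

def quarter_means(growth):
    """Average growth by calendar quarter of a seasonally adjusted series,
    assuming the series starts in Q1; a persistent Q1 shortfall is
    symptomatic of residual seasonality."""
    growth = np.asarray(growth, float)
    q = np.arange(len(growth)) % 4
    return np.array([growth[q == i].mean() for i in range(4)])

# Toy series: Q1 growth runs 1 point below the other quarters every year.
g = np.tile([1.0, 2.0, 2.0, 2.0], 10)
print(quarter_means(g))  # → [1. 2. 2. 2.]
```

In properly adjusted data the four quarterly means should differ only by sampling noise; a stable gap like the one above is what the double-adjustment exercise is designed to detect and remove.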

IV. Conclusion

Seasonal effects in macroeconomic data are enormous. These seasonal effects reflect, among other things, the consequences of regular variation in weather over the year. However, the seasonal adjustments that are applied to economic data are not intended to address deviations of weather from seasonal norms. Yet these weather deviations have material effects on macroeconomic data. Recognizing this fact, this paper has operationalized an approach for simultaneously controlling for both normal seasonal patterns and unusual weather effects. Our main focus has been on monthly employment data in the CES, or the "establishment survey." The effects of unusual weather can be very important, especially in the construction sector and in the winter and early spring months. Monthly payroll changes are somewhat more persistent for seasonally-and-weather-adjusted data than for ordinary seasonally adjusted data, suggesting that the former give a better measure of the underlying momentum of the economy.

The physical weather indicators considered in this paper are all available on an almost real-time basis--the reporting lag is inconsequential. The National Centers for Environmental Information make daily summaries for 1,600 stations available with a lag of less than 48 hours. In addition, the regional snowfall impact indexes that we use are typically computed and reported within a few days after a snowstorm ends. One weather indicator that we considered is the number of absences from work due to weather. This has a somewhat longer publication lag, but by construction is still available at the time of the employment report.

It would be good if weather adjustments of this sort could be implemented by statistical agencies as part of their regular data reporting process. Because they have access to the underlying source data, they have more flexibility in doing so than the general public--for example, some of the 150 disaggregates in the CES are not available until the first revision. Statistical agencies want data construction to use transparent methods that avoid ad hoc judgmental interventions, and that can be done for weather adjustment. U.S. statistical agencies nevertheless face severe resource constraints, and weather adjustment might well have an insufficiently high priority. In that case, weather adjustment could be implemented by end users of the data. We do not think weather-adjusted economic data should ever replace the underlying existing data, but as this paper demonstrates, weather adjustment can be a useful supplement to measure underlying economic momentum.

ACKNOWLEDGMENTS We are grateful to Katharine Abraham, Roc Armenter, Bob Barbera, Mary Bowler, Francois Gourio, Claudia Sahm, Tom Stark, and the editors for helpful discussions, and to Natsuki Arai for outstanding research assistance. All errors are our sole responsibility. The views expressed here are those of the authors and do not necessarily represent those of the Federal Reserve Bank of Philadelphia or the Federal Reserve System.

References

Andreou, Elena, Eric Ghysels, and Andros Kourtellos. 2010. "Regression Models with Mixed Sampling Frequencies." Journal of Econometrics 158, no. 2: 246-61.

Bernanke, Ben S. 2012. "Economic Outlook and Policy." Testimony before the Joint Economic Committee, U.S. Congress, June 7. http://www.federalreserve.gov/newsevents/testimony/bernanke20120607a.htm

Blake, Eric S., Christopher W. Landsea, and Ethan J. Gibney. 2011. "The Deadliest, Costliest, and Most Intense United States Tropical Cyclones from 1851 to 2010 (and Other Frequently Requested Hurricane Facts)." NOAA Technical Memorandum NWS NHC-6. Miami: National Weather Service, National Hurricane Center.

Bloesch, Justin, and Francois Gourio. 2014. "The Effect of Winter Weather on U.S. Economic Activity." Economic Perspectives 39, no. 1: 1-20.

Dell, Melissa, Benjamin F. Jones, and Benjamin A. Olken. 2012. "Temperature Shocks and Economic Growth: Evidence from the Last Half Century." American Economic Journal: Macroeconomics 4, no. 3: 66-95.

Faust, Jon, and Jonathan H. Wright. 2013. "Forecasting Inflation." In Handbook of Economic Forecasting 2A, edited by Graham Elliott and Allan Timmermann. Amsterdam: North-Holland.

Fisher, R. A. 1925. "The Influence of Rainfall on the Yield of Wheat at Rothamsted." Philosophical Transactions of the Royal Society of London (Series B) 213: 89-142.

Foote, Christopher L. 2015. "Did Abnormal Weather Affect U.S. Employment Growth in Early 2015?" Current Policy Perspectives no. 15-2. Federal Reserve Bank of Boston.

Ghysels, Eric, Pedro Santa-Clara, and Rossen Valkanov. 2004. "The MIDAS Touch: Mixed Data Sampling Regression Models." Working Paper. http://docentes.fe.unl.pt/~psc/MIDAS.pdf

--. 2005. "There Is a Risk-Return Trade-Off after All." Journal of Financial Economics 76, no. 3: 509-48.

Gilbert, Charles E., Norman J. Morin, Andrew D. Paciorek, and Claudia R. Sahm. 2015. "Residual Seasonality in GDP." FEDS Notes. Washington: Board of Governors of the Federal Reserve System.

Kocin, Paul J., and Louis W. Uccellini. 2004. "A Snowfall Impact Scale Derived from Northeast Snowfall Distributions." Bulletin of the American Meteorological Society 85, no. 2: 177-94.

Ladiray, Dominique, and Benoit Quenneville. 2001. Seasonal Adjustment with the X-11 Method. New York: Springer.

Macroeconomic Advisers. 2014. "Elevated Snowfall Reduced Q1 GDP Growth 1.4 Percentage Points." Blog post. http://www.macroadvisers.com/2014/04/elevated-snowfall-reduced-q1-gdp-growth-1-4-percentage-points/

Manski, Charles F. 2015. "Communicating Uncertainty in Official Economic Statistics: An Appraisal Fifty Years after Morgenstern." Journal of Economic Literature 53, no. 3: 1-23.

Miguel, Edward, Shanker Satyanath, and Ernest Sergenti. 2004. "Economic Shocks and Civil Conflict: An Instrumental Variables Approach." Journal of Political Economy 112, no. 4: 725-53.

Rudebusch, Glenn D., Daniel Wilson, and Tim Mahedy. 2015. "The Puzzle of Weak First-Quarter GDP Growth." Economic Letter no. 2015-16. Federal Reserve Bank of San Francisco.

Squires, Michael F., Jay H. Lawrimore, Richard R. Heim Jr., David A. Robinson, Mathieu R. Gerbush, and Thomas W. Estilow. 2014. "The Regional Snowfall Index." Bulletin of the American Meteorological Society 95, no. 12: 1835-48.

World Meteorological Organization. 2011. "Guide to Climatological Practices." WMO-No. 100. Geneva.

Wright, Jonathan H. 2013. "Unseasonal Seasonals?" Brookings Papers on Economic Activity, Fall: 65-110.

Yellen, Janet L. 2014. "The Economic Outlook." Testimony before the Joint Economic Committee, U.S. Congress, May 7. http://www.federalreserve.gov/newsevents/testimony/yellen20140507a.htm

--. 2015. "Semiannual Monetary Policy Report to the Congress." Testimony before the Committee on Financial Services, U.S. House of Representatives, July 15. http://www.federalreserve.gov/newsevents/testimony/yellen20150715a.htm

Comments and Discussion

COMMENT BY

KATHARINE ABRAHAM I take away two main conclusions from this very useful paper. First, the authors have convinced me that, at least on occasion, unusual weather can cause real problems for interpreting the monthly payroll employment estimates produced by the Bureau of Labor Statistics (BLS). Second, I am also convinced that it is possible to use data on temperature, snowfall, and so on to identify the systematic effects of unusual weather on the payroll employment series and, if desired, to remove those effects from the data. My comments mainly address whether and how the approach the authors have developed might best be applied in the production of official employment statistics. Although the paper focuses primarily on the payroll employment data, as do my comments, similar issues could be raised regarding other economic time series, and I look forward to future work that explores the effects of weather on economic measurement more broadly.

The payroll employment estimates on which most data users rely are adjusted to remove the effects of normal seasonal variation in the weather along with the effects of other predictable seasonal influences. These adjustments are not intended to account for the effects of weather that is better or worse than usual for the time of year. As the paper demonstrates, the direct effects of unusual weather on employment in the affected month can be relatively large. In addition to its direct effects, unusual weather also can cause distortions in the seasonal factors used to adjust employment estimates in other months. For example, an unusually large snowstorm that depresses employment one February might lead to a lowered expectation for employment levels in the next several Februaries. If the weather were more normal the following February, employment could look stronger than it really was. The approach described in the paper removes both the direct and the indirect effect of unusual weather from the monthly employment estimates. It would be possible, however, to use these same methods to remove the influence that unusual weather can have on seasonal adjustment factors without removing the direct effects of unusual weather on employment in the month in which it occurs. I will come back to this point.
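The distortion of seasonal factors can be illustrated with a deliberately crude stand-in: a trailing three-year average of February employment changes (actual seasonal filters are far more elaborate). One blizzard-depressed February lowers the expectation against which the next February is judged:

```python
import numpy as np

def seasonal_expectation(feb_changes):
    """Crude stand-in for a seasonal factor: the average February employment
    change over the previous three years (real filters such as X-13 weight
    many more years and handle trend and outliers)."""
    return float(np.mean(feb_changes[-3:]))

history = [100.0, 110.0, 105.0]   # typical February raw changes (thousands)
blizzard_feb = 40.0               # a storm depresses one February

print(seasonal_expectation(history))                       # → 105.0
print(seasonal_expectation(history[1:] + [blizzard_feb]))  # → 85.0
```

Against the lowered expectation of 85, a subsequent normal February gain of about 105 would look roughly 20 "too strong," which is the indirect effect described above.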

CALENDAR EFFECTS AND WEATHER EFFECTS In reading the paper, I was struck by the parallels between the weather effects that are its subject and the calendar effects that plagued the interpretation of payroll employment data in years past. The calendar effect with the largest effects on the payroll employment series is the so-called 4-week/5-week effect. Depending on the year, there may be either a 4-week interval or a 5-week interval between the weeks in adjacent months that include the 12th of the month and are used to determine the payroll period for which employers are asked to report. The length of this interval can have an important effect on measured employment growth. In construction, to take an example of an industry where the 4-week/5-week effect can be especially important, employment tends to rise through the spring as the weather improves, meaning that the raw growth in employment from March to April is generally larger when the interval between payroll reference periods is longer. Before this was accounted for in estimation, the growth in seasonally adjusted construction employment in a year with 4 weeks between the March and April reference periods that followed years with a 5-week interval tended to look weaker than it actually was, since the seasonal expectation for the March-to-April change was heavily influenced by the larger cumulative upswing associated with a 5-week interval. Conversely, the growth in seasonally adjusted construction employment in a year with 5 weeks between the March and April reference periods could look stronger than it actually was, especially if that year followed years with a 4-week interval (Cano and others 1996).
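For concreteness, the interval can be computed directly, assuming a Sunday-to-Saturday reference week containing the 12th (a simplification of the actual pay-period definitions):

```python
import datetime as dt

def reference_week_interval(year, month):
    """Number of weeks between the payroll reference weeks (here taken to be
    the Sunday-to-Saturday weeks containing the 12th) of this month and the
    next; returns 4 or 5."""
    def week_start(y, m):
        d = dt.date(y, m, 12)
        # Back up to the Sunday that starts the week containing the 12th.
        return d - dt.timedelta(days=(d.weekday() + 1) % 7)
    y2, m2 = (year + 1, 1) if month == 12 else (year, month + 1)
    return (week_start(y2, m2) - week_start(year, month)).days // 7

print(reference_week_interval(1995, 3))  # → 4
```

Consistent with the episode described here, this calculation gives 5 weeks between the March and April reference weeks in 1992 and 1994 but only 4 weeks in 1995.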

Through the mid-1990s, discussion of the monthly employment numbers frequently included statements that were strikingly similar in tone and content to statements about the effects of weather on the numbers quoted by Boldin and Wright. "The Employment Situation: April 1995," for example, includes the following statement:

The lack of job growth between March and April may have reflected an unusual set of circumstances.... The seasonal buildup in services, retail trade, and construction from March to April had been relatively large in the previous 3 years (1992-94), partly because in each case there were 5 weeks between the two collections. As a result, this year's seasonal "expectation" (which is based primarily on the prior 3 years) was relatively large. With only 4 weeks separating the surveys, however, the time period for which hiring could take place was reduced. All of this likely made employment in April appear weaker than it actually was. (BLS 1995)

The likelihood that having a 4-week rather than a 5-week interval between March and April had affected the data was noted in news stories at the time (for example, see Georges 1995). Payroll survey estimation procedures that removed the so-called 4-week/5-week effect from the seasonally adjusted data were introduced for most industries in 1996 and for construction in 1997.

Different calendar effects have the potential to confound the interpretation of other economic time series. It has long been recognized that flow series such as those for production, shipments, and sales may be affected by the number of working or trading days in the month or by the timing of holidays (Young 1965; Findley and others 1998). In the monthly payroll survey, hours of work tend to be lower than would otherwise be the case when there are fewer workdays during the month or when Good Friday or Labor Day falls during the survey reference period (BLS 2015). Over time, the federal statistical agencies have developed procedures to remove these sorts of calendar effects from published seasonally adjusted estimates.

The present paper proposes that procedures similar to those used to remove calendar effects could be used to remove the effects of unusual weather from published economic data series. Whether this would be a good idea depends on what purpose the adjustments statistical agencies make to economic data series should serve. One worthy goal of such adjustments is to produce series that do a better job of capturing underlying trends. A second and somewhat different goal is to produce series that are easier for statistically unsophisticated data users to understand.

With respect to the removal of calendar effects from published seasonally adjusted data, these two goals seem to me to be largely in alignment. That is, analysts are likely to prefer series from which calendar effects have been removed, and I would guess that the typical person on the street also would understand that one does not want, for example, to say employment is growing faster or slower just because the normal seasonal upswing in employment has been measured over a longer or shorter interval.

With respect to the removal of weather effects from published seasonally adjusted data, however, the goal of producing a series that better captures an underlying trend may lead to a different conclusion than the goal of producing a series that is easier for statistically unsophisticated data users to understand. Imagine a situation in which a large blizzard had shut down economic activity across much of the country for an extended period of time. Analysts might find an employment estimate from which the effects of that blizzard have been removed to be more useful as an indicator of underlying trends. It is difficult, however, for me to imagine the commissioner of labor statistics reporting such an estimate to the public as the official measure of what had happened to employment during the month. A number that represented what would have happened to employment if there had been no blizzard undoubtedly would be of analytical interest, but it would lack face validity as a representation of reality. For that reason, although I would value changes to its procedures that allowed the BLS to remove the distortions to seasonal factors potentially associated with unusual weather and also to better quantify the direct effects of weather on published employment estimates, I would be uncomfortable with incorporating weather adjustments of the sort described in the paper into the featured payroll employment figures.

HOW THE BLS HANDLES WEATHER ADJUSTMENT As background for thinking about how the BLS might apply the methods developed by Boldin and Wright to improve monthly payroll employment estimates, it may be useful to say a little bit about how unusual weather is handled by current BLS seasonal adjustment procedures. Seasonal adjustment of the payroll employment data is implemented by producing seasonally adjusted estimates for detailed estimation cells and then summing the resulting numbers to create seasonally adjusted employment estimates for more aggregated industries and for the nonfarm business sector as a whole. As already mentioned, current BLS procedures are not designed to account directly for the effects of unusual weather, but an estimate for a particular estimation cell that is deemed to be an outlier--as might be the case if unusually good or unusually bad weather had an especially large effect on the number for the estimation cell--may be excluded for the purpose of calculating seasonal factors. Outside the construction industry, however, this rarely happens.

Special procedures to address the effects of unusual weather on construction employment have been in place since 1997 (Kropf 1996; Getz 1997). One year earlier, in 1996, new procedures to address the 4-week/5-week calendar effect in the payroll employment data had been introduced. Because the effects of weather on construction employment are so large, however, usable 4-week/5-week adjustment factors could not be estimated for construction without taking weather effects into account, and the implementation of the new 4-week/5-week procedures in construction had to be delayed. This made it a priority to develop some method for addressing the effects of weather on construction employment.

Within construction, payroll employment estimation, including seasonal adjustment, is carried out at the most detailed industry level for which data are available--either the 5-digit or 6-digit North American Industry Classification System level--and, where possible, separately for each of four regions. Estimates for the relevant detailed industry cells or the detailed industry by region cells are then summed to produce national estimates for published industries. In construction, the bounds used to determine whether a monthly estimate is an outlier are also set tighter than in other industries, so that weather-affected estimates are more likely to be classified as outliers. Analysts verify apparent outliers in the construction employment estimates as weather-related by checking against information from the National Weather Service and then, if appropriate, they remove the outliers from the data series used to calculate seasonal factors.
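As a stylized illustration of outlier screening (the BLS's actual bounds come from its seasonal adjustment procedures, not this rule), an interquartile-range filter with an adjustable tightness parameter might look like:

```python
import numpy as np

def flag_outliers(same_month_changes, k=1.5):
    """Flag a month's employment change as an outlier when it lies more than
    k interquartile ranges beyond the quartiles of its own history; a
    construction-style procedure would use a tighter (smaller) k."""
    x = np.asarray(same_month_changes, float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

hist = [10, 12, 11, 9, 10, -40, 11]   # a blizzard month stands out
print(flag_outliers(hist).tolist())
# → [False, False, False, False, False, True, False]
```

Flagged values would then be verified against weather records before being excluded from the seasonal factor calculation, as described above.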

As a historical footnote regarding the approach the BLS has adopted to deal with the effects of unusual weather on construction employment, I have been told that when developing its special procedures for construction, the BLS asked the National Weather Service for data on average temperature to use in estimating the effects of weather but was turned down. As I understand it, the National Weather Service explained that weather conditions can vary considerably across different parts of the country and information on average temperature would be meaningless. Boldin and Wright make a good case that measures of average weather could in fact have been very useful! That said, recognizing that there is variation in weather conditions across different parts of the country could allow the BLS to improve on Boldin and Wright's suggested method of accounting for weather effects.

THE CHALLENGE OF GEOGRAPHIC VARIATION Consider the effects of temperature on employment. The measure employed in the analysis reported in the paper is a measure of the average across weather stations of the deviation of temperature from its normal level at that weather station in a given month. In many months, however, conditions may be unusually hot in some areas but unusually cold in others. As an illustration, my figure 1, a chart prepared by the High Plains Regional Climate Center and disseminated by the National Weather Service, displays the deviations of the average temperatures from their historical mean levels in different areas for March 2015. Temperatures were considerably below average that month in the Northeast but considerably above average in the Southeast and West. Similar variation may be observed in the monthly data for snowfall, precipitation, and so on.

[FIGURE 1 OMITTED]

This sort of variation would not matter if the effects of deviations from normal weather conditions were both linear and of the same magnitude in all locations. This is unlikely to be the case. The effect of being above or below average with respect to temperature, snow, or other weather conditions in a month can vary substantially by region. Weather that was 10 degrees warmer than usual during February, for example, could have a significant effect on employment in Boston but no effect on employment in Phoenix. This implies that a warmer-than-usual February might or might not be associated with higher-than-average employment, depending on where the warmer-than-usual weather occurred. Similarly, an extra six inches of snow might have no effect on employment if it falls in Minneapolis, but a disastrous impact on employment if it falls in Atlanta. Again, in a month in which average snowfall was greater than expected, it would matter where the extra snow had fallen. The fact that Boldin and Wright obtain better model fits with their preferred snow variable--constructed as the weighted average of regional measures of the societal effects of different storms rather than average snowfall--is consistent with the idea that deviations of weather from its norm may have different effects in different regions. Thus, important information is lost by relying on national average weather measurements to make the weather adjustments.
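A toy calculation shows why regional detail matters: with region-specific sensitivities, a month in which the national average temperature deviation is mild can still carry a sizable employment effect. All numbers below are hypothetical:

```python
import numpy as np

def national_weather_effect(deviations, sensitivities, emp_weights):
    """Aggregate regional weather effects: each region's temperature
    deviation times that region's sensitivity, combined with employment
    weights. Contrast with applying one national sensitivity to the
    average deviation."""
    regional = np.asarray(sensitivities, float) * np.asarray(deviations, float)
    return float(np.dot(emp_weights, regional))

# March 2015-style pattern: cold Northeast, warm Southeast and West
# (signs only; magnitudes and sensitivities are invented).
dev  = np.array([-6.0, 4.0, 5.0, -1.0])   # NE, SE, West, Midwest (deg F)
sens = np.array([0.05, 0.00, 0.01, 0.04]) # employment effect per degree
wts  = np.array([0.25, 0.25, 0.30, 0.20]) # employment shares

print(round(national_weather_effect(dev, sens, wts), 3))  # → -0.068
```

The national average deviation here is only +0.5 degree, so a single national sensitivity applied to that average would miss that the cold hit the most temperature-sensitive region, whereas the regional calculation yields a clearly negative effect.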

As explained in the paper, it would not have been possible for Boldin and Wright to implement a geographically disaggregated weather adjustment using published BLS data. The published national employment series refers to the country as a whole, and the state-level employment estimates that the BLS also publishes do not sum to the national estimates. Internally, however, the BLS already makes use of regionally disaggregated estimation cells for construction employment, and conceivably it could do the same for other weather-sensitive industries. This means that, at least in construction, the BLS already has a natural platform in place for incorporating regional weather information into its estimation procedures. My guess is that weather adjustments based on regional weather data might be at least somewhat larger in size than those reported by Boldin and Wright, though this is of course an empirical question.

THE CHALLENGE OF PAYROLL VARIATION AMONG INDUSTRIES There is one other respect in which the methods outlined by Boldin and Wright might be improved upon. As explained in the paper, the weather variables used for adjusting the employment data are created by weighting weather measurements for the 30 days prior to the 12th of the month, with the coefficients of the parametric function used to define the relative weights accorded to different days selected to maximize the fit with national employment data. The important point is that these relative weights are restricted to be the same across all industries. It seems plausible, however, that the relative importance of weather on different days prior to the 12th could vary across industries. It might matter, for example, whether work in the industry is done inside or outside, whether employees in the industry are able to work remotely, and whether and how weather affects the demand for the industry's products or services. I suspect that improving the weights accorded to weather on the different days in the month before the 12th is a second-order issue, but it might nonetheless be worth investigating.
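The paper's specific parametric weighting function is not reproduced here, but the exponential Almon polynomial commonly used in MIDAS regressions (Ghysels, Santa-Clara, and Valkanov 2004) illustrates the idea: two parameters pin down a full set of daily weights, and estimating industry-specific parameters would be a natural extension:

```python
import numpy as np

def almon_weights(theta1, theta2, n_days=30):
    """Exponential Almon lag weights over the n_days before the 12th:
    w_j proportional to exp(theta1*j + theta2*j^2), normalized to sum
    to one. Two parameters govern the whole daily profile."""
    j = np.arange(1, n_days + 1)
    raw = np.exp(theta1 * j + theta2 * j ** 2)
    return raw / raw.sum()

w = almon_weights(0.1, -0.01)  # hypothetical parameters; peak at day 5
print(len(w), round(w.sum(), 6), int(w.argmax()) + 1)  # → 30 1.0 5
```

Allowing theta1 and theta2 to differ by industry would let, say, outdoor-work industries weight recent days differently from industries where employees can work remotely, at the cost of only two extra parameters per industry.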

CONCLUSION Supposing that the BLS were to decide to adopt the methods developed by Boldin and Wright--something that I think is very much worth considering--there is still the question of exactly how they would be used. One obvious application would be to use Boldin and Wright's methods in developing seasonal adjustment factors for the official payroll employment statistics that are not contaminated by the effects of unusual weather. I also would like the BLS to report the estimated magnitude of the effects of weather on each month's employment and perhaps even to prepare research or supplemental series from which weather effects have been removed. From my perspective, however, to the extent that weather affects the level of employment in a particular month, that should be reflected in the official payroll employment numbers.

REFERENCES FOR THE ABRAHAM COMMENT

BLS (Bureau of Labor Statistics). 1995. "The Employment Situation: April 1995." News Release. Washington. http://www.bls.gov/news.release/history/empsit_050595.txt

--. 2015. "Technical Notes for the Current Employment Statistics Survey." Washington. http://www.bls.gov/web/empsit/cestn.pdf

Cano, Stephanie, Patricia Getz, Jurgen Kropf, Stuart Scott, and George Stamas. 1996. "Adjusting for a Calendar Effect in Employment Time Series." In Proceedings of the Survey Research Methods Section. Alexandria: American Statistical Association.

Findley, David F., Brian C. Monsell, William R. Bell, Mark C. Otto, and Bor-Chung Chen. 1998. "New Capabilities and Methods of the X-12-ARIMA Seasonal Adjustment Program." Journal of Business and Economic Statistics 16, no. 2: 127-52.

Georges, Christopher. 1995. "Economists Say Chances of a Recession Remain Slim, in Spite of Recent Data." Wall Street Journal, May 8.

Getz, Patricia. 1997. "Improved Seasonal Adjustment for Construction." Unpublished memorandum for George Werking, April 15. Washington: U.S. Bureau of Labor Statistics.

Kropf, Jurgen. 1996. "4/5 Adjustment for Construction, SIC 2015, 2016 and 2017." Unpublished memorandum for Pat Getz, August 21. Washington: U.S. Bureau of Labor Statistics.

Young, Allan. 1965. "Estimating Trading-Day Variation in Monthly Economic Time Series." Technical Paper no. 12. Washington: U.S. Bureau of the Census.

COMMENT BY

CLAUDIA SAHM (1) Michael Boldin and Jonathan Wright introduce a new method for estimating the impact of weather on key economic data series, like monthly payroll employment. Their aim is to provide a clearer view of business-cycle fluctuations by removing weather effects. The authors extend a widely used seasonal adjustment algorithm, which already isolates calendar effects and, with them, the "usual" variation of weather over the year. They add a first-stage estimation to the algorithm with a direct measure of weather, so their extended algorithm isolates the impact of both usual and unusual weather.

Trying to estimate the impact of unusual weather events on economic data has a long history among macroeconomic forecasters, so the contribution of this paper is a technical improvement: examining a large set of weather measures, using disaggregated industry data, and working within the existing seasonal adjustment framework. While there is more work to be done, this analysis could serve as the basis for systematic weather adjustment in official statistics.

This new seasonal-and-weather adjustment algorithm would be particularly useful to individuals who need to interpret economic conditions in real time. The difference between slow demand due to severe weather and slow demand due to an incipient recession is crucial to many economic decisionmakers, including central bank officials setting interest rate policy and business managers weighing new investments. In fact, the importance of isolating weather effects is borne out by the cottage industry of macroeconomic forecasters who have provided such estimates for years.

Nonetheless, the winter of 2014 provides a good example of how this paper can add value. During that period, the country experienced one set of weather conditions and one realization of economic activity (though the latter did revise over time), and yet there was a wide range of professional estimates on the output effect from the severe weather. Macroeconomic Advisers (2014) at the time estimated that "elevated snowfall ... reduced first-quarter GDP growth by 1.4 percentage points," while an analysis from Goldman Sachs maintained that "weather [would] cause first-quarter GDP to be 0.5 [percentage] point worse than it otherwise would have been" (Goldstein 2014). Federal Reserve staff characterized the weak GDP data this way: "Unusually severe winter weather could account for some, but not all, of the recent unanticipated weakness" (FOMC 2014).

And while few decisions hinge on the exact estimates of weather effects, the extent to which a shift in economic activity can be explained by weather is important. That is because the weather events considered by the authors, such as a severe winter storm, are viewed as a temporary shock and something that most economic decisionmakers should see through. A snowstorm may keep a consumer from buying a car at the end of January, but presumably when the weather clears, she will still buy the car. That kind of short-term delay--shifting output from one month to the next--should not concern policymakers, though a drop in car purchases due to diminished job prospects would. In real time, when one does not yet know the next month's or next quarter's data, the source for a drop in spending can be difficult to determine.

I applaud the authors' efforts to bring more technical discipline to estimating weather effects, yet I have three concerns with the paper. First, I think there needs to be more discussion of the relative importance of unusual weather and the danger of elevating this transitory shock simply because it is something visceral. Business-cycle fluctuations will always be somewhat obscured by noise in the data. Second, I think the authors need to do more to develop the diagnostics of the algorithm. There needs to be clearer guidance on when to use their seasonal-and-weather adjustment, similar to the guidance from statistical agencies on when to use the standard seasonal adjustment. And third, in making inferences about weather's impact, one needs to explore how that impact may depend on business-cycle conditions. I am concerned that this research brings us from removing usual winter weather in usual business-cycle conditions to removing both unusual and usual weather in usual business-cycle conditions. This is a step forward, but it may not fully capture how much a particular month's or quarter's data are affected by a weather event. Before we begin filtering all our economic data with this new algorithm, we need to think more about the counterfactual--what the world would have looked like without the weather event--and the variation we would be removing from our economic analysis.

High-frequency economic data can be quite noisy. For example, the 90-percent confidence interval on the monthly change in total nonfarm payroll employment is plus or minus 115,000. (2) Many of the weather effects that the authors highlight, such as the 64,000 reduction in payroll employment in February 2014, are well within the confidence intervals that reflect sampling and nonsampling error. Still, unusual weather occurs often enough--as comments in the March FOMC minutes of recent years attest (see the first column of my table 1)--that we need a reliable method for isolating its effects. Nonetheless, seasonally-and-weather-adjusted data should not give us a false sense of clarity. The second column of my table 1 shows other, non-weather events mentioned in the same FOMC minutes as also obscuring underlying economic conditions. The regularity of some shocks early in the year, such as discretionary changes in fiscal policy, also cautions against writing off all the first-quarter weakness of recent years as an inability to remove calendar or weather effects.

The authors weather-adjust all the data series regardless of how well the weather model fits an industry series, but this decision is at odds with the standard use of seasonal adjustment in official statistics. For example, consider this statement from the U.S. Census Bureau:

The Census Bureau performs seasonal adjustment of a time series of estimates only given clear evidence of seasonal behavior and only when the adjustment passes a suitable set of diagnostic tests. (McDonald-Johnson and others 2010)

Charles Gilbert and others (2015) provide an example of using such diagnostic tests to examine residual seasonality in output data. Stability of the adjustment factors is a guiding principle for the decision as to when it is appropriate to seasonally adjust a data series. One might view the diagnostic tests for seasonal adjustment in official series as too stringent, but there needs to be further analysis of how stable the seasonal-and-weather adjustments in the Boldin and Wright paper are. The stability of the weather impact estimates may be improved by focusing on series that show a clear weather impact, such as construction employment.

Finally, it is important to take a step back and think about the variation being removed with the seasonal-and-weather adjustment. Consumers, employers, and even policymakers experience the economy with all its seasonal and weather-related variation, so using adjusted data misses the opportunity to study that variation. Robert Barsky and Jeffrey Miron (1989) argue, for example, that seasonal variation could be used to test macroeconomic models, yet macroeconomic studies with not seasonally adjusted data are exceedingly rare. Of related concern, the weather impact may not be neatly separable from underlying economic conditions. Alan Auerbach and Yuriy Gorodnichenko (2012) argue that the impact of discretionary government spending on output (the fiscal multiplier) is larger in recessions than in expansions. Likewise--and a point acknowledged but not explored by the authors--a severe snowstorm may have a different impact on activity during a recession than during an expansion. This would complicate the full removal of weather effects, and the removal, even partially, may sacrifice some information on underlying economic conditions.

As a simple example of how weather might interact with the business cycle conditions, I estimated a standard model of monthly retail sales growth (RS Growth), which includes heating degree days (HDD), consumer sentiment (Sent), and an interaction between them. I chose sentiment as a business cycle indicator because, unlike some employment series, it does not vary with the weather measure in the regression. The results of the estimation are represented below, with standard errors in parentheses. (3)

RS Growth(t) = 0.28 - 0.25 HDD(t) + 0.14 HDD(t-1) + 0.10 Sent(t) - 0.09 HDD(t) x Sent(t)
              (0.045)  (0.047)       (0.047)         (0.042)        (0.049)

Unusually cold weather, which registers as a positive heating-degree-day reading, depresses retail spending growth in the current month and boosts it in the subsequent month, highlighting the transitory nature of weather shocks. This is a well-known feature of retail sales growth and an example of how weather estimates are typically constructed. The positive association between retail sales growth and sentiment is also standard. The additional feature of this simple model, as shown in my figure 1, is that unusually cold weather weighs more on retail spending growth when sentiment is high, measured here as one standard deviation above average.
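The interaction regression is straightforward to reproduce in code. The sketch below is my own illustration, not the estimation code behind the comment: it simulates synthetic monthly data from the coefficients reported above and recovers them by ordinary least squares; the variable names and the data-generating process are assumptions.

```python
import numpy as np

# Illustrative OLS sketch of a retail-sales-growth regression with an
# HDD x sentiment interaction, fit to synthetic (not actual) data.
rng = np.random.default_rng(0)
n = 2000  # synthetic monthly observations

hdd = rng.standard_normal(n)      # standardized unusual heating degree days
sent = rng.standard_normal(n)     # standardized consumer sentiment
hdd_lag = np.r_[0.0, hdd[:-1]]    # one-month lag of unusual HDD

# Data-generating process built from the coefficients reported in the text.
rs_growth = (0.28 - 0.25 * hdd + 0.14 * hdd_lag + 0.10 * sent
             - 0.09 * hdd * sent + 0.3 * rng.standard_normal(n))

# Regressors: constant, HDD, lagged HDD, sentiment, and the interaction.
X = np.column_stack([np.ones(n), hdd, hdd_lag, sent, hdd * sent])
beta, *_ = np.linalg.lstsq(X, rs_growth, rcond=None)
print(np.round(beta, 2))  # close to [0.28, -0.25, 0.14, 0.10, -0.09]
```

The negative interaction coefficient is what drives the figure 1 result: the weather drag on spending growth is larger when sentiment is high.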

Intuitively, it makes sense that if economic activity is picking up and consumer sentiment is high, a severe winter storm would imply a large drag on growth, since there is more growth to disrupt relative to the counterfactual world of normal weather. In isolating the impact of weather, it is better to remove the "average sentiment" heating-degree effect, which is closest in spirit to the authors' seasonally-and-weather-adjusted data, but that does not mean that all of the weather impact has been removed. And this also leaves open an interesting question about how consumers or employers interpret these weather shocks, which in real time would be hard to distinguish from other economic shocks. With its careful technical treatment of estimating weather effects, this paper should serve as an invitation to think more about weather's impact on the economy.

[FIGURE 1 OMITTED]

REFERENCES FOR THE SAHM COMMENT

Auerbach, Alan J., and Yuriy Gorodnichenko. 2012. "Measuring the Output Responses to Fiscal Policy." American Economic Journal: Economic Policy 4, no. 2: 1-27.

Barsky, Robert B., and Jeffrey A. Miron. 1989. "The Seasonal Cycle and the Business Cycle." Journal of Political Economy 97, no. 3: 503-34.

FOMC (Federal Open Market Committee). 2010. "Minutes of the Federal Open Market Committee: March 16, 2010." Washington: Board of Governors of the Federal Reserve System. http://www.federalreserve.gov/monetarypolicy/fomcminutes20100316.htm

--. 2011. "Minutes of the Federal Open Market Committee: March 15, 2011." Washington: Board of Governors of the Federal Reserve System. http://www.federalreserve.gov/monetarypolicy/fomcminutes20110315.htm

--. 2012. "Minutes of the Federal Open Market Committee: March 13, 2012." Washington: Board of Governors of the Federal Reserve System. http://www.federalreserve.gov/monetarypolicy/fomcminutes20120313.htm

--. 2013. "Minutes of the Federal Open Market Committee: March 19-20, 2013." Washington: Board of Governors of the Federal Reserve System. http://www.federalreserve.gov/monetarypolicy/fomcminutes20130320.htm

--. 2014. "Minutes of the Federal Open Market Committee: March 18-19, 2014." Washington: Board of Governors of the Federal Reserve System. http://www.federalreserve.gov/monetarypolicy/fomcminutes20140319.htm

--. 2015. "Minutes of the Federal Open Market Committee: March 17-18, 2015." Washington: Board of Governors of the Federal Reserve System. http://www.federalreserve.gov/monetarypolicy/fomcminutes20150318.htm

Gilbert, Charles E., Norman J. Morin, Andrew D. Paciorek, and Claudia R. Sahm. 2015. "Residual Seasonality in GDP." FEDS Notes. Washington: Board of Governors of the Federal Reserve System.

Goldstein, Steve. 2014. "March Was Colder but Less Snowy Than Usual, So Data Should Get Lift, Forecaster Says." Blog post, April 1. MarketWatch.

Macroeconomic Advisers. 2014. "Elevated Snowfall Reduced Q1 GDP Growth 1.4 Percentage Points." Blog post, April 16. http://www.macroadvisers.com/ 2014/04/elevated-snowfall-reduced-q1-gdp-growth-1-4-percentage-points/

McDonald-Johnson, Kathleen M., Brian Monsell, Ryan Fescina, Roxanne Feldpausch, Catherine C. Harvill Hood, and Monica Wroblewski. 2010. "Seasonal Adjustment Diagnostics." Census Bureau Guideline, Version 1.1. Washington: U.S. Census Bureau. https://www.census.gov/ts/papers/G18-0_v1.1_Seasonal_Adjustment.pdf

(1.) I am thankful to Steve Braun, Tyler Cowen, Charles Gilbert, Norman Morin, and Andrew Paciorek for helpful conversations that informed my comments, and to Erik Larsson for his great research assistance. These are my views and are not necessarily shared by others in the Federal Reserve System or the U.S. government.

(2.) See Bureau of Labor Statistics, "Employment Situation Technical Note," February 5, 2016, http://www.bls.gov/news.release/empsit.tn.htm.

(3.) The regression is estimated with monthly data from January 1999 to July 2015, and the [R.sup.2] is 0.17. All of the coefficients are statistically significant at the 10 percent level. The dependent variable is seasonally adjusted retail sales excluding autos, gasoline, and building materials; this is the portion of the retail sales data used by the Bureau of Economic Analysis in its estimate of personal consumption expenditures, and it accounts for roughly one-fifth of GDP.
Table 1. Events Noted as Obscuring Underlying Economic
Conditions in March FOMC Minutes, 2010-15

                  Weather                       Non-weather

2015   "unseasonably cold winter       "labor disputes at West Coast
       weather"                        ports"

2014   "unusually cold and snowy       "partial government shutdown"
       winter weather"

2013   --                              "federal spending
                                       sequestration"

2012   "unseasonably warm weather"     --

2011   "weather-related distortions    "earthquake, tsunami"
       in various indicators"

2010   "adverse effects of the         "waning effects of fiscal
       snowstorms"                     stimulus"

Sources: FOMC (2010, 2011, 2012, 2013, 2014, 2015).


GENERAL DISCUSSION Jonathan Pingle wondered if Michael Boldin and Jonathan Wright had checked the stability of the coefficients that they estimated in their model. Pingle noted that, in his own work, the impact of snowfall appeared to have changed over the course of the past three decades.

Pingle also wondered if there is evidence of asymmetry in the data; that is, does better-than-normal weather by a certain amount have an equal and opposite effect to worse-than-normal weather by the same amount? As an example, he noted that a very cold March often seems to be followed by an instant bounce-back in April, whereas a very warm March sometimes seems to pull seasonal inflows forward over several months. He was curious whether that kind of asymmetry might be driving some of the lagged effects mentioned in the paper.

Along similar lines, David Romer wondered about the assumption often made by short-term forecasters of full bounce-back due to the effects of weather; on average, that is, are the negative effects of bad weather made up the following month as the weather on average returns to normal? For example, if weather reduced employment in one month by 50,000 jobs, should it really be assumed that the next month is going to add those 50,000 jobs? Perhaps that would make sense, he noted, in a world where firms have a set number of people they want to hire, and they can just go out and get them. But perhaps matches that do not occur in one month are not magically formed in the next.

Pingle wondered if Boldin and Wright had given any thought to the implications of the difference observed between the weather effects in the initial release of the Current Employment Statistics (CES) data and the weather effects in the revised data. Sometimes, the initial release seems to lack a significant weather effect, but it is more pronounced upon revision. He wondered if it could be the case that more weather-affected establishments were not reporting in as timely a manner. If that were the case, then applying Boldin and Wright's methodology to the initial release of the data could offset a negative weather effect not yet in the data, thus overstating the month's employment.

Jeff Campbell wondered about the implications of the model as it relates to forecasting. He agreed with discussant Katharine Abraham, who had noted that one of the key goals of weather adjustment might be to produce data series more suitable for short-term forecasting, noting that there are alternative means of doing that. In the methodology implemented by Boldin and Wright, weather adjustment is applied to an economic series before being put into a forecasting model. Campbell wondered if a principal component of the weather adjustment could be applied to the forecasting model itself. Alan Blinder took issue with the claim that a goal of adjusting data in the first place is to make them more suitable for short-term forecasters, noting that a broader group of professionals is interested in adjusted data. Campbell also wondered if the authors had any results relating to inventories.

Abraham also had suggested that another goal of adjusting data might be to produce series that are easier for ordinary people to understand. Blinder took strong issue with this suggestion as well. He noted that adjusted data are used by only a small cadre of experts, and that ordinary people live in a real world that is not seasonally or weather adjusted. For example, if an ordinary person wanted to get a job in retail, it would be easier for her to look during the Christmas season rather than in January; if she wanted to be a lifeguard, she would have an easier time finding a job in early summer rather than in October. For the small cadre of experts, however, these adjustments are really useful.

Valerie Ramey, who had dealt extensively with weather effects in her work, remarked on how useful the Boldin and Wright methodology was. She appreciated Boldin and Wright's systematic approach to removing large weather-related outliers from the data. Ramey suggested that some of the assumptions made by Boldin and Wright were appropriate, while others might need to be loosened. She agreed that the effects of lags considered by Boldin and Wright were definitely necessary, citing her experience with auto assembly plants. She recounted the great blizzard of 1978, in which assembly line workers were able to return to work, but the plants remained closed because the blizzard had prevented the delivery of parts.

Ramey took issue with the authors' model identification in two aspects. First, she questioned the assumption that weather occurring after the 12th day of the month should not have any effect on employment data. In most surveys that measure employment data, the reference period is generally the calendar week or pay period that contains the 12th day of the month. She suggested that the authors consider at least a few days after the 12th, noting that forecasts of abnormal weather in the near future may actually affect employment in the present. Steve Braun echoed Ramey's concerns, noting that if the 12th occurred on a Sunday, then as many as five extra days might need to be considered.

Ramey also questioned the assumption that unusual weather events do not have permanent effects, citing a growing environmental literature. She noted, as examples, that hurricanes hitting small islands, or Hurricane Katrina hitting New Orleans, certainly had some permanent effects on employment.

Robert Gordon commented on the relationship between GDP and payroll employment, namely productivity. Payroll data do not reflect big weather events as strongly as GDP data do, resulting in inflated estimates of productivity and extremely high positive correlations between output and productivity. Looking at detrended levels of output and productivity over the last five to twenty years, Gordon noted, it is clear that there is no longer any short-term positive correlation between productivity and output. He posited that the historically low productivity growth data over the last five years reflect structural rather than cyclical changes in the economy.

Braun praised Boldin and Wright's methods as clear improvements over previous efforts to model weather adjustment. He suggested that it might be useful to apply the model not only to employment, as Boldin and Wright do, but also to measures such as work week and man-hours, which are more sensitive to the effects of weather. He noted that the exercise might be particularly interesting because the correlation between man-hours and GDP is much stronger than between employment and GDP, a concern raised earlier by Gordon.

Christopher Carroll pointed out that there may be substantively important issues on which forecasters could reach the wrong conclusions about the underlying momentum of the economy if they did not take into account some kind of weather adjustment in their forecasts. According to Boldin and Wright, the serial correlation of growth is substantially greater when adjusted for weather. Historically, the debate about whether or not the serial correlation of a variable was important centered on whether or not consumption followed a random walk. Carroll believed that evidence seemed overwhelming that there was a lot more momentum in consumption growth than was apparent originally because weather effects were adding noise to quarterly numbers. He was enthusiastic about the work put forward by Boldin and Wright, speculating that it might have real consequences for how business cycle models are calibrated.

Andrew Abel commented on the differences between the Current Population Survey (CPS) and the CES in how they measure employment. In the CES, abnormal weather that prevents people from going to work reduces the payroll employment count, since the data are based on surveys of businesses. In the CPS, however, an individual who does not go to work due to abnormal weather may still be counted as employed if weather or some other reason is given for not working. Abel wondered if Boldin and Wright were aware of how many people responded to the CPS in this way, and how those magnitudes compared with the ones the authors calculated with their weather-adjustment model.

Adele Morris was interested in understanding regional and local labor market vulnerabilities to extreme weather events, and promoted Abraham's suggestion to have regional data adjustments in addition to the national adjustments made by Boldin and Wright. If more extreme weather events were on the horizon, it would be beneficial to have a deeper understanding of the regional and local vulnerabilities. In addition to trying to take out weather information to see what remains, she suggested that what is taken out may be extremely interesting to people who are thinking about local or regional policies to adapt to a changing climate.

Discussant Claudia Sahm brought up the question of when it is appropriate to adjust for weather, and which series to adjust. One could argue that the Census Bureau is too conservative in deciding which series to adjust; that is, it should be adjusting more series than it currently does. When adjusting a series for weather effects, it is important that the effect pulled out is in fact a weather effect, and not some other kind of effect. It is clear that employment data in construction should be adjusted because there are clear and stable patterns. However, other series might not benefit from weather adjustment, since their weather effects might not be stable or significant enough.

Sahm believed that looking at disaggregated data could be important, for instance in analyzing the behaviors of consumer spending. In the case of motor vehicle consumption, bad weather may deter the purchase of a vehicle in one month, but the vehicle would almost certainly be purchased in the next month. On the other hand, if a consumer was prevented from going out to dinner or purchasing something for the holidays, persistent bad weather may actually prevent those transactions from ever happening in the near future.

Responding to Pingle's and Romer's question of asymmetry, Wright noted that he and Boldin looked into the issue, but did not find much evidence. The authors described an experiment in which they considered weather in the previous month and weather in the previous 2 months; they found that bad weather one month prior did lower the level of employment, but 2 months prior did not. He conceded that it is not correct to say that there is complete bounce-back immediately, but that it is not very far off.

In response to Ramey's and Braun's concerns about abnormal weather after the 12th day of the month, Wright stated that he and Boldin did test the effects of weather after the 12th, though only in a crude way: adding extra variables for the weather on the 13th and 14th, they found these to be insignificant in the aggregate. Wright agreed with Ramey that some weather events may have a permanent impact, but argued that such events are not what the model is meant to isolate, which is the effect of shocks such as a snowy winter or a colder-than-normal January.

Wright responded to the question raised by Sahm regarding which series to adjust. If the ultimate interest is in aggregates rather than disaggregates, then Wright believes that statistical agencies are currently too conservative and too willing to just decide not to seasonally adjust a series at all.

Wright noted that while technology for weather adjustment is available and hopefully useful, it is not intended to replace the data. It is useful to have some way to figure out what the effects of weather are, and for statistical agencies to have some advantage in being able to do that.

(1.) In November 2013, the Survey of Professional Forecasters expected a seasonally adjusted increase of 2.5 percent in 2014Q1. The original report for the quarter was 0.1 percent, later revised to -2.1 percent, and subsequently revised to -0.9 percent in the 2015 annual NIPA revisions that included changes to the seasonal adjustment process, as discussed in section III below. With growth snapping back to 4.6 percent in the second quarter, it is highly plausible that weather played a significant role in the decline.

(2.) Even when agencies do this, their goal is just to prevent the anomalous weather from distorting seasonals, not to actually adjust the data for the effects of the weather. We discuss this in more detail later.

(3.) The HDD at a given station on a given day is defined as max(18.3 - [tau], 0), where [tau] is the average of the maximum and minimum temperatures in degrees Celsius.
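Footnote 3's definition maps directly to code. A minimal sketch (the function name is mine):

```python
# Heating degree days per footnote 3: max(18.3 - tau, 0), where tau is the
# average of the daily maximum and minimum temperatures in degrees Celsius.
def heating_degree_days(tmax_c: float, tmin_c: float) -> float:
    tau = (tmax_c + tmin_c) / 2.0
    return max(18.3 - tau, 0.0)

print(heating_degree_days(5.0, -3.0))   # cold day: tau = 1.0, HDD = 17.3
print(heating_degree_days(30.0, 20.0))  # warm day: tau = 25.0, HDD = 0.0
```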

(4.) An alternative measure of snowfall, used by Macroeconomic Advisers (2014), is based on a data set of daily county-level snowfall maintained by the National Centers for Environmental Information. This clearly has the advantage of greater cross-sectional granularity. However, these data only go back to 2005. Our data go much further back, allowing us to construct a longer history of snowfall effects and to measure normal snowfall from 30-year averages.

(5.) There are actually ways in which weather after the 12th could matter for CES employment that month. For example, suppose that a new hire was planning to begin work on the 13th and the 13th happens to be the last day of the pay period. She would be counted as employed in that month. But if bad weather caused the worker's start date to be delayed, then she would not be defined as employed in that month. However, we do evaluate the possibility that weather just after the 12th could affect employment for that month.

(6.) This is the number with a job, not at work, in nonagricultural industries (series LNU02036012).

(7.) Scientists agree that economic activity influences the climate, but this does not mean that it influences deviations of weather from seasonal norms.

(8.) Note also that there is a timing issue in using the CPS weather-related absences from work measure. That measure specifically refers to absence from work in the Sunday-Saturday period bracketing the 12th of the month. This lines up with the employment definition in the CES only for establishments with a Sunday-Saturday weekly pay period.

(9.) This is the value in 2010 dollars, deflated by the price deflator for construction, as discussed in Blake, Landsea, and Gibney (2011).

(10.) We estimate that every billion dollars (in 2010 dollars) in unusual hurricane damage increases employment in that month by 287 jobs, with a 95 percent confidence interval of [-919, 1,493].

(11.) If one were instead trying to model regional employment data, then it would make sense to use regional weather data. However, as discussed earlier, the national employment data receive almost all of the focus in the media and among economists, policymakers, and traders in financial markets, and these data cannot be built up from state-level data. In addition, there may be spillover effects of weather in one region on economic activity in other regions, such as a large local snowstorm disrupting transportation between regions. Our equations fit national employment to national weather series in a parsimonious manner to allow for these potential effects.

(12.) The weight given to the 30 days up to and including the 12th of the month is not constant--this is the average weight given to days in this window. The actual weights are shown in the lower panel of figure 1.

(13.) ARIMA stands for autoregressive integrated moving average.

(14.) North American Industry Classification System.

(15.) In specification 1 for aggregate employment data, let [??] and [??] denote pseudo-maximum likelihood estimates of a and b. We measure the unusual temperature for month t as [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is the unusual temperature on the 12th day of month t.
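The exact weighting expression in this footnote did not survive extraction. The sketch below is a guess at its shape only: geometrically declining weights b-hat^i over the 30 days up to and including the 12th (consistent with footnote 12's description of the window), normalized to sum to one. The geometric form, the function name, and the array convention are all my assumptions, not the authors' published formula.

```python
import numpy as np

def monthly_unusual_temperature(daily_unusual: np.ndarray, b_hat: float) -> float:
    """Weighted average of daily unusual temperatures over the 30 days up to
    and including the 12th of the month. daily_unusual[0] is the 12th itself,
    daily_unusual[29] is 29 days earlier. Geometric weights are an assumption."""
    weights = b_hat ** np.arange(30)   # weight b_hat**i on the day i days back
    weights = weights / weights.sum()  # normalize so the weights sum to one
    return float(weights @ daily_unusual)

# A uniform 1-degree anomaly averages back to roughly 1 for any 0 < b_hat < 1.
print(monthly_unusual_temperature(np.ones(30), 0.9))
```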

(16.) Our weather data go back to 1960, allowing us to measure unusual weather by subtracting off a backward-looking 30-year average.
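Footnote 16's construction of "unusual" weather is simple to sketch: subtract a backward-looking 30-year average for the same calendar day. This is my illustration of that idea; the function and variable names are hypothetical.

```python
import numpy as np

def unusual_value(history: np.ndarray, current: float) -> float:
    """Deviation of today's reading from the mean of the same calendar day
    over the previous 30 years (the backward-looking window of footnote 16)."""
    assert history.shape == (30,), "expect one observation per prior year"
    return current - float(history.mean())

# A 10C reading against a 30-year same-day mean of 7C is +3C of unusual warmth.
print(unusual_value(np.full(30, 7.0), 10.0))  # -> 3.0
```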

(17.) This is not what the BLS currently does. The BLS adjusts for specific extreme weather events on a case-by-case basis before computing seasonal factors, rather than doing so automatically as we envision.

(18.) Our SA data differ somewhat from the official SA data because we use current-vintage data and the current specification files. In contrast, the official seasonal factors in the CES are frozen as estimated five years after the data are first released. Also, we use the full sample back to 1990 for seasonal adjustment. Nevertheless, our SA and SWA data are completely comparable.

(19.) Even if unusual weather is prevented from affecting seasonal factors, the seasonal factors will eventually catch up to climate change, because we define unusual weather relative to a rolling 30-year average.

(20.) Wright (2013) argues that the job losses in the winter of 2008-09 produced an echo effect of this sort in subsequent years. The distortionary effects of the Great Recession on seasonals are of course far bigger than the effects of any weather-related disturbances.

(21.) See page II-1 of the Federal Reserve's 2007 Greenbook: http://www.federalreserve.gov/monetarypolicy/files/FOMC20070321gbpt220070314.pdf.

(22.) Note that there were very big snowstorms in three regions of the country in that month.

(23.) These are current-data-vintage numbers, with ordinary seasonal adjustment. The first released number for March 1993 was -22,000. The BLS employment situation write-up for that month made reference to the effects of the weather. But the BLS made no attempt to quantify the weather effect.

(24.) Means are not shown because they are close to zero by construction.

(25.) While including absences from work in specification 5 seldom makes a material difference, an exception is September 2008. In this month, the number who reported absence from work due to weather spiked to levels normally observed only in winter. We speculate that this might owe to the fact that Hurricane Ike was moving toward Texas during the survey week.

(26.) Although the BEA compiles NIPA data, seasonal adjustment is done at a highly disaggregated level, and many series are passed from other agencies to the BEA in seasonally adjusted form. As noted in Wright (2013) and Manski (2015), while the BEA used to compile not seasonally adjusted NIPA data, they stopped doing so a few years back as a cost-cutting measure. Happily, the June 2015 Survey of Current Business indicated plans to resume publication of not seasonally adjusted aggregate data, but this will still not allow researchers to replicate the seasonal adjustment process.

(27.) The inclusion of these quarterly dummies is motivated by "residual seasonality" discussed further below.

(28.) Macroeconomic Advisers (2014) find that snowfall effects on growth are followed by effects of opposite sign and roughly equal magnitude in the next quarter.

(29.) On the other hand, Gilbert and others (2015) find no statistically significant evidence of residual seasonality. The two papers are asking somewhat different questions. Gilbert and others (2015) are asking a testing question, and, while the hypothesis is not rejected, the p values are right on the borderline despite a short sample. Rudebusch, Wilson, and Mahedy (2015) are applying an estimation methodology.

MICHAEL BOLDIN

Federal Reserve Bank of Philadelphia

JONATHAN H. WRIGHT

Johns Hopkins University
Table 1. Weather Stations Used to Measure National Weather (a)

MSA               Station                                MSA              Station

New York          Central Park                           San Antonio      San Antonio Intl. Airport
Los Angeles       Los Angeles Intl. Airport              Orlando          Orlando Intl. Airport
Chicago           Chicago O'Hare Intl. Airport           Cincinnati       Cincinnati/Northern Kentucky Intl. Airport
Dallas            Dallas/Fort Worth Intl. Airport        Cleveland        Cleveland Hopkins Intl. Airport
Philadelphia      Philadelphia Intl. Airport             Kansas City      Kansas City Intl. Airport
Houston           George Bush Intercontinental Airport   Las Vegas        McCarran Intl. Airport
Washington        Washington Dulles Intl. Airport        Columbus         Port Columbus Intl. Airport
Miami             Miami Intl. Airport                    Indianapolis     Indianapolis Intl. Airport
Atlanta           Hartsfield-Jackson Intl. Airport       San Jose         Los Gatos
Boston            Logan Intl. Airport                    Austin           Camp Mabry
San Francisco     San Francisco Intl. Airport            Virginia Beach   Norfolk Intl. Airport
Detroit           Coleman A. Young Intl. Airport         Nashville        Nashville Intl. Airport
Riverside         Riverside Fire Station                 Providence       T. F. Green Airport
Phoenix           Phoenix Sky Harbor Intl. Airport       Milwaukee        Gen. Mitchell Intl. Airport
Seattle           Seattle-Tacoma Intl. Airport           Jacksonville     Jacksonville Intl. Airport
Minneapolis       Minneapolis-Saint Paul Intl. Airport   Memphis          Memphis Intl. Airport
San Diego         San Diego Intl. Airport                Oklahoma City    Will Rogers World Airport
St. Louis         Lambert-St. Louis Intl. Airport        Louisville       Louisville Intl. Airport
Tampa             Tampa Intl. Airport                    Hartford         Bradley Intl. Airport
Baltimore         Baltimore/Washington Intl. Airport     Richmond         Richmond Airport
Denver            Stapleton/Denver Intl. Airport (b)     New Orleans      Louis Armstrong Intl. Airport
Pittsburgh        Pittsburgh Intl. Airport               Buffalo          Buffalo Niagara Intl. Airport
Portland (Ore.)   Portland Intl. Airport                 Raleigh          Raleigh-Durham Intl. Airport
Charlotte         Charlotte Douglas Intl. Airport        Birmingham       Birmingham Airport
Sacramento        Sacramento Executive Airport           Salt Lake City   Salt Lake City Intl. Airport

(a.) This table lists the 50 weather stations used to construct
national average daily temperature, snowfall, and HDD data. Each
weather station corresponds to one of the 50 largest MSAs by
population in the 2010 Census.

(b.) Stapleton International Airport was replaced by Denver
International Airport in 1995.

Table 2. Estimated Effects of Unusual Weather on Aggregate Employment

                               Specification (a)

                          1             2             3

[[gamma].sub.1]         16.4 **      -18.2 **     12.6
[[gamma].sub.2]         33.6 ***     -38.6 ***    28.8 ***
[[gamma].sub.3]         23.3         -26.8 ***    16.0 **
[[gamma].sub.4]         -8.5           2.9       -18.1 *
[[gamma].sub.5]          8.7          -4.1        20.7
[[gamma].sub.6]         22.7          55.0        24.4
[[gamma].sub.7]         29.5       1,072          26.5
[[gamma].sub.8]         30.5        -183.4        26.3
[[gamma].sub.9]          6.5         -42.7         1.1
[[gamma].sub.10]        18.6 *       -25.9 *      14.0
[[gamma].sub.11]        25.2 *       -36.3 *      20.7
[[gamma].sub.12]        16.0 *       -16.4        11.0
[[gamma].sub.13]                                  -7.62 ***
[[gamma].sub.14]
log-likelihood (b)   -1968.9       -1970.1       -1965.5

                                Specification (a)

                          4             5             6

[[gamma].sub.1]         13.8 **       12.5 *        13.7 **
[[gamma].sub.2]         23.3 **       19.0 **       22.6 **
[[gamma].sub.3]         18.3 **       20.0          19.0 ***
[[gamma].sub.4]         -6.3         -15.6 *       -10.6
[[gamma].sub.5]         12.3          16.8          16.3
[[gamma].sub.6]         22.3          24.6          15.9
[[gamma].sub.7]         30.6          56.0          38.6
[[gamma].sub.8]         30.3          44.5          29.5
[[gamma].sub.9]          6.3          26.5         -11.2
[[gamma].sub.10]        16.7          23.6 **       13.5
[[gamma].sub.11]        21.5          17.0          15.4
[[gamma].sub.12]        14.7          11.5          15.4
[[gamma].sub.13]       -37.74 **     -20.36        -39.1 **
[[gamma].sub.14]                      -0.29         12.3 **
log-likelihood (b)   -1964.7       -1952.3       -1961.9

                         Specification (a)

                          7             8

[[gamma].sub.1]         23.4          12.3 *
[[gamma].sub.2]         25.4 ***      23.4 ***
[[gamma].sub.3]         27.3 ***      17.9
[[gamma].sub.4]         11.8         -10.3
[[gamma].sub.5]         28.6 **       17.0
[[gamma].sub.6]          6.4          15.0
[[gamma].sub.7]         -6.4          28.9
[[gamma].sub.8]         18.1 **       26.0
[[gamma].sub.9]         12.5          12.0
[[gamma].sub.10]        18.9 *        20.3 **
[[gamma].sub.11]        23.9          22.6 **
[[gamma].sub.12]        22.4 **       13.0
[[gamma].sub.13]       -77.63 ***    -24.73 *
[[gamma].sub.14]
log-likelihood (b)   -1957.9       -1964.2

Likelihood ratio tests (c)   p values   Conclusion

[H.sub.0]: No weather        0.00     Reject exclusion of temperature
  vs. Specification 1
[H.sub.0]: Specification     0.01     Reject exclusion of snow
  1 vs. Specification 3
[H.sub.0]: Specification     0.00     Reject exclusion of snow (RSI)
  1 vs. Specification 4
[H.sub.0]: Specification     0.00     Reject exclusion of absences
  4 vs. Specification 5
[H.sub.0]: Specification     0.02     Reject exclusion of precipitation
  4 vs. Specification 6
[H.sub.0]: Specification     0.00     Reject exclusion of lags
  4 vs. Specification 7
[H.sub.0]: Specification     0.59     Do not reject exclusion of 13th
  4 vs. Specification 8                 and 14th

Source: Authors' analysis, based on CES survey data.

(a.) The top panel of the table lists the parameter estimates from
fitting specifications 1 through 8 (see text) to aggregate
employment data. In all cases, [[gamma].sub.1], ..., [[gamma].sub.12]
refer to the coefficients on the unusual temperature variable
interacted with dummies for January to December, respectively
(except heating degree days for specification 2). Meanwhile,
[[gamma].sub.13] refers to various snow effects (defined in text)
and [[gamma].sub.14] refers to the effects of seasonally adjusted
self-reported work absences due to weather and precipitation in
specifications 5 and 6, respectively. Statistical significance
indicated at the * 10 percent, ** 5 percent, and *** 1 percent
levels. Data units are as follows: Employment is measured in
thousands, temperature is measured in degrees Celsius, snowfall
(non-RSI) is measured in millimeters, snowfall (RSI) is measured in
the scale that defines the index, precipitation is measured in
millimeters, and work absences are measured in thousands.

(b.) This row gives the log-likelihood of each model. The
specification with no weather effects at all has a log-likelihood
of -1993.7.

(c.) Bottom panel of the table reports p values from various
likelihood ratio tests comparing alternative specifications.
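The tests in the bottom panel can be reproduced from the log-likelihoods reported in the table. A minimal sketch, with the function name and the degrees of freedom in the example as our assumptions:

```python
from scipy.stats import chi2

def lr_test(loglik_restricted, loglik_unrestricted, df):
    """Likelihood ratio test against the asymptotic chi-squared distribution.

    df is the number of restrictions, i.e. the difference in free
    parameters between the two nested specifications.
    """
    stat = 2.0 * (loglik_unrestricted - loglik_restricted)
    return stat, float(chi2.sf(stat, df))
```

For example, comparing the model with no weather effects (log-likelihood -1993.7) to specification 4 (-1964.7) gives a statistic of 58.0; with 13 restrictions (the 13 weather coefficients in specification 4), the p value is far below any conventional level.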

Table 3. Weather Effect in Monthly Payroll
Changes, Top 10 Absolute Effects (a)

Month           Weather effect

March 1993           -178
March 2010           +144
February 1996        +137
January 1996         -137
April 1993           +130
February 2010        -127
March 1999           -115
February 2007        -105
February 1999        +90
March 2007           +87

Source: Authors' analysis, based on CES survey data.

(a.) Shows the difference in monthly payroll changes (in thousands)
that are SA less those that are SWA, for the 10 months where the
effects are biggest in absolute magnitude. These are constructed by
applying either the seasonal adjustment or the seasonal-and-weather
adjustment to all 150 CES disaggregates, and then adding them up,
as described in the text. The exercise uses temperature interacted
with month dummies and RSI snowfall as weather variables
(corresponding to specification 4).
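Because adjustment is applied series by series, the aggregation in note (a) reduces to a sum of series-level differences. A hedged sketch, with array names and shapes as our assumptions:

```python
import numpy as np

def weather_effect(sa_changes, swa_changes):
    """Aggregate weather effect: SA payroll changes less SWA changes.

    Each input has shape (n_series, n_months), holding month-over-month
    changes for every CES disaggregate under ordinary seasonal
    adjustment (SA) and under seasonal-and-weather adjustment (SWA).
    The aggregate effect is the difference of the summed series.
    """
    sa = np.asarray(sa_changes, dtype=float)
    swa = np.asarray(swa_changes, dtype=float)
    return sa.sum(axis=0) - swa.sum(axis=0)
```

In the paper's application the inputs would be the 150 CES disaggregates described in the text.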

Table 4. Weather Effect in Monthly Payroll Changes, Summary
Statistics (a)

Month       Standard deviation   Minimum   Maximum

January             42            -137       53
February            58            -127       137
March               68            -178       144
April               44             -57       130
May                 24             -49       53
June                17             -36       27
July                22            -29        69
August              18             -63       17
September           15             -24       31
October             20             -52       32
November            26             -40       76
December            38             -66       63
Overall             36            -178       144

Source: Authors' analysis, based on CES survey data.

(a.) Shows the standard deviation, minimum, and maximum of the
monthly payroll changes (in thousands) that are SA less those that
are SWA, broken out by month. See note to table 3.

Table 5. Autocorrelation and Standard Deviation of Month-over-Month
Changes in SA and SWA Nonfarm Payroll Data, by Sector (a)

                          Autocorrelation      Standard deviation

Sector                    SA data   SWA data   SA data   SWA data

Mining and logging         0.662     0.686       5.1       5.0
Construction               0.586     0.768      39.0       35.9
Manufacturing              0.739     0.756      50.4       50.2
Trade, transportation,     0.631     0.651      53.2       52.7
  and utilities
Information                0.625     0.645      23.2       23.0
Professional and           0.572     0.609      53.7       52.9
  business services
Leisure and hospitality    0.324     0.374      28.6       27.2
Other services             0.496     0.533       8.9       8.8
Government                 0.036     0.034      51.5       51.2
Total                      0.800     0.840      214.4     210.7

Source: Authors' analysis, based on CES survey data.

(a.) Reports the first-order autocorrelation and standard deviation
of seasonally adjusted (SA) month-over-month payroll changes (in
thousands; total and by industry) and of the corresponding
seasonally-and-weather-adjusted (SWA) data. The exercise uses
temperature interacted with month dummies and RSI snowfall as
weather variables (corresponding to specification 4).
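The first-order autocorrelations in table 5 can be computed as below. This is a minimal sketch using the standard lag-1 sample estimator, which may differ in small-sample details from the authors' exact calculation:

```python
import numpy as np

def lag1_autocorr(x):
    """First-order sample autocorrelation of a series of
    month-over-month payroll changes."""
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    return float(np.dot(dev[1:], dev[:-1]) / np.dot(dev, dev))
```

The comparison in table 5 is then simply `lag1_autocorr` and `np.std` applied to the SA and SWA versions of each sector's series.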

Table 6. Weather Effect on Monthly Payroll Changes,
Top 10 Absolute Effects Using Specification 7 (a)

Month           Weather effect

March 1993           -196
January 1996         -167
February 2010        -165
March 2010           +147
May 1993             +120
May 2003             +118
February 2007        -102
February 2009        +102
April 1990           -98
May 1991             -95

Source: Authors' analysis, based on CES survey data.

(a.) Shows the monthly payroll changes (in thousands) that are SA
less those that are SWA, for the 10 months where the effects are
biggest in absolute magnitude. These are constructed by applying
either the seasonal adjustment or the seasonal-and-weather
adjustment to all 150 CES disaggregates, and then adding them up,
as described in the text. The exercise uses temperature interacted
with month dummies and RSI snowfall along with two monthly lags as
weather variables (corresponding to specification 7).

Table 7. Coefficient Estimates for Equation 3, 1990Q1-2015Q2 (a)

                    Real       Personal      Private
                     GDP      consumption   investment

[[gamma].sub.1]    0.08 ***    0.04 **       0.19
                  (0.03)      (0.02)        (0.12)
[[gamma].sub.2]    0.11 **     0.06          0.29
                  (0.05)      (0.05)        (0.28)
[[gamma].sub.3]    0.04        0.01         -0.33
                  (0.04)      (0.05)        (0.37)
[[gamma].sub.4]    0.05        0.02         -0.09
                  (0.04)      (0.04)        (0.22)
[[gamma].sub.5]    0.22       -0.04          7.28 *
                  (0.80)      (0.57)        (4.17)

                   Government
                  expenditures   Exports    Imports

[[gamma].sub.1]    0.06 *         0.26 **    0.15 *
                  (0.03)         (0.11)     (0.09)
[[gamma].sub.2]   -0.08           0.28       0.09
                  (0.06)         (0.18)     (0.13)
[[gamma].sub.3]    0.07           0.08      -0.27
                  (0.05)         (0.23)     (0.19)
[[gamma].sub.4]    0.07           0.12      -0.10
                  (0.05)         (0.14)     (0.11)
[[gamma].sub.5]   -2.83 **        0.68      -1.21
                  (1.41)         (2.90)     (2.85)

Source: Authors' analysis, based on September 2015 vintage NIPA
data.

(a.) Data units are as follows: NIPA growth rates are measured in
annualized percentage points, temperature is measured in degrees
Celsius, and snowfall is measured in millimeters. Standard errors
in parentheses. Statistical significance indicated at the * 10
percent, ** 5 percent, and *** 1 percent levels.
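The restriction that no weather shock permanently affects the level of GDP is built into the d_{1t}, ..., d_{4t} regressors of equation 3. A sketch of their construction, with the function name as our assumption:

```python
import numpy as np

def payback_dummies(quarters):
    """Build the d_{1t}, ..., d_{4t} regressors from equation 3.

    d_j equals 1 when the observation falls in quarter j, -1 in the
    following quarter, and 0 otherwise, so each column sums to (nearly)
    zero and any weather effect on growth is eventually paid back.
    `quarters` is a time-ordered sequence of labels in {1, 2, 3, 4}.
    """
    q = list(quarters)
    d = np.zeros((len(q), 4))
    for t, qt in enumerate(q):
        d[t, qt - 1] = 1.0
        if t + 1 < len(q):
            d[t + 1, qt - 1] = -1.0
    return d
```

Because every +1 is offset by a -1 one quarter later, the cumulative (level) effect of each weather coefficient nets to zero within the sample, which is the restriction tested in the text.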

Table 8. Adjustments to NIPA Variable Growth Rates in 2015 (a)

                                       SA        SWA        SSWA
                          Quarter   data (b)   data (c)   data (d)

Real GDP                    Q1        0.6        1.5        3.3
                            Q2        3.9        3.1        2.6
Personal consumption        Q1        1.7        2.0        2.4
                            Q2        3.6        3.2        3.4
Private investment          Q1        8.6        9.6        12.7
                            Q2        5.0        3.1        1.0
Government expenditures     Q1        -0.1       0.6        0.9
                            Q2        2.6        2.4        1.3
Exports                     Q1        -6.0       -3.6       2.2
                            Q2        5.1        3.0        1.0
Imports                     Q1        7.1        8.4        8.4
                            Q2        3.0        2.2        1.7

Source: Authors' analysis, based on September 2015 vintage
NIPA data.

(a.) Shows the quarter-over-quarter growth rates of real GDP and
its five components in 2015Q1 and 2015Q2. All entries are in
annualized percentage points.

(b.) Refers to seasonally adjusted data published by the BEA.

(c.) Refers to seasonally-and-weather-adjusted data using the
method described in section III.

(d.) Refers to seasonally-and-weather-adjusted data, as described
in section III, with a second round of seasonal adjustment
applied using the X-13 default settings.
COPYRIGHT 2015 Brookings Institution

Publication: Brookings Papers on Economic Activity, September 22, 2015, pp. 253-278.