Predicting annual energy use in buildings using short-term monitoring: the Dry-Bulb Temperature Analysis (DBTA) method.

LITERATURE REVIEW

A recently completed ASHRAE research project (RP-1404) was intended to develop and assess methods by which short-term in-situ monitoring of building energy use can serve as a workable alternative to yearlong monitoring in measurement and verification (M&V) projects. Two different approaches were explored: a coarse approach based on daily data and monitoring periods in monthly increments (addressed in this paper), and a finer approach based on hourly data and monitoring periods ranging from two weeks to the full year. This is the second of the two papers (see Singh et al. [2013] for the companion paper) summarizing research findings pertinent to the coarse approach. The results of the fine-grid approach are described in Abushakra and Paulus (2014a, 2014b, 2014c).

There are no absolute rules for determining the minimum acceptable length of the pre-retrofit period needed for identifying inverse regression models capable of accurately predicting annual building energy use. The conventional wisdom is that a full year of energy consumption data are needed, since such a period encompasses the entire range of variation of both the climatic conditions and the different operating modes of the building and of the HVAC system. However, in many cases, a full year of data may not be available, and one is constrained to develop predictive inverse models from shorter periods. Only a few studies (described below) have attempted to investigate the accuracy with which inverse models identified from short-term monitoring predict daily building energy use over the entire year. In general, such studies fell short of providing specific recommendations regarding the optimum length of the monitoring period and the optimum time or season in which to monitor. Other aspects, such as which variables to monitor and the types of inverse statistical models to consider, have reached a certain maturity, as described in numerous publications; for example, ASHRAE Guideline 14-2002 (ASHRAE 2002). It is important to distinguish between buildings whose energy use is climate dependent and those whose energy use is internal-load/schedule dependent. This paper focuses on the former class.

The first attempts in this area emerged in the early 1990s. The accuracy with which temperature-dependent regression models of energy use identified from short data sets (e.g., less than one year) are able to predict annual energy use was investigated by Kissock et al. (1993) using single-variable regression models, and by Katipamula et al. (1995a, 1995b) and Reddy et al. (1998, 2002) using multiple regression models. All of these studies reached the same general conclusions, which are summarized below.

Kissock et al. (1993) found that the average annual cooling prediction error of short-term data sets for several institutional buildings in central Texas decreased from 7.3% to 3.0%, and the average annual heating prediction error decreased from 27.5% to 12.9%, as the length of the data set increased from one month to five months. In this study, ambient dry-bulb temperature (DBT) was the only regressor considered. Katipamula et al. (1995a) stated that regression modeling can be accurate and reliable if several months of daily data (more than six months) are used to develop the model, and that otherwise the regression models can lead to significant errors in the prediction of annual energy consumption. Additional regressors, such as dew-point temperature (DPT) and internal loads, were included in their model. The mean bias error (MBE) of models based on short data sets (one month) varied from -15% to 40% in their study. They also noted that one of the major sources of error with short data sets is the insufficient range of variability in the regressor variables. They concluded that modeling the energy consumption of large commercial buildings with regression models requires twelve months of monthly data (utility bills) or at least three to six months of daily data, while (at that time) "no analysis has yet been done to determine how much data are required for hourly or hour of the day (HOD) regression models."

Abushakra (1997) studied the prediction accuracy of hourly regression energy use models identified from different lengths of monitoring periods for an office building in Montreal. This study used a total of 28 different combinations of regressors to develop 28 different stepwise multiple linear regression models. The hourly data for the whole year were divided into two seasons: heating and cooling. For each season, each of these 28 models was developed with one-week, two-week, one-month, two-month, three-month, four-month, five-month, and six-month periods of hourly data, which resulted in a total of 448 models. Going from a one-week to a two-week period of monitoring, the MBE of the hourly predictions for the whole heating season (7.5 months) dropped significantly, from a range of [-6.55% to 6.48%] to [-2.88% to 0.77%]. The change in the coefficient of variation (CV) was not significant: from a range of [0.14% to 0.18%] to [0.9% to 0.13%]. The MBE did not change substantially beyond a two-week period of monitored data. However, in that study, the two-week period of hourly monitored data was chosen intuitively and was not optimized.

Tests with synthetic data found that these observations are applicable to other types of models (say, four-parameter models) as well (Reddy et al. 1998). The best predictors of both annual cooling and heating energy use were found to be models from data sets with mean temperatures close to the annual mean temperature, and with the range of variation of daily temperature values in the data set encompassing as much of the annual variation as possible. It was concluded that one-month data sets in spring and fall, when the above condition applies, are frequently better predictors of annual energy use than five-month data sets drawn from a portion of winter or summer.

Analysis methods were also proposed involving a few weeks of monitored hourly data in conjunction with utility bills, which provide insights into internal loads and the manner in which the building is operated, in addition to capturing the widest range and the annual average of weather variables such as DBT and humidity ratio (Abushakra et al. 1999; Abushakra 2000; Abushakra and Claridge 2000). (The companion paper [Singh et al. 2013] also addressed this specific issue and provided additional impetus to this approach.) Subsequently, Abushakra (2000) developed an algorithm that checks the closeness of the outdoor dry-bulb temperature and humidity ratio of any consecutive two-week period of the year to the annual averages, while simultaneously checking the amplitude of its dry-bulb temperature and humidity ratio ranges against the corresponding annual values. The algorithm allows all possible consecutive two-week periods of the year to be ranked from best to worst. The procedure thus developed to select the "best" two-week period was termed the short-term monitoring for long-term prediction (SMLP) method.
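
For illustration only, one plausible way to rank consecutive two-week periods in the spirit of the SMLP description above is sketched below in Python. The equal-weighted scoring of window-mean closeness and window-range closeness for DBT and humidity ratio is an assumption made here for concreteness; the exact SMLP criteria and weights are those documented in Abushakra (2000).

```python
# Illustrative sketch (not the SMLP implementation): rank consecutive
# two-week periods by how closely their mean and range of DBT and humidity
# ratio match the annual mean and range. Scoring weights are assumptions.
import pandas as pd

def rank_two_week_periods(dbt: pd.Series, w: pd.Series) -> pd.DataFrame:
    """dbt, w: daily dry-bulb temperature and humidity ratio for one year,
    indexed by date. Returns two-week window end dates sorted best-first."""
    scores = []
    for series in (dbt, w):
        ann_mean = series.mean()
        ann_range = series.max() - series.min()
        roll = series.rolling(window=14)          # consecutive 14-day windows
        mean_gap = (roll.mean() - ann_mean).abs() / ann_range
        range_gap = (ann_range - (roll.max() - roll.min())).abs() / ann_range
        scores.append(mean_gap + range_gap)
    total = (scores[0] + scores[1]).dropna()      # indexed by window end date
    return total.sort_values().to_frame("score")  # lowest score = best window

# best_window_end = rank_two_week_periods(dbt, w).index[0]  # illustrative use
```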

To summarize, past studies have demonstrated that when a model is identified from short-term data that do not span the entire range of variation of the driving variables, erroneous/misleading predictions can result if the model is used outside that range. Thus, even before one attempts to develop a meaningful model from the monitored data, the range of the associated climatic data should meet certain criteria: (a) only if the monitoring (in our case, the in-situ tests) is performed during the swing seasons can one expect good long-term load predictions, and (b) there is no way of adjusting regression models so that they accurately predict annual energy use once they have been improperly identified from short data sets. These findings and the strategy suggested may, however, be unacceptable in several M&V projects, since one may not have the luxury of waiting until the climatic conditions are favorable to perform the in-situ tests. Though the recommendations from earlier work are consistent with our physical understanding, there is still a need to ascertain the extent to which incremental monitoring, and the data thus collected, provide "added value or new information" beyond the monitored data set already obtained.

DESCRIPTION AND INSIGHTS PROVIDED BY THE DBTA ANALYSIS METHOD

The SMLP method (Abushakra 2000), although quite accurate, uses a lengthy step-by-step procedure to reach the desired output. The present research led to a simpler and easier-to-implement method, called the dry-bulb temperature analysis (DBTA) method, which grew out of the conclusions of the literature review described above. The DBTA method suggests that one simply compute average DBT values over different consecutive months on an incremental-window basis and compare these with the annual average. The number of consecutive months needed for the corresponding average DBT to reach the annual average value of the location is the number of months needed to monitor the building so as to yield inverse models that accurately predict energy use over the whole year. Further, the DBTA offers a simple manner of ranking different start months in terms of how many consecutive months of monitoring (i.e., length of monitoring) are needed to obtain a data set rich enough to yield accurate predictive inverse models. Finally, the closer the averages of consecutive months are to the annual value, the better the predictive accuracy of the inverse models identified from the corresponding monitored data set. Thus, the DBTA method allows one to ascertain the best months of the year in which to initiate in-situ monitoring in a specific location and, further, provides an indication of the length of monitoring needed when one is constrained to start monitoring in an arbitrary month of the year.
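
As a concrete illustration of this incremental-window calculation, the sketch below computes, for a given start month, the number of consecutive months needed for the window-average DBT to reach the annual average, and then ranks all twelve start months. Python and pandas are used purely for illustration; the interpretation of "reach" as the first crossing of the annual mean, and the file and variable names, are assumptions.

```python
# Sketch of the DBTA incremental-window calculation described above.
# Assumes `dbt` is a pandas Series of daily dry-bulb temperatures with a
# DatetimeIndex covering one full year; names are illustrative only.
import numpy as np
import pandas as pd

def months_to_reach_annual_mean(dbt: pd.Series, start_month: int) -> int:
    """Consecutive months (starting at start_month, wrapping around the year)
    needed for the expanding window-average DBT to reach the annual average."""
    annual_mean = dbt.mean()
    month_order = [(start_month - 1 + k) % 12 + 1 for k in range(12)]
    prev_diff = None
    for n in range(1, 13):
        window = dbt[dbt.index.month.isin(month_order[:n])]
        diff = window.mean() - annual_mean
        # "Reaching" the annual average is interpreted here as the first time
        # the window average lands on, or crosses, the annual mean.
        if prev_diff is not None and (diff == 0 or np.sign(diff) != np.sign(prev_diff)):
            return n
        prev_diff = diff
    return 12  # a full year always matches the annual average by construction

# Illustrative use: rank start months (fewer months needed -> better rank).
# dbt = pd.read_csv("daily_dbt.csv", index_col=0, parse_dates=True)["DBT"]
# lengths = {m: months_to_reach_annual_mean(dbt, m) for m in range(1, 13)}
# ranking = sorted(lengths, key=lengths.get)
```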

In order to evaluate and validate the above observations, we used an industry-accepted inverse modeling formulation called the change-point (CP) model (ASHRAE 2002), which is coded in the Inverse Modeling Toolkit (IMT) software (Kissock et al. 2001). There are several variants of CP models. The following version was adopted in this study:

E_i = a + b(X_1 - DBT)^+ + c(DBT - X_1)^+ + d(LTEQ)    (1)

where E_i is the daily energy use, X_1 is the change-point value of DBT (the x-coordinate of the change point), and LTEQ is the internal (lighting and equipment) load. The coefficients a, b, c, and d are regression parameters. The ( )^+ notation indicates that the value of the parenthetic term is set to zero when it is negative. The three energy use channels considered in this research are whole-building electric (WBE), cooling thermal energy use (CHW), and heating thermal energy use (HW).
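
The sketch below shows one plausible way to identify the model of Equation 1 from daily data: a grid search over the change point X_1 combined with ordinary least squares for the coefficients a through d. It is meant only to make the model form concrete and is not the IMT implementation; the grid resolution and function names are assumptions.

```python
# Minimal sketch of identifying the change-point model of Equation 1 by a
# grid search over X1 with ordinary least squares for a, b, c, d.
# Illustrative only; this is not the ASHRAE IMT code.
import numpy as np

def fit_cp_model(E, dbt, lteq, n_grid=50):
    """E, dbt, lteq: 1-D arrays of daily energy use, dry-bulb temperature,
    and internal (lights + equipment) load. Returns (X1, coeffs, rmse)."""
    E, dbt, lteq = map(np.asarray, (E, dbt, lteq))
    best = None
    for x1 in np.linspace(dbt.min(), dbt.max(), n_grid):
        heating = np.maximum(x1 - dbt, 0.0)   # (X1 - DBT)^+
        cooling = np.maximum(dbt - x1, 0.0)   # (DBT - X1)^+
        A = np.column_stack([np.ones_like(dbt), heating, cooling, lteq])
        coeffs, *_ = np.linalg.lstsq(A, E, rcond=None)
        rmse = np.sqrt(np.mean((A @ coeffs - E) ** 2))
        if best is None or rmse < best[2]:
            best = (x1, coeffs, rmse)
    return best

def predict_cp_model(x1, coeffs, dbt, lteq):
    """Evaluate Equation 1 for the identified change point and coefficients."""
    a, b, c, d = coeffs
    dbt, lteq = np.asarray(dbt), np.asarray(lteq)
    return a + b * np.maximum(x1 - dbt, 0.0) + c * np.maximum(dbt - x1, 0.0) + d * lteq
```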

EVALUATION OF THE DBTA METHOD

The evaluations were done on daily data from three buildings (two synthetic and one actual) for which a full year of data were available for analysis. Table 1 summarizes the key features of buildings chosen for analysis, while more details can be found in Singh (2011) or Abushakra et al. (2013).

The monitoring start and end dates are important factors to consider. In order to be thorough, a series of evaluations was done, starting with each month of the year and then increasing the window in monthly increments until the 12-month window was reached. Figure 1 indicates the number of months needed to reach the annual average for the first six starting months of the year for one of the locations (namely, Chicago, IL). The plots for the other six months reveal similar behavior and are not included here. For example, if monitoring is initiated during the month of March, it would take three months for the window average of the daily data to reach the annual average. The starting months were then ranked from 1 to 12, depending on the number of months needed to reach the corresponding annual average. The results of the DBTA analysis for the three data sets analyzed (Chicago, Albuquerque, and Washington DC) are summarized in Table 2. Thus, if a particular month is ranked first, then starting monitoring in that month would require the least length of data for the average of the data set to reach the annual average DBT. Not surprisingly, one notes a general consistency across the three sites.

The fundamental insight highlighted in this paper is that such a ranking method (based on DBT data only) also allows ranking the predictive accuracy of inverse models identified from the corresponding short-term data periods. A systematic evaluation was undertaken comparing the DBTA ranks against the predictive accuracies of the corresponding CP models generated for each starting month of the year (January to December), with each selection subsequently expanded in increments of one month to mimic different durations of monitoring. For example, for the starting month of January, the first CP model is generated using the daily data for January only; the data window is then expanded in increments of one month, i.e., January-February, January-March, January-April, and so on, until the whole year of data are used for identifying the inverse model. The same process is repeated with each successive month of the year taken as the start of the in-situ measurement period.
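
The expanding-window protocol just described amounts to a double loop over start months and window lengths. The sketch below expresses that loop generically; the fit/predict/score callables (for example, thin wrappers around the change-point sketch given after Equation 1 and a goodness-of-fit index such as the NMBE sketched further below) and the DataFrame column names are assumptions for illustration.

```python
# Sketch of the expanding-window evaluation described above: for every start
# month, fit a model to 1, 2, ..., 12 consecutive months of daily data and
# score its predictions against the full year of measured energy use.
# `df` is assumed to be a pandas DataFrame of daily data with a DatetimeIndex
# and columns "E", "DBT", "LTEQ" (column names are illustrative only).
import pandas as pd

def expanding_window_evaluation(df: pd.DataFrame, fit, predict, score) -> pd.DataFrame:
    results = []
    for start_month in range(1, 13):
        # Calendar months in monitoring order, wrapping around the year end.
        months = [(start_month - 1 + k) % 12 + 1 for k in range(12)]
        for n in range(1, 13):
            window = df[df.index.month.isin(months[:n])]
            model = fit(window["E"], window["DBT"], window["LTEQ"])
            annual_pred = predict(model, df["DBT"], df["LTEQ"])
            results.append({"start_month": start_month,
                            "n_months": n,
                            "score": score(df["E"], annual_pred)})
    return pd.DataFrame(results)
```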

[FIGURE 1 OMITTED]

To determine the accuracy of the models derived from the short data sets, the annual energy use predicted by each such model is compared to the actual energy use in the original data set. The predictive accuracy of the models is evaluated based on two statistical indices: the coefficient of variation of the root mean square error, or CV (%), and the normalized mean bias error, or NMBE (%) (see ASHRAE Guideline 14-2002 [ASHRAE 2002], or textbooks such as Reddy [2011], for definitions of these standard goodness-of-fit indices). Since the CV (%) is the ratio of the root mean square error (RMSE) to the mean of the dependent variable, it describes the model fit in terms of the size of the residuals relative to the mean measured value; a lower CV (%) implies smaller residuals. The NMBE (%), often simply called the bias error, indicates how far the average prediction lies from the average of the measured data, i.e., the systematic error incurred in estimating the quantity of interest. Thus, low CV (%) and NMBE (%) values are indicative of a good model fit.
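
For concreteness, the two indices can be computed as sketched below. No degrees-of-freedom correction is applied here, and the sign convention for NMBE is one common choice; ASHRAE Guideline 14-2002 should be consulted for the exact definitions used in compliance work.

```python
# Minimal sketch of the two goodness-of-fit indices used in this study.
# Simple normalizations without degrees-of-freedom corrections are assumed.
import numpy as np

def cv_rmse_percent(measured, predicted):
    """CV (%): RMSE of the residuals divided by the mean measured value."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / measured.mean()

def nmbe_percent(measured, predicted):
    """NMBE (%): sum of residuals divided by (n times the mean measured value)."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return 100.0 * np.sum(measured - predicted) / (measured.size * measured.mean())
```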

The CV and NMBE results for the WBE energy channel are plotted in Figures 2 and 3 for the large hotel building in Chicago, IL. Clearly, April and October (or September) are the best months in which to start in-situ monitoring. Similar analyses were carried out for the other two energy channels for each of the three locations. Each monitoring period was then ranked based on the duration of building monitoring needed such that models identified from the data provide predictions closest to those obtained when a whole year's worth of data are used. Recall that the objectives were to identify: (1) the most suitable month to install data acquisition equipment in the building, and (2) the length of monitoring needed to make accurate annual predictions.

[FIGURE 2 OMITTED]

Table 3 assembles the analysis results for Chicago, IL, listing the number of months needed for various start months of the monitoring period for all three energy channels investigated. The very strong consistency between the rankings obtained from the DBTA approach (based on DBT data only) and those from the inverse change-point models based on energy use data is clearly apparent for all three buildings in Figures 4 to 6 (see Singh 2011; Abushakra et al. 2013). This is a convincing demonstration of the power of the DBTA ranking approach.

[FIGURE 3 OMITTED]

[FIGURE 4 OMITTED]

The months of April and October prove to be the best in which to begin in-situ monitoring of building energy performance. Beginning in these months, only 2-3 months of data are enough to identify models that predict the long-term energy performance of the buildings within acceptable accuracy levels; the NMBE (%) values of models identified from these optimal short data periods fall within ±10% of the actual energy use. For the start months of March, April, and October, one month of data is adequate for identifying predictive inverse models whose annual predictions are within this range. For all other start months, the NMBE (%) values lie within the ±2% accuracy level provided the length of monitoring is at least 4-5 months in certain cases, extending up to 9-10 months for certain other start months (such as May-June or November-December).

INVESTIGATION INTO WAYS TO REDUCE OPTIMAL MONITORING PERIODS

The DBTA ranking method provides insight into the optimal length of monitoring required to identify a statistical model (using the CP model functions), which can predict energy use close to the actual annual building energy use. Cost and time constraints in practical situations might make it infeasible to monitor for the recommended period. Hence, a follow-up investigation was undertaken to study the loss in corresponding model predictive accuracy when the monitoring period is reduced from the DBTA-predicted optimal one.

The loss in model predictive accuracy resulting from reducing the optimal monitoring periods by two or three months was investigated by computing the corresponding annual NMBE (%) of the daily predictions from CP models identified from such "reduced" data sets. However, the data period from which inverse models were identified was set to be no shorter than two to three months, even if the DBTA method suggested shorter periods. Thus, if the DBTA method yielded an optimal monitoring period of, say, seven months for a given start month, this analysis looked at the prediction results if the monitoring period were reduced to five months and to four months, respectively. Figures 7 to 12 summarize the results obtained for the three buildings analyzed. Actual NMBE (and CV) values can be found in Singh (2011) or Abushakra et al. (2013).
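
If the expanding-window results sketched earlier have already been tabulated (one accuracy score per start month and window length), the reduced-period comparison amounts to a simple lookup, as sketched below. The column names, the three-month floor parameter, and the dependence on the output of the earlier sketches are assumptions made purely for illustration.

```python
# Sketch: given DBTA-suggested optimal lengths per start month and the table
# of expanding-window scores produced by the earlier sketch, look up the
# accuracy obtained when the monitoring period is shortened by two or three
# months, never dropping below a two-to-three-month floor (3 assumed here).
import pandas as pd

def reduced_period_scores(results: pd.DataFrame, optimal_months: dict,
                          reductions=(2, 3), floor=3) -> pd.DataFrame:
    """results: columns 'start_month', 'n_months', 'score' (earlier sketch).
    optimal_months: {start_month: DBTA-optimal number of months}."""
    rows = []
    for start_month, n_opt in optimal_months.items():
        for r in reductions:
            n_red = max(n_opt - r, floor)
            score = results.query(
                "start_month == @start_month and n_months == @n_red")["score"].iloc[0]
            rows.append({"start_month": start_month, "reduction": r,
                         "n_months": n_red, "score": score})
    return pd.DataFrame(rows)
```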

[FIGURE 5 OMITTED]

[FIGURE 6 OMITTED]

The plots clearly reveal the extent to which the annual predictive accuracy of the CP models is degraded as the length of monitoring is reduced from the optimal suggested by the DBTA method. Because WBE for the large hotel in Chicago is not strongly weather dependent, the predictions are within the acceptable limits of ±5%, even when the period of monitoring is reduced by three months from the optimal (see Figure 7). For CHW and HW energy use (Figures 8 and 9), the predictive accuracies for the reduced monitoring periods are poor, and neither a two-month nor a three-month reduction would be acceptable. The same conclusions can be drawn for the office building in Albuquerque and for the full-service hotel in the Washington DC area (see Figures 10 to 12).

SUMMARY AND CONCLUSIONS

This paper proposed and demonstrated, by way of three case study buildings in three geographic locations, a simple and easy-to-implement method, called the dry-bulb temperature analysis (DBTA) ranking method, which provides an indication of the length of monitoring needed to identify accurate predictive inverse models of energy use in buildings when monitoring is initiated at any time of the year. The DBTA method suggests that one simply compute average outdoor dry-bulb temperature values over different consecutive months on an incremental-window basis and compare these with the annual average. The number of consecutive months needed for the corresponding average ambient dry-bulb temperature to reach the annual average value of the location is the number of months needed to monitor the building so as to yield accurate inverse models. In addition, this paper showed that the DBTA offers a simple manner of ranking different start months in terms of how many additional consecutive months of monitoring (i.e., length of monitoring) are needed to obtain a data set rich enough to yield accurate predictive inverse models. Further, the DBTA method allows one to ascertain the best months of the year in which to initiate in-situ monitoring in a specific location, and it also provides an indication of the length of monitoring needed when a specific start month is selected. Generally, March, April, and October were found to be the best months in which to start in-situ monitoring. If monitoring were initiated in these months, only two to three months of data would be adequate to identify inverse models that could predict the long-term energy performance of the buildings within acceptable accuracy levels. The DBTA method was also found to be quite accurate in capturing the duration of monitoring required to make acceptable energy-use predictions.

[FIGURE 7 OMITTED]

[FIGURE 8 OMITTED]

[FIGURE 9 OMITTED]

[FIGURE 10 OMITTED]

[FIGURE 11 OMITTED]

[FIGURE 12 OMITTED]

NOMENCLATURE

CHW = cooling energy use, Btu/day (MJ/day)

CP = change point

CV = coefficient of variation of the root mean square error (dimensionless)

DBT = dry-bulb temperature, °F (°C)

HW = heating energy use, Btu/day (MJ/day)

IMT = inverse modeling toolkit

LTEQ = light and equipment, kWh/day

M&V = measurement and verification

NMBE = normalized mean bias error, dimensionless

SMLP = short-term monitoring for long-term prediction

WBE = whole-building electric, kWh/day

ACKNOWLEDGMENTS

This study was undertaken as part of ASHRAE RP-1404. We acknowledge the useful insights and feedback provided by the Project Monitoring Subcommittee members: Robert Sonderegger, Jeff Haberl, and Vern Smith.

REFERENCES

Abushakra, B. 1997. An inverse model to predict and evaluate the energy performance of large commercial and institutional buildings. Proceedings of the Fifth International IBPSA Conference, Prague, Czech Republic.

Abushakra, B., D.E. Claridge, and T.A. Reddy. 1999. Investigation on the use of short-term monitored data for long-term prediction of building energy use. Proceedings of the Renewable and Advanced Energy Systems for the 21st Century, ASME Solar Energy Conference, Hawaii.

Abushakra, B. 2000. Short-term monitoring long-term prediction of energy use in large commercial and institutional buildings. PhD dissertation, Department of Mechanical Engineering, Texas A&M University, College Station, Texas.

Abushakra, B., and D.E. Claridge. 2000. Effect of dry-bulb temperature on the prediction bias of building energy use with the short-term monitoring long-term prediction method. Proceedings of the ASME International Solar Energy Conference, Solar 2000, Madison, Wisconsin, June 17-22.

Abushakra, B., T.A. Reddy, P. Mitchell, and V. Singh. 2013. Measurement, modeling, analysis and reporting protocols for short-term M&V of whole building energy performance. Final Research Report for RP-1404, ASHRAE, Atlanta.

Abushakra, B., and M. Paulus. 2014a. An hourly hybrid multivariate change-point inverse model using short-term monitored data for annual prediction of building energy performance, Part I: Background (RP-1404). In preparation for submittal to ASHRAE HVAC&R Research Journal.

Abushakra, B., and M. Paulus. 2014b. An hourly hybrid multivariate change-point inverse model using short-term monitored data for annual prediction of building energy performance, Part II: Methodology (RP-1404). In preparation for submittal to ASHRAE HVAC&R Research Journal.

Abushakra, B., and M. Paulus. 2014c. An hourly hybrid multivariate change-point inverse model using short-term monitored data for annual prediction of building energy performance, Part III: Analysis (RP-1404). In preparation for submittal to ASHRAE HVAC&R Research Journal.

ASHRAE. 2002. ASHRAE Guideline 14-2002, Measurement of Energy and Demand Savings, Atlanta: ASHRAE.

Katipamula, S., T.A. Reddy, and D.E. Claridge. 1995a. Effect of time resolution on statistical modeling of cooling energy use in large commercial buildings. ASHRAE Transactions 101(2):3894-95.

Katipamula, S., T.A. Reddy, and D.E. Claridge. 1995b. Bias in predicting annual energy use in commercial buildings with regression models developed from short data sets. ASME/JSME/JSES International Solar Energy Conference, pp. 99-110.

Kissock, J.K., T.A. Reddy, D. Fletcher, and D.E. Claridge. 1993. The effect of short data periods on the annual prediction accuracy of temperature-dependent regression models of commercial building energy use. Proceedings of the ASME International Solar Energy Conference, Washington D.C., pp. 455-63.

Kissock, J.K., J.S. Haberl, and D.E. Claridge. 2001. Inverse modeling toolkit: User's guide. ASHRAE Final Report for RP-1050, ASHRAE, Atlanta.

Reddy, T.A., J.K. Kissock, and D.K. Ruch. 1998. Uncertainty in baseline regression modeling and in determination of retrofit savings. ASME Journal of Solar Energy Engineering 120:185-92.

Reddy, T.A., J.S. Elleson, and J.S. Haberl. 2002. Methodology development for determining long-term performance of cool storage systems from short-term tests. ASHRAE Transactions 108(2).

Reddy, T.A. 2011. Applied Data Analysis and Modeling for Energy Engineers and Scientists. Springer, NY.

Singh, V. 2011. Analysis methods for post-occupancy evaluation of energy use in high performance buildings using short-term monitoring. MSc thesis, The Design School, Arizona State University, Tempe, AZ.

Singh, V., T.A. Reddy, and B. Abushakra. 2013. Predicting annual energy use in buildings using short-term monitoring and utility bills: The hybrid inverse model using daily data (HIM-D). ASHRAE Transactions 119(2).

Vipul Singh

Associate Member ASHRAE

T. Agami Reddy, PhD, PE

Fellow ASHRAE

Bass Abushakra, PhD, PE

Member ASHRAE

This paper is based on findings resulting from ASHRAE Research Project RP-1404. Vipul Singh is an analyst at The Green Engineer, Concord, MA. T. Agami Reddy is an SRP professor at the Design School and the School of Sustainable Engineering and the Built Environment, Arizona State University, Tempe, AZ. Bass Abushakra is an associate professor in the Civil and Architectural Engineering and Construction Management Department, Milwaukee School of Engineering, Milwaukee, WI.
Table 1. Descriptive Summary of Buildings Chosen for Analysis

No.  Building Description                                 Area                         Type of Data*  Response Variables (Energy)  Regressors
1    Large hotel, Chicago, IL (06/06-05/07 data)          619,200 ft² (57,525.6 m²)    S              WBE, CHW, HW                 DBT, LTEQ
2    Office building, Albuquerque, NM (2004 data)         17,430 ft² (1,619.3 m²)      C              WBE, HW                      DBT, LTEQ
3    Full-service hotel, Washington DC area (2009 data)   212,000 ft² (19,695 m²)      A              WBE                          DBT

* S = synthetic data from a detailed simulation program; C = detailed simulation model predictions calibrated against a few months of monitored data; A = actual monitored data.

Table 2. Relative Ranking for Each Starting Month of the Year for the Three Locations Considered, as Suggested by the DBTA Method*

               Chicago, IL                                Albuquerque, NM                            Washington DC
Start Month    End Month   No. of Months  Ranking        End Month  No. of Months  Ranking         End Month  No. of Months  Ranking
January        July        7              7              July       7              7               Jul-Aug    7.5            7
February       June-July   6              6              June       5              5               June       5              5
March          May-June    3              3              May        3              3               May-June   3.5            3
April          Apr-May     2              1              April      1              1               April      1              1
May            March       11             11             February   10             11              Feb-Mar    10.5           11
June           February    9              9              Jan-Feb    8.5            9               Jan-Feb    8.5            9
July           January     7              7              January    7              7               Jan-Feb    7.5            7
August         December    5              5              December   5              5               Dec-Jan    5.5            5
September      Nov-Dec     3              3              November   3              3               Nov-Dec    3.5            3
October        November    2              1              October    1              1               Oct-Nov    1.5            1
November       September   11             11             September  11             12              September  11             12
December       Aug-Sept    9              10             August     9              10              August     9              10

* Ranking is based on the number of months required to reach the annual average temperature.

Table 3. Comparison of Results for DBTA Ranking Method and CP Modeling Approach for the Large Hotel Building in Chicago, IL*

               End Month of Monitoring Period for Best Annual Prediction        Ranking
Start Month    DBTA        CP Model (WBE)  CP Model (CHW)  CP Model (HW)        DBTA  WBE  CHW  HW
January        July        July            Aug-Sept        June                 7     7    8    7
February       Jun-Jul     July            Aug-Sept        June                 6     5    7    5
March          May-Jun     June            August          June                 3     3    3    3
April          Apr-May     May             May             May-June             1     1    1    1
May            March       Nov-Dec         Dec-Jan         December             11    9    8    10
June           February    January         Nov-Dec         December             9     10   6    8
July           January     January         Oct-Nov         January              7     7    5    8
August         December    January         Oct-Nov         December             5     5    3    5
September      Nov-Dec     December        Nov-Dec         December             3     3    2    3
October        November    November        August          Nov-Dec              2     1    12   1
November       September   September       August          July                 11    11   11   12
December       Aug-Sept    Aug-Sept        Aug-Sept        July                 10    11   10   10

* The numbers shown are relative ranks and loosely correspond to the number of months needed for the corresponding model to provide accurate annual predictions.