
Data center energy-efficiency improvement case study.

INTRODUCTION

Data center design has changed in recent years; low-energy operation has become a design requirement, and there is a better understanding of how to achieve efficiency with highly redundant mechanical and electrical infrastructure topologies. Power usage effectiveness (PUE) is a metric that describes the ratio of total facility energy consumption to IT equipment energy consumption and has become widely adopted in the industry to benchmark efficient design and operation (Green Grid 2007). Many new data center designs have a target PUE of 1.2 or below, whereas legacy facilities might achieve a PUE of 2.0 or above. Other metrics have been developed that address different aspects of environmental impact (Green Grid 2014). The industry has become more aware of its environmental impact and has taken steps to increase operating temperatures to facilitate reduced energy consumption (ASHRAE 2008, 2012). These changes, together with the adoption of air management best practices, allow 100% free cooling or zero refrigeration in many climates (Tozer and Flucker 2012). Various initiatives promote improved data center energy efficiency, including the European Code of Conduct for Data Centre Energy Efficiency (European Commission 2015) and ENERGY STAR[R] for Data Centers (EPA 2010). A live, legacy facility may be limited in the improvements to energy consumption that can be practically achieved; however, there are opportunities for optimization.
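For reference, a minimal sketch of the PUE calculation defined above (Green Grid 2007); the energy figures are illustrative placeholders, not measurements from this study.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_equipment_kwh

# A legacy facility consuming 2.29 kWh overall for every 1 kWh delivered
# to IT equipment (illustrative values, not site data).
print(pue(total_facility_kwh=2.29e6, it_equipment_kwh=1.0e6))  # -> 2.29
```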

The enterprise that is the subject of this case study operates several high-availability data centers at a number of global locations to deliver business services. Its facilities management and IT infrastructure teams were merged more than five years ago, leading to improved staff understanding, collaboration, and performance in its data center environments.

The case study will focus on the energy-improvement program at a facility with redundant systems (2N, Tier 3 topology with elements of Tier 4--uncertified) built in 2007 in Greater London, UK, with a current IT load of 1800 kW in a 3500 m² (38,000 ft²) data hall, cooled by 56 computer room air handling (CRAH) units. As is typical for a facility of its age, the cooling systems were the largest energy consumer after the IT equipment (Tozer et al. 2008); hence the program focused on opportunities in this area.

The program commenced in 2009 and comprised the following stages:

1. Energy assessment and data hall air temperature survey to benchmark performance and identify opportunities for improvement

2. Implementation of air management improvements, including installation of cold-aisle semicontainment

3. Modification of CRAH units to reduce fan speeds and control supply air temperature (rather than return)

4. Increase in data hall air and chilled-water setpoints

5. Feasibility study on installation of free-cooling circuits to existing chilled-water systems

6. Installation of a free-cooling modification using dry coolers

Incremental improvements were proposed and implemented in a phased manner, with their impact closely monitored. Because the improvements were implemented in a live environment, they were subject to a heavily controlled change management process to mitigate the risk of service interruption. Although keen to realize energy and operational savings, the business required the same high level of reliability to be maintained throughout. Engineering analysis was undertaken to model the predicted behavior of the cooling systems and to estimate the likely outcomes of the proposed improvements in terms of efficiency and operational savings (Flucker and Tozer 2012).

ENERGY ASSESSMENT AND DATA HALL AIR TEMPERATURE SURVEY

An energy assessment was undertaken that established the facility PUE was 2.29. The energy performance was typical of a legacy facility, with a large proportion of the energy used for cooling by the data hall CRAH unit fans and in chilled-water production and distribution. The annual electricity bill was $4.7 million (USD). The energy assessment included a survey of data hall air temperatures to quantify the air management performance. This was combined with a series of training sessions for the operations team. The process of assessment and education identified the issues, improved the team's knowledge of the theoretical and practical implications, and gave them ownership of the problem and its solutions. Organizational and individual experience are important enablers for reducing risk and addressing energy waste (Cameron et al. 2013).

[FIGURE 1 OMITTED]

By taking a number of data hall temperature readings at the CRAH unit and server inlets and outlets and weighting them, it is possible to calculate the air management performance (Tozer et al. 2009). Figure 1 shows a simplified schematic of data hall airflows, with the width of the lines proportional to the airflow volume and the gradation indicating temperature.

Air bypass results in additional CRAH fan energy consumption, and recirculation results in higher temperatures at the server inlet, usually offset by low setpoints, causing additional energy consumption for refrigeration. The use of these metrics allows performance to be quantified and appropriate improvements targeted.

In this case, 80% of the CRAH air was being bypassed, and around 20% of server intake air was recirculated warm air. The flow availability (the ratio of supply air from cooling units to demand air from servers) was 4, meaning that excessive volumes of cold air were being moved by the CRAH unit fans. Recirculation means that there is an uplift between the supply temperatures from the CRAH units and the supply temperatures to the servers. All server air intake temperatures measured were between 15°C and 27°C (59.0°F and 80.6°F), i.e., between the lower limit of the A1 allowable range and the upper limit of the ASHRAE recommended range (ASHRAE 2012).
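To make the arithmetic behind these metrics concrete, the sketch below uses a simple mixing-model form of the air management metrics (after Tozer et al. 2009); the temperatures are assumed values chosen to reproduce the reported figures, not readings from the survey, and the study's exact weighting scheme is not reproduced here.

```python
# Assumed weighted-mean temperatures, degC (illustrative, not survey data).
t_supply = 14.0      # CRAH supply air
t_return = 17.0      # CRAH return air
t_server_in = 17.0   # server inlet
t_server_out = 29.0  # server outlet

# Recirculation: fraction of server intake that is re-ingested exhaust air.
recirculation = (t_server_in - t_supply) / (t_server_out - t_supply)

# Bypass: fraction of CRAH supply returning without passing through servers.
bypass = (t_server_out - t_return) / (t_server_out - t_supply)

# Flow availability follows from a mass balance on the air that does reach
# the servers: (1 - bypass) * CRAH flow = (1 - recirculation) * server flow.
flow_availability = (1 - recirculation) / (1 - bypass)

print(f"R={recirculation:.0%} BP={bypass:.0%} AF={flow_availability:.1f}")
# -> R=20% BP=80% AF=4.0, matching the reported figures
```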

The analysis also helped identify that warm air was recirculating from the hot aisle into the cold aisle, around the ends of the cold aisle and over the tops of the racks, into the intake airstream of servers located at the top of the racks.

A number of recommendations were identified in the assessment, with air management improvements a priority to enable energy savings in the mechanical systems.

IMPLEMENTATION OF AIR MANAGEMENT IMPROVEMENTS

The following measures were implemented to improve air management:

* Installation of blanking plates within cabinets

* Sealing gaps in the raised floor, including cabinet cable cut-outs and gaps around power distribution unit (PDU) bases

* Revised placement of floor grilles to locations where cooling is required

To help separate the hot and cold airstreams, a temporary solution was installed in one aisle as proof of concept with doors at the end of the cold aisle and flame-retardant curtains above the racks.

Effective blanking within the racks demonstrated significant improvements. The installation of temporary doors (see Figure 2a) resulted in a reduction of approximately 5 K (9°R) at localized server inlets, and increasing the height of the aisle with curtains (see Figure 2b) in a reduction of about 4 K (7.2°R). The success of this trial led to the solution being deployed throughout the data hall (see Figure 2c).

[FIGURE 2 OMITTED]

The total costs of deployment were recovered in less than a year (see following sections). The air performance metrics are used on an ongoing basis to monitor the performance of the environment and ensure that as equipment is installed and decommissioned, air management best practice is continually employed and standards are maintained.

MODIFICATION OF CRAH UNITS TO REDUCE FAN SPEEDS AND CONTROL ON SUPPLY AIR

The air management improvements meant that the air temperature delivered to servers was controlled within a narrower band, making it possible to modify the CRAH units to reduce fan speeds from 100% to 60% and to consider increasing temperature setpoints.
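The value of this fan-speed reduction follows from the fan affinity laws; a minimal sketch is shown below. The ideal cube-law exponent is 3, while the 2.3 exponent anticipates the derating for motor/inverter losses applied later in the free-cooling model.

```python
def fan_power_ratio(speed_ratio: float, exponent: float = 3.0) -> float:
    """Fraction of full-speed fan power drawn at a given speed fraction."""
    return speed_ratio ** exponent

# Reducing speed from 100% to 60%, as described above:
print(f"ideal cube law:    {fan_power_ratio(0.6):.0%} of full power")       # ~22%
print(f"derated (n=2.3):   {fan_power_ratio(0.6, 2.3):.0%} of full power")  # ~31%
```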

Changing the CRAH unit control strategy from return to supply air control also helps to maintain the air supplied at the server inlet within a narrow range. With return air control, when the load distribution in the data hall is uneven, each CRAH unit will supply a different temperature to maintain a constant return air temperature (see Figure 3).

INCREASE IN DATA HALL AIR AND CHILLED-WATER SETPOINTS

The cooling unit setpoints were changed from 24°C (75.2°F) on return air to 23°C (73.4°F) on supply air, with an associated increase in chilled-water temperature setpoints from 6°C/12°C (42.8°F/53.6°F) to 17.9°C/23.9°C (64.2°F/75.0°F). Setpoints were increased by 1°C (1.8°F) at a time over a period of several months. The server inlet temperatures were monitored and recorded before and after every stage to ensure there was no negative impact and to allow a rollback if needed. The result was an increase in the overall coefficient of performance (COP) of the chilled-water systems and in the chiller delta T.

Ventilation flow rates were minimized in line with the requirement to pressurize the data hall rather than to supply an air volume for occupants. The ventilation extract fan was disabled, as it was not required for pressurization. The operating envelope for humidity control was also widened in line with ASHRAE recommendations, moving away from 50% relative humidity (rh) to controlling dew point between 5.5°C and 15°C (41.9°F and 59°F) (see Figure 4) (ASHRAE 2012). Due to the small number of hours above a 15°C (59°F) dew point, dehumidification was disabled.
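As an illustration of how such a dew-point band can be checked, here is a minimal sketch using the Magnus approximation; the coefficients are one common choice, and the tested condition is illustrative rather than the site's actual control logic.

```python
import math

def dew_point_c(dry_bulb_c: float, rh_percent: float) -> float:
    """Approximate dew point (degC) via the Magnus formula."""
    a, b = 17.62, 243.12  # one common coefficient set
    gamma = math.log(rh_percent / 100.0) + a * dry_bulb_c / (b + dry_bulb_c)
    return b * gamma / (a - gamma)

# An old-style 50% rh condition at 24 degC dry bulb (assumed values):
td = dew_point_c(24.0, 50.0)
in_band = 5.5 <= td <= 15.0  # the widened control band from the text
print(f"dew point {td:.1f} degC, within 5.5-15 degC band: {in_band}")
```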

The above resulted in annual savings in the electricity bill of $1.3 million.

FREE-COOLING FEASIBILITY STUDY

The increased operating temperatures opened up the opportunity for free-cooling operation, as indicated in Figure 5. Reducing bypass and recirculation allowed the chilled-water temperature, and hence the potential free-cooling hours, to be increased. Bypass and recirculation can be reduced further still, which would allow further temperature increases and additional free cooling.

Several design options were modeled, including evaporative cooling towers, dry coolers, and adiabatic sprays. The study identified that the best option for energy savings was to install four dry coolers of 1000 kW capacity (284 tons of refrigeration) with a 3.5 K (6.3°R) approach temperature and to operate both CRAH cooling coils simultaneously, thereby leveraging the additional heat exchange area. When both coils are used, the approach between leaving air and entering water temperatures reduces to approximately half. Therefore, to achieve the same leaving air temperature, the chilled-water temperature can be increased by roughly half the approach. This results in a higher chiller COP and more hours of free cooling.

Return water from the CRAH units passes through the additional circuit, which lowers the temperature of the water entering the chillers, reducing (and sometimes eliminating) the refrigeration load. The dry coolers were added in series to allow additional hours of operation with partial free cooling, as shown in Figure 6 (with a parallel connection, cooling would occur either via the chillers or via the dry coolers). Connecting the dry coolers to the return section of the chilled-water circuit meant that little modification to the chiller controls was required. Also, connecting the dry coolers on the secondary side means they are supplied with warmer return temperatures than on the primary circuit, where there is some bypass flow. Again, this increases the number of possible free-cooling hours.
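A simplified sketch of this series arrangement may help: the dry cooler precools the return water toward ambient plus its approach, and the chillers make up only the remainder. The water flow rate and the fixed-approach idealization are assumptions rather than site data; the water temperatures are those from the setpoint changes above.

```python
def series_free_cooling(t_return, t_supply, t_ambient, approach, m_flow, cp=4.19):
    """Split the cooling duty (kW) between dry coolers (series) and chillers."""
    # The dry cooler can cool the return water no lower than ambient + approach,
    # and there is no point cooling below the supply setpoint.
    t_dc_out = max(t_ambient + approach, t_supply)
    t_dc_out = min(t_dc_out, t_return)  # the dry cooler cannot heat the water
    q_free = m_flow * cp * (t_return - t_dc_out)      # duty met without chillers
    q_chiller = m_flow * cp * (t_dc_out - t_supply)   # remainder for chillers
    return q_free, q_chiller

# 17.9/23.9 degC chilled water and 3.5 K approach from the text; the 72 kg/s
# flow is assumed (it gives roughly the 1800 kW data hall load at 6 K delta T).
for t_amb in (8.0, 17.0, 24.0):  # full / partial / no free cooling
    q_free, q_mech = series_free_cooling(23.9, 17.9, t_amb, 3.5, m_flow=72.0)
    print(f"ambient {t_amb:>4.1f} degC: free {q_free:6.0f} kW, chiller {q_mech:6.0f} kW")
```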

[FIGURE 3 OMITTED]

[FIGURE 4 OMITTED]

Operating temperatures are indicated in Figure 7. Using the log mean temperature difference (LMTD) equation and other plant selection information provided by the manufacturer, heat transfer factors (AU) were derived for the dry coolers. These were assumed to be constant, although there would be some variation with fan speed. To compensate for this, and for lower motor/inverter efficiencies at part load, the cube-law fan speed exponent was reduced from 3 to 2.3. The number of transfer units (NTU) method (see the appendix) was used to model the variation of fan speed (and thus energy consumption) with different approach temperatures to deliver the unit cooling capacity.

[FIGURE 5 OMITTED]

[FIGURE 6 OMITTED]

$Q = A \cdot U \cdot \mathrm{LMTD}$ (1)

where

$Q$ = cooling, kW (Btu/h)

$A$ = area, m² (ft²)

$U$ = heat transfer coefficient, kW/(K·m²) (Btu/(h·ft²·°F))

LMTD = log mean temperature difference, K (°R)
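As an illustration of how Equation 1 is used, the sketch below backs a constant AU out of a single assumed selection point; the four temperatures are hypothetical, not the manufacturer's actual selection data.

```python
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log mean temperature difference (counterflow convention), K."""
    d1 = t_hot_in - t_cold_out
    d2 = t_hot_out - t_cold_in
    return (d1 - d2) / math.log(d1 / d2)

# Assumed dry-cooler selection point: 1000 kW duty, water cooled from
# 23.9 to 17.9 degC, air entering at 14.4 degC (3.5 K approach) and
# leaving at an assumed 20.0 degC.
q_design = 1000.0                             # kW
au = q_design / lmtd(23.9, 17.9, 14.4, 20.0)  # kW/K
print(f"AU = {au:.0f} kW/K")
```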

Adiabatic sprays were not pursued at this time due to the limited additional savings versus the potential maintenance risk; however, the potential benefits will be revisited once a full year of actual data is available for analysis. The energy model predicted 100% compressor-free cooling operation for 63% of the year, plus partial free cooling for an additional 33% (i.e., total or partial free cooling for 96% of the year), providing an annual energy saving of around $0.75 million at full load.
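For illustration, the sketch below shows the kind of hour-binning such an energy model performs. The thresholds follow from the chilled-water setpoints and the 3.5 K dry-cooler approach quoted above; the ambient temperature profile is a crude stand-in, since the actual weather data is not reproduced here.

```python
import random

def classify_hours(hourly_temps_c, t_full=14.4, t_partial=20.4):
    """Count full, partial, and zero free-cooling hours from dry-bulb temps.

    With 17.9/23.9 degC chilled water and a 3.5 K approach, full free
    cooling needs ambient <= 17.9 - 3.5 = 14.4 degC, and some free cooling
    is available while ambient < 23.9 - 3.5 = 20.4 degC.
    """
    full = sum(1 for t in hourly_temps_c if t <= t_full)
    partial = sum(1 for t in hourly_temps_c if t_full < t < t_partial)
    return full, partial, len(hourly_temps_c) - full - partial

# Toy annual profile: a Gaussian is NOT real weather data; an actual study
# would bin hourly observations for the site (here, Greater London).
random.seed(1)
temps = [random.gauss(11.0, 5.5) for _ in range(8760)]
full, partial, none = classify_hours(temps)
print(f"full: {full/8760:.0%}, partial: {partial/8760:.0%}, none: {none/8760:.0%}")
```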

INSTALLATION OF FREE-COOLING MODIFICATION USING DRY COOLERS

The operator was able to use the feasibility study to set out the business case for investment: the project is fiscally positive in year one and has a payback period of less than three years. The dry coolers are now installed and performing free cooling (see Figure 8). The results to date suggest they are on track to achieve $0.4 million of savings in the electricity bill, in addition to the savings achieved by the previous improvements. Some predicted results are shown in Figure 9.

CONCLUSION

The measured PUE range during the year to date is between 1.36 and 1.85. The estimated annual PUE is 1.49, an overall reduction of 35%. The cumulative savings are $1.7 million/year, which have reduced the annual electricity bill to $3.0 million.

A summary of results is shown in Figures 10 and 11.

In addition to the improvements described above, further savings were realized through optimization of UPS (uninterruptible power supply) system and lighting loads.

The operator continues to work on achieving additional efficiency gains, such as increasing data hall supply temperatures up to 26°C (78.8°F) and chilled water to 21°C/28°C (69.8°F/82.4°F), realizing a 2% to 3% increase in free-cooling hours and reducing the energy consumption of the CRAH fans. Similar programs are in progress at other facilities in its global portfolio.

[FIGURE 7 OMITTED]

[FIGURE 8 OMITTED]

This case study shares experience that is transferable to other facilities and demonstrates that it is possible to make significant energy reductions in a risk-averse, high-availability environment. This was achieved through the application of engineering analysis and best practices to optimize the delivery and control of the cooling systems, by understanding the thermodynamic principles behind them. Key to the program's success were an operations team with the determination to drive change and the creation of a learning environment to support in-house expertise. With greater awareness of best practices and of the underlying theory behind system behavior, operators had the confidence to challenge the status quo and make improvements. The results are not only energy and financial savings but also improved staff morale, fostered through a sense of ownership and achievement.

ACKNOWLEDGMENTS

Thanks to our anonymous client for sharing their data for this paper.

APPENDIX

The maximum possible heat transfer rate for hot fluid entering at $t_{hi}$ and cold fluid entering at $t_{ci}$ is

$q_{max} = C_{min}(t_{hi} - t_{ci})$ (2)

where $C_{min}$ is the smaller of the hot and cold fluid capacity rates, W/K (Btu/(h·°F)). The actual heat transfer rate is

$q = \varepsilon \, q_{max}$ (3)

[FIGURE 9 OMITTED]

[FIGURE 10 OMITTED]

For a given exchanger type, heat transfer effectiveness can generally be expressed as a function of the number of transfer units (NTU) and the capacity rate ratio $c_r$ (ASHRAE 2013):

$\varepsilon = f(\mathrm{NTU}, c_r, \text{flow arrangement})$ (4)

For a given cooling output and heat transfer factor (AU), it is possible to calculate the approach at different dry cooler fan speeds.

The fluid capacity rates for air and condenser water are calculated as follows, with

$c_p$ = specific heat capacity

$\dot{V}$ = volume flow rate

$\rho$ = density:

$C_a = c_{p,a} \, \dot{V}_a \, \rho_a$ (5)

$C_{cw} = c_{p,cw} \, \dot{V}_{cw} \, \rho_{cw}$ (6)

$C_{min}$ is the lower of these two values and is used in Equation 2, where $t_{hi}$ = condenser water inlet temperature and $t_{ci}$ = air inlet temperature.

$\mathrm{NTU} = AU / C_{min}$ (7)

$c_r = C_{min} / C_{max}$ (8)

The heat transfer effectiveness is thus derived as

$\varepsilon = f(\mathrm{NTU}, c_r)$ (9)

and the cooling output $q$ is calculated using Equation 3. This value was fixed, and $t_{ci}$ was determined through an iterative process for different fan speeds.
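To make the procedure concrete, here is a minimal sketch of the effectiveness-NTU calculation under stated assumptions: a constant AU, a crossflow (both fluids unmixed) effectiveness correlation, and the derated 2.3 fan-power exponent from the main text. All numeric inputs are illustrative placeholders, not the project's selection data, and the closed-form solve stands in for the spreadsheet iteration described above.

```python
import math

AU_DESIGN = 222.0    # kW/K, assumed constant heat transfer factor
C_W = 167.0          # kW/K, water capacity rate (about 6 K delta T at 1000 kW)
C_A_DESIGN = 340.0   # kW/K, air capacity rate at 100% fan speed (assumed)
Q_DUTY = 1000.0      # kW, fixed unit cooling duty
T_W_IN = 23.9        # degC, water entering the dry cooler

def effectiveness(ntu: float, cr: float) -> float:
    """Crossflow, both fluids unmixed (a common approximation), Eq. 9."""
    return 1 - math.exp((math.exp(-cr * ntu**0.78) - 1) / (cr * ntu**-0.22))

def max_ambient_for_duty(speed: float) -> float:
    """Highest air inlet temperature at which the unit still delivers Q_DUTY."""
    c_air = C_A_DESIGN * speed  # air capacity rate scales with fan speed
    c_min, c_max = min(c_air, C_W), max(c_air, C_W)
    eps = effectiveness(AU_DESIGN / c_min, c_min / c_max)
    # From q = eps * C_min * (t_hi - t_ci), solve for t_ci at q = Q_DUTY.
    return T_W_IN - Q_DUTY / (eps * c_min)

for speed in (1.0, 0.8, 0.6):
    t_amb = max_ambient_for_duty(speed)     # approach widens as speed drops
    power = speed ** 2.3                    # derated fan-power exponent
    print(f"speed {speed:.0%}: duty met up to {t_amb:.1f} degC ambient, "
          f"fan power {power:.0%} of full")
```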

REFERENCES

ASHRAE. 2008. Environmental guidelines for datacom equipment--expanding the recommended environmental envelope. White paper. Atlanta: ASHRAE.

ASHRAE. 2012. Thermal guidelines for data processing environments, 3rd ed. Atlanta: ASHRAE.

ASHRAE. 2013. ASHRAE Handbook--Fundamentals. Atlanta: ASHRAE.

Cameron, D., R. Tozer, and S. Flucker. 2013. Complexity and the human element. DatacenterDynamics Focus 3(30).

EPA. 2010. ENERGY STAR for data centers. Washington, DC: U.S. Environmental Protection Agency.

European Commission. 2015. Best practices for the EU code of conduct on data centres. Version 6.1.0.

Flucker, S., and R. Tozer. 2012. Minimising data centre total cost of ownership through energy efficiency analysis. Presented at the CIBSE Technical Symposium, London, UK, April 18-19.

Green Grid. 2007. Data center power efficiency metrics: PUE and DCiE. White Paper #6. Beaverton, OR: Green Grid.

Green Grid. 2014. Harmonizing global metrics for data center energy efficiency: Global taskforce reaches agreement regarding data center productivity. Beaverton, OR: Green Grid.

Tozer, R., and S. Flucker. 2012. Zero refrigeration for data centres in the U.S. ASHRAE Transactions 118(2).

Tozer, R., M. Wilson, and S. Flucker. 2008. Cooling challenges for mission critical facilities. Institute of Refrigeration, Session 2007-2008.

Tozer, R., M. Salim, and C. Kurkjian. 2009. Air management metrics in data centres. ASHRAE Transactions 115(1).

Robert Tozer, PhD, CEng

Member ASHRAE

Sophia Flucker, CEng

Robert Tozer is a visiting fellow at London South Bank University, London, UK and managing director at Operational Intelligence, Ltd., Kingston upon Thames, UK. Sophia Flucker is a director at Operational Intelligence, Ltd., Kingston upon Thames, UK.
Figure 11 Before and after PUE breakdown.

Pre-initiative: annualized PUE ~2.29

IT Load                                 45.00%
UPS Transformation Losses                5.00%
Data Hall Air Movement                  11.00%
Secondary Chilled Water Pumps            1.00%
Chiller/Primary Pumps                   16.00%
Lighting, ancillary, humidification     22.00%

Initiative to date: annualized PUE ~1.49

IT Load                                 69.00%
UPS Transformation Losses                7.50%
Data Hall Air Movement                   7.00%
Secondary Chilled Water Pumps            1.00%
Chiller/Primary Pumps                    7.50%
Lighting, ancillary, humidification     10.00%

Note: Table made from pie chart.