
Performance of a rack of liquid-cooled servers.

ABSTRACT

Electronics densification continues at an unrelenting pace at the server, rack, and facility level. With increasing facility density, airflow management has become a major challenge and concern. To deal with the resulting thermal management challenges, manufacturers are increasingly turning to liquid cooling as a practical solution. Most manufacturers have turned to liquid-cooled enclosed racks or rear-door heat exchangers, in which chilled water is delivered to the racks. Some are now looking to cold-plate cooling solutions that take heat directly off problem components, such as the CPUs, and reject it outside the facility.

This paper describes work done at the Pacific Northwest National Laboratory (PNNL) under a Department of Energy-funded program titled "Energy Smart Data Center." An 8.2 kW (27,980 Btu/h) rack of HP rx2600 2U servers has been converted from air-cooling to liquid spray cooling (CPUs only). The rack has been integrated into PNNL's main cluster and subjected to a suite of acceptance tests. Under the testing, the spray-cooled CPUs ran an average of 10[degrees]C (18[degrees]F) cooler than the air-cooled CPUs. Other peripheral devices, such as the memory DIMMs, ran an average of 8[degrees]C (14.4[degrees]F) cooler, and the power pod board was measured at 15[degrees]C (27[degrees]F) cooler. Since installation in July 2005, the rack has been undergoing a one-year uptime and reliability investigation. As part of the investigation, the rack has been subjected to monthly robustness testing and ongoing performance evaluation while running applications such as High Performance Linpack, parts of the NASA NPB-2 Benchmark Suite, and NWChem. The rack has undergone three months' worth of robustness testing with no major events. Including the robustness testing, the rack uptime is at 95.54% over 299 days. While undergoing application testing, no computational performance differences have been observed between the liquid-cooled and standard air-cooled racks. A small-scale (8-10 racks) spray-cooled Energy Smart Data Center is now being designed as a final step to demonstrate the feasibility of scaling liquid cooling from a single rack up to an entire facility.

INTRODUCTION

Data center heat flux has been of concern for a number of years, and there appears to be little relief in sight for the near future. ASHRAE has published an update to the classic Uptime power trend chart, which shows equipment heat loads continuing to increase (ASHRAE 2005, in particular Figure 3.10). Equipment heat flux is calculated by dividing the rack power by the rack footprint (i.e., width x depth). Schmidt (2005) shows actual equipment heat loads superimposed on the updated ASHRAE chart. The actual loads provide an increased level of confidence in the original Uptime chart, which points to the extreme heat flux densities that are starting to appear in data centers today.
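
For readers who want to reproduce the heat flux arithmetic, the short Python sketch below simply divides rack power by rack footprint. The 0.6 m x 1.07 m footprint is an assumed value for a standard 42U rack, not a figure taken from the ASHRAE chart.

    # Equipment heat flux = rack power / rack footprint (width x depth).
    # The footprint dimensions below are assumed for illustration only.
    def heat_flux_w_per_m2(rack_power_w, width_m=0.6, depth_m=1.07):
        """Return rack heat flux in W/m^2."""
        return rack_power_w / (width_m * depth_m)

    if __name__ == "__main__":
        for power_kw in (8.2, 28.8, 50.0):
            flux = heat_flux_w_per_m2(power_kw * 1000.0)
            print(f"{power_kw:5.1f} kW rack -> {flux / 1000.0:6.1f} kW/m^2")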

It is projected that individual server power dissipation will reach 800 W (2,730 Btu/h) in a 1U vertical space around 2007 and that this level will persist through the end of the product lifespan in approximately 2013 (Scientific Computing 2006). For a 42U rack containing 36 x 1U servers, the rack heat load will reach 28.8 kW (98,270 Btu/h). Rasmussen (2005) shows the typical rack heat load at 1.7 kW (5,801 Btu/h) (2003 data), with the maximum 1U server rack heat load at approximately 16 kW (54,590 Btu/h) and the maximum blade server heat load at approximately 20 kW (68,240 Btu/h). Scaling Rasmussen's data to a 30 kW (102,400 Btu/h) rack of blade servers indicates that approximately 3000 cfm (1.4 [m.sup.3]/s) of airflow is needed to air-cool the rack; scaling up to a 50 kW (170,600 Btu/h) rack indicates that approximately 5000 cfm (2.4 [m.sup.3]/s) is needed. This trend is unsustainable, and datacom equipment manufacturers are aware of it. Alternative cooling solutions are now being heavily investigated, and the vast majority of them involve liquid cooling.
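
The airflow figures quoted from Rasmussen's data can be approximated with a simple sensible-heat balance, Q = rho x V x cp x dT. The Python sketch below is a minimal illustration; the air properties and the assumed 18 degree C exhaust-to-inlet temperature rise are this sketch's assumptions, not values taken from Rasmussen (2005).

    # Approximate airflow needed to carry a rack heat load with sensible cooling:
    #   Q = m_dot * cp * dT  ->  V_dot = Q / (rho * cp * dT)
    # rho, cp, and dT are assumed illustrative values.
    RHO_AIR = 1.2        # kg/m^3
    CP_AIR = 1005.0      # J/(kg*K)
    M3S_TO_CFM = 2118.88

    def required_airflow_cfm(rack_power_w, delta_t_c=18.0):
        """Return the volumetric airflow (cfm) needed to absorb rack_power_w."""
        v_dot_m3s = rack_power_w / (RHO_AIR * CP_AIR * delta_t_c)
        return v_dot_m3s * M3S_TO_CFM

    for kw in (30.0, 50.0):
        print(f"{kw:.0f} kW rack needs roughly {required_airflow_cfm(kw * 1000):,.0f} cfm")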

Several vendors use chilled facility water and an air-to-liquid heat exchanger to cool the air delivered to the rack. One vendor places an air-to-liquid heat exchanger on the side of a rack of servers (Vendor #1 2006). The rack's hot exhaust air is directed into the heat exchanger, cooled, and then blown out the front, ready for reuse by the servers. Product literature states that a maximum capacity of 30 kW (102,400 Btu/h) can be provided when 7.2[degrees]C (44.9[degrees]F) water is delivered at 68.1 L/min (18.1 gpm). The vendor cites as a key advantage that the solution can be used with or without a raised floor.

A second vendor delivers chilled refrigerant to evaporator units mounted to the tops of racks and in the data center's ceilings (see Vendor #2 [2005]). This solution is used in conjunction with raised floors and supplements the chilled-air supply from the floor tiles. The product literature states that this solution can provide a maximum additional cooling capacity of 16 kW (54,590 Btu/h) per ceiling-mounted unit and 8 kW (27,300 Btu/h) per rack-mounted unit. The number of units installed and their placement depend on the heat load of the racks.

A third vendor takes air conditioning one step further by placing the servers in a totally enclosed rack and delivering chilled air to the front of the servers. The product literature states that a maximum capacity of 30 kW (102,400 Btu/h) can be provided when 7[degrees]C (44.6[degrees]F) water is delivered at 75.7 L/min (20 gpm).

A fourth vendor uses an air-to-liquid heat exchanger mounted to the rear (exhaust side) of a rack of servers. In one implementation, a central distribution unit (CDU) is used to condition facility water, via a mixing valve, to above a facility's dew point, which it then distributes to multiple rear-door heat exchangers. The heat exchanger does not use additional fans and relies on the server fans to push the exhaust air through the heat exchanger. The manufacturer states that, on average, a typical heat exchanger mounted in the rear door can provide a cooling capacity of 15 kW (51,180 Btu/h) when 18[degrees]C (64.4[degrees]F) water is provided at 37.8 L/min (10 gpm).

The foregoing briefly describes several vendor offerings that do not integrate cooling into the servers themselves. The authors are particularly interested in liquid-cooling solutions that do integrate into the servers, as this is the focus of the paper. Numerous published studies focus on the performance of cold-plate solutions. Patterson et al. (2004) conducted a numerical study of heat transfer in stacked microchannels, with application to high-end electronics cooling; a number of different internal (water) flow arrangements were studied with the objective of minimizing wall temperature (and, in turn, microprocessor temperature). Copeland (2005) reviews liquid-cooled (primarily water) cold-plate technology for implementation in high-density (1U) servers and blades, covering a variety of materials, internal geometries, and flow conditions. Numerous vendors also offer cold-plate solutions. Two vendors, in collaboration with the University of Paderborn in Germany, have assembled and are testing at least one rack of servers cooled with water-cooled cold plates.

This study focuses on the implementation, and associated results, of a rack of liquid-cooled servers. In this implementation, part of the overall cooling solution has been integrated into the servers themselves. To achieve this, the active fan heat sinks have been removed from the individual servers in a full rack of HP rx2600 2U servers. Upon removal of the fan heat sinks, the servers were retrofitted with spray module kits that include two spray-cooled cold plates per kit. The kits, in turn, are attached to a thermal management unit that conditions the coolant (a dielectric) to above facility dew point. The coolant is then delivered to the cold plates via a rack-mounted supply manifold. Details on the implementation and associated results are provided in the ensuing sections.

LIQUID COOLING INVESTIGATION AT PNNL

PNNL and the authors have been investigating liquid cooling of servers under a Department of Energy program titled "Energy Smart Data Center." The overall objectives of the program are to demonstrate

a. the energy efficiency of the cooling solution (via coefficient of performance at the facility level),

b. an attractive total cost of ownership, and

c. scalability to a fully liquid-cooled data center.

Additional detail is provided in Cader et al. (2006). The focus of this paper is to provide information on the feasibility of scaling the cooling solution up to a full facility. The primary results in support of this part of the effort are presented in the sections covering robustness testing, system uptime, and application testing.

While the current supercomputer at PNNL is maintained within manufacturers' specifications, PNNL will not be able to move to higher-power racks without significant and costly changes to the facility. With the current investigation, the laboratory is interested in seeing what cost-effective densification gains can be made by implementing liquid cooling within the current footprint.

Hardware Description

A 42U rack of 18 HP Integrity rx2600 2U servers has been the subject of this investigation. The rack is identical to the air-cooled racks deployed in PNNL's Molecular Science Computing Facility supercomputer. The servers are configured with dual Intel IA-64 Itanium 2 processors running at a clock speed of 1.5 GHz and with 12 GB of RAM. The Itanium 2 processor uses the Intel Madison core, which has a thermal design power (TDP) of 107 W (365 Btu/h) and a maximum case temperature of 83[degrees]C (181.4[degrees]F). The rack also houses two HP ProCurve Gigabit Ethernet switches, a Cyclades serial/IP switch, and a Keithley Integra 2750 data acquisition system.

The liquid-cooling (spray-cooling) system consists of a spray module kit (SMK) for each server, a rack manifold (incorporating both supply and return manifolds), and a thermal management unit (TMU). SMK design, operation, and performance capabilities are detailed in Schwarzkopf et al. (2005). An air-cooled server is converted by removing the fan heat sinks and replacing them with an SMK. All converted servers are then attached to the rack manifold, which, in turn, is attached to the TMU. Figure 1 is a picture of an individual spray module, a converted server, and the fully converted rack.

The TMU uses a liquid-to-liquid heat exchanger (see Figure 2). It currently receives 7[degrees]C (44.6[degrees]F) chilled water from the facility but is designed to eventually receive cooling tower water (to further increase the facility's energy efficiency). Commercial-grade tubing and connectors are used to make the connection to the facility water, but PNNL remains extremely concerned about water leaks for this test/alpha system. As a result, a water leak detection system has been incorporated into the system design. The water leak detection system consists of a control box, an external power supply, an output relay to the facility, and a rope sensor. The rope sensor is wrapped around connections and tubing and laid on the floor in the zones of concern. The sensor is tied in to the control box, which in turn is tied to the fail-safe shutoff valves and the Environmental Molecular Sciences Laboratory (EMSL) power operator. In the event of a leak, the rope sensor triggers the control box, which then opens the output relay. The fail-safe shutoff valves close, isolating the system from facility water. Opening the output relay also signals the EMSL facility operator that the water detection system has been triggered. If the solenoid valves do not respond appropriately, mechanical ball valves attached to the facility's hard plumbing are manually closed to isolate the rack from the facility water.
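
The leak-response sequence described above is essentially a hard interlock: a triggered rope sensor closes the fail-safe valves and raises alarms. The Python sketch below captures that control flow in outline form only; the class and the sensor, valve, relay, and pager interfaces are hypothetical placeholders, not the actual control box firmware.

    # Minimal sketch of the leak-detection interlock described in the text.
    # All hardware interfaces here are hypothetical placeholders.
    class LeakInterlock:
        def __init__(self, rope_sensor, shutoff_valves, facility_relay, pager):
            self.rope_sensor = rope_sensor        # callable, True when moisture detected
            self.shutoff_valves = shutoff_valves  # objects with a close() method
            self.facility_relay = facility_relay  # object with an open() method
            self.pager = pager                    # object with a notify(msg) method

        def poll(self):
            """Check the rope sensor and run the fail-safe sequence on a leak."""
            if self.rope_sensor():
                for valve in self.shutoff_valves:
                    valve.close()                 # isolate the rack from facility water
                self.facility_relay.open()        # signals the facility operator
                self.pager.notify("Water leak detected: fail-safe valves closed")
                return True
            return False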

As mentioned, the facility provides 7[degrees]C (44.6[degrees]F) chilled water to the TMU. The proportional valve throttles the water flow because 7[degrees]C (44.6[degrees]F) chilled water is colder than the cooling system requires; the cooling system supplies coolant at a minimum of 20[degrees]C (68[degrees]F) and is designed to deliver coolant above 30[degrees]C (86[degrees]F) when cooling tower water is provided to the TMU. The TMU also has the ability to shut all the servers down if operating conditions are compromised.
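
Throttling the facility water with a proportional valve amounts to a feedback loop that keeps the coolant supply at or above its setpoint. The sketch below shows one plausible form of such a loop; the gain, limits, and 20 degree C setpoint are illustrative assumptions, not the TMU's actual control parameters.

    # Illustrative proportional control of the facility-water valve:
    # open the valve more when the coolant runs warm, close it when it runs cold.
    # Gain, limits, and setpoint are assumed values for illustration.
    def valve_command(coolant_supply_c, setpoint_c=20.0, gain=0.1,
                      min_open=0.0, max_open=1.0):
        """Return a valve opening fraction (0 = closed, 1 = fully open)."""
        error = coolant_supply_c - setpoint_c   # positive when coolant is too warm
        opening = gain * error
        return max(min_open, min(max_open, opening))

    for temp in (18.0, 20.0, 24.0, 32.0):
        print(f"coolant at {temp:4.1f} C -> valve opening {valve_command(temp):.2f}")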

The coolant used in the system is a fluorinert (dielectric) made by 3M Corporation (PF5050). PF5050 has the following approximate properties given at 101.3 kPa (1 atm) and room temperature: boiling point of 30[degrees]C (86[degrees]F), specific heat of 1,048 J/kg x K (0.25 Btu/[lb.sub.m] x [degrees]F), viscosity of 4.69 x [10.sup.-4] kg/m x s (3.152 x [10.sup.-4] lb/ft x s), thermal conductivity of 0.056 W/m x K (0.032 Btu/h x ft x [degrees]F), and a latent heat of vaporization of 102.9 kJ/kg (44.2 Btu/lb).
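
Using the PF5050 properties listed above, a rough coolant-flow estimate can be made. The sketch below is a back-of-the-envelope calculation only; it assumes that the full 107 W TDP of each of the rack's 36 CPUs is absorbed by the coolant and is not a statement of the actual spray module design point.

    # Rough coolant mass flow if CPU heat were absorbed purely by evaporation.
    # PF5050 properties are from the text; the per-rack heat load is an
    # assumption based on 36 CPUs at the 107 W TDP quoted earlier.
    H_FG = 102.9e3      # J/kg, latent heat of vaporization of PF5050
    CP = 1048.0         # J/(kg*K), specific heat of liquid PF5050

    cpu_tdp_w = 107.0
    num_cpus = 36
    q_total = cpu_tdp_w * num_cpus            # ~3.9 kW of CPU heat per rack

    m_dot_latent = q_total / H_FG             # kg/s if all heat goes into boiling
    print(f"Evaporation-only coolant flow: {m_dot_latent * 1000:.1f} g/s")

    # For comparison: sensible-only cooling with an assumed 10 K coolant rise.
    m_dot_sensible = q_total / (CP * 10.0)
    print(f"Sensible-only flow at 10 K rise: {m_dot_sensible * 1000:.1f} g/s")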

[FIGURE 1 OMITTED]

[FIGURE 2 OMITTED]

Acceptance Test Methodology

Extensive testing was conducted before gaining final acceptance from PNNL. The first step involved baseline testing of the air-cooled servers at PNNL's Molecular Science Computing Facility (MSCF) before any hardware modifications. CPU temperatures and power draw, various peripheral component temperatures, server power draw, and full-rack power draw were measured while fully exercising the servers using a Linux-based spin-loop program called BurnI2. The system was then shipped to the authors' test facility, where the air-baseline tests were repeated in the facility's integration lab (under slightly different conditions than those found at PNNL). The air-cooling results obtained at PNNL were compared to those obtained in the integration lab, and no anomalies were found. The air-cooled servers were then converted to spray cooling, and the tests conducted on the air-cooled rack were repeated for the spray-cooled rack. This step was important because the operating environment in the integration lab was held constant, allowing the air-baseline and liquid-cooled results to be compared directly. The system was then shipped back to PNNL and installed in the MSCF, and the tests were conducted again for the spray-cooled system.

Three servers were selected for thorough instrumentation during the testing. The servers were selected because they drew the most power. These servers were instrumented to measure 27 different peripheral component temperatures, server power draw, and CPU power draw. Peripheral components included several of the memory DIMMs, the North Bridge, several Ethernet-related components, both hard drives, and seven point-of-load power-conversion transistors (FETs). CPU temperatures and air inlet and exhaust temperatures were gathered on the remaining 15 servers.

During testing at the authors' facility, the servers were fully exercised using BurnI2. At PNNL, the servers were fully exercised using BurnI2, and High Performance Linpack (HPL) was also used for some of the testing. BurnI2 is a calculation-loop program that only loads the processor. Linpack is a typical high-performance computing (HPC) benchmarking program that loads the processor while also measuring the processor's computing performance. The overall acceptance test suite used both "static" and "dynamic" loading of the servers. Under static loading, the test program was launched at the outset and allowed to run at maximum performance for a prescribed period of time. Under dynamic loading, a script written by PNNL directed BurnI2 to start and stop at prescribed times on combinations of all 18 servers. The dynamic loading was designed to loosely simulate an actual job running at PNNL, and it provided insights into processor temperature ramp rates and the ability of the liquid-cooling system to respond to dynamic loads. Results for the static tests run at the authors' facility and the dynamic tests conducted on site at PNNL are presented in the next section.
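
PNNL's dynamic-loading script is not reproduced in the paper. The sketch below only illustrates the general idea of starting and stopping a CPU-load program on subsets of nodes at prescribed times; the node names, the "burnI2" invocation, the SSH mechanism, and the schedule are all hypothetical.

    # Hypothetical sketch of a dynamic-loading schedule: start and stop a CPU
    # load program (assumed here to be invoked as "burnI2") on subsets of nodes.
    import subprocess
    import time

    NODES = [f"m10{i:02d}" for i in range(1, 19)]   # hypothetical node names

    SCHEDULE = [
        # (start_minute, nodes_to_load)
        (0, NODES),          # full rack under load
        (30, NODES[:9]),     # drop to half the rack
        (60, NODES),         # back to full load
    ]

    def set_load(nodes_on):
        """Stop the load program everywhere, then start it on nodes_on."""
        for node in NODES:
            subprocess.run(["ssh", node, "pkill -f burnI2"], check=False)
        for node in nodes_on:
            subprocess.Popen(["ssh", node, "nohup burnI2 >/dev/null 2>&1 &"])

    elapsed = 0
    for start_minute, nodes_on in SCHEDULE:
        time.sleep((start_minute - elapsed) * 60)
        elapsed = start_minute
        set_load(nodes_on)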

Test Results

As described in the "Acceptance Test Methodology" section, extensive characterization testing of the liquid-cooled system was conducted at the authors' facility prior to shipping to PNNL. Extensive on-site acceptance and application testing was conducted at PNNL after rack installation. This section presents results from both locations.

Table 1 presents results for testing at the authors' facility. The table compares CPU temperatures only, for the air- and spray-cooled configurations of 3 of the 18 servers in the rack. The CPUs were maintained at approximately 72[degrees]C (161.6[degrees]F) for the air-cooled servers and approximately 61[degrees]C (141.8[degrees]F) for the spray-cooled servers. Figure 3 presents a histogram of the temperature differences (i.e., temperature of air-cooled CPUs less temperature of spray-cooled CPUs) for all CPUs in the full rack at the authors' test facility. Accounting for all the CPUs, the spray-cooled CPUs run from 8[degrees]C to 19[degrees]C (14.4[degrees]F to 34.2[degrees]F) cooler than the air-cooled CPUs.

[FIGURE 3A OMITTED]

Table 2 compares the air-cooled peripheral device temperatures to the spray-cooled device temperatures. The data were acquired during testing at the authors' test facility. Because the air-cooled devices were tested at an ambient temperature 4.4[degrees]C (7.9[degrees]F) higher than that for comparable testing of the spray-cooled servers, all air-cooled device temperatures were adjusted downward accordingly before being compared to the devices in the spray-cooled servers. These differences are reported in the "Adjusted Difference" column of Table 2.
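
The ambient adjustment can be written compactly: the adjusted difference is the raw air-minus-spray temperature difference less the difference in ambient temperature between the two tests. The sketch below shows that bookkeeping with one Table 2 row as a worked example; note that the tabulated values work out when the node-level ambient readings are used for the correction.

    # Ambient-adjusted temperature difference, as reported in Table 2:
    #   adjusted = (T_air - T_spray) - (T_amb_air - T_amb_spray)
    def adjusted_difference(t_air_c, t_spray_c, amb_air_c, amb_spray_c):
        return (t_air_c - t_spray_c) - (amb_air_c - amb_spray_c)

    # Worked example: Node 1, DIMM 1A from Table 2.
    diff = adjusted_difference(43.4, 34.0, 17.0, 15.7)
    print(f"Adjusted difference: {diff:.1f} C")   # 8.1 C, matching the table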

Table 2 shows all the devices except the hard drives in the spray-cooled servers running cooler than those in the air-cooled servers. For example, the memory ran from 6.5[degrees]C to 8[degrees]C (11.7[degrees]F to 14.4[degrees]F) cooler, the Ethernet chips ran 8[degrees]C to 10[degrees]C (14.4[degrees]F to 18.0[degrees]F) cooler, and the North Bridge chips ran 3[degrees]C to 7[degrees]C (5.4[degrees]F to 12.6[degrees]F) cooler. Figure 4 presents a histogram of temperature differences for the FETs (i.e., temperature of air-cooled FETs less temperature of FETs in spray-cooled servers) from testing at the authors' facility. The air-cooled FETs were 13[degrees]C to 21[degrees]C (23.4[degrees]F to 37.8[degrees]F) hotter than the FETs in the liquid-cooled servers. The data show that the hottest FETs were located immediately downstream of the CPUs.

The on-site (at PNNL) acceptance testing for the liquid-cooled rack consisted primarily of "static" and "dynamic" burn-in tests. As described in the "Acceptance Test Methodology" section, the static testing was conducted by subjecting the rack to maximum load using the BurnI2 program. Dynamic testing used a script that executed and cancelled the BurnI2 program at prescribed time intervals to vary the rack computing load. Results for the dynamic burn-in are shown in Figures 5 and 6 and Table 3.

[FIGURE 3B OMITTED]

Figure 5 and Table 3 present the results for an approximately 68-hour continuous burn-in test. Although a large amount of data were collected for the cooling system and servers, only the reservoir pressure, coolant supply temperature ("HX out"), and pump discharge pressure are discussed here. The pump discharge pressure was set to maintain approximately 137.9 kPad (20 psid) across the full system and held steady throughout the testing. The reservoir pressure and coolant supply temperature tracked the heat load absorbed by the coolant (from the CPUs). Because the dynamic script varied the workload on the servers, the heat dissipated by the CPUs varied; as a result, the reservoir pressure and coolant supply temperature cycled within the envelopes summarized in Table 3. The CPU temperatures for all the servers also responded to the variation in coolant supply temperature. Figure 6 shows the CPU temperatures recorded for node (server) M1014 during the dynamic burn-in. The CPU0 temperature cycled over a range of up to 14[degrees]C (25.2[degrees]F) for the duration of the testing, while CPU1 cycled over a range of up to 13[degrees]C (23.4[degrees]F). No issues were encountered with the cooling system, or with any of the servers, for the duration of the testing.

ROBUSTNESS TESTING

PNNL staff is aware that facility cooling will become increasingly difficult as rack heat loads continue to increase. One of the options available involves liquid cooling of their servers. In order for laboratory staff to make a sound decision on the facility-level implementation of the liquid-cooling solution presented in this paper, they have requested additional data on the robustness of the overall solution. The authors designed a series of robustness tests in response.

[FIGURE 4A OMITTED]

The robustness testing was designed to subject the cooling solution to an extraordinary level of stress on a monthly basis and to expose inherent weaknesses in the system. The results from the testing will be used to improve the system design as it moves from the current test/alpha state to beta and eventually productization. The test methodology is presented in the "Robustness Testing Methodology" section, with brief results presented in "Robustness Test Results."

Robustness Testing Methodology

The test methodology requires the computer and cooling systems to operate under the maximum operating conditions. Electrical and mechanical components are stressed under these conditions with the intent of causing failures. The test focuses on the cooling system's pumps, pump controllers, power supplies, electrical cabling, coolant plumbing, facility water plumbing, and fail-safe protection systems. The computing system is exercised at full power, which stresses the computers as well as the cooling system.

The primary operational test focuses on how the system responds to higher-than-normal current draws and pumping pressures. These conditions are generated by increasing the discharge pressure of the TMU's pump(s), which stresses the pump, electrical cabling, pump controller, system pressure transducers, system power supply, and system plumbing through the higher operating pressure. A secondary test includes cycling the pump speed, which stresses the system by introducing in-rush current and by mechanically cycling the system's power switch.
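
As an illustration of the kind of sequence this implies, the sketch below ramps a pump discharge-pressure setpoint and then cycles the pump speed while checking readings against limits. The pump interface, the limit values, and the dwell times are hypothetical; the actual TMU test procedure is not published here.

    # Hypothetical robustness-test loop: ramp the pump discharge pressure,
    # then cycle pump speed, checking readings against assumed limits.
    import time

    PRESSURE_LIMIT_KPAD = 200.0   # assumed safety limit, not a TMU specification
    CURRENT_LIMIT_A = 10.0        # assumed, for illustration

    def ramp_pressure(pump, setpoints_kpad, hold_s=300):
        """Step through pressure setpoints, holding and checking at each one."""
        for setpoint in setpoints_kpad:
            pump.set_discharge_pressure(setpoint)   # hypothetical pump interface
            time.sleep(hold_s)
            assert pump.read_discharge_pressure() < PRESSURE_LIMIT_KPAD
            assert pump.read_motor_current() < CURRENT_LIMIT_A

    def cycle_speed(pump, cycles=10, low_rpm=2000, high_rpm=4000, dwell_s=60):
        """Cycle pump speed to exercise in-rush current and switching."""
        for _ in range(cycles):
            pump.set_speed(high_rpm)
            time.sleep(dwell_s)
            pump.set_speed(low_rpm)
            time.sleep(dwell_s)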

Of great concern is the system's ability to detect and respond to a water leak. The system uses facility water in its liquid-to-liquid heat exchanger, so it must respond 100% of the time to protect the computing system and the facility. The robustness test for the water detection system covers the water detection sensor, the controller, the fail-safe shutoff valves, the TMU, and the MSCF facility operator's response to the leak. The water detection sensor and controller are triggered using a damp rag, which then causes the fail-safe shutoff valves to close. An alarm is also sent to the MSCF facility operator, a page is sent to the on-site technician, and an e-mail alert is sent to the program team. A sequence of events then takes place in response to the leak detection alert.

[FIGURE 4B OMITTED]

In addition to testing the mechanical and electrical systems, a battery of tests was run on the TMU control system to exercise the firmware and verify correct operation. This testing mainly consisted of executing commands and verifying the firmware's outputs in response to those commands.

Robustness Test Results

Selected results from the robustness testing are shown in Table 4. The robustness testing has been executed three times to date. The intent has been to expose weaknesses in the cooling system design. Table 4 indicates that, since the first round of testing, the system has performed without any major issues.

UPTIME

A key aspect of any high-performance computer is its availability to users for running production jobs. This is typically referred to as "uptime." The uptime is being tracked for the liquid-cooled rack described in the "Hardware Description" section for a full year. Uptime is defined for this study in the "Uptime Definition" section, and the results to date (as of April 25, 2006, or 299 days into the study) are presented in the "Uptime Results" section.

[FIGURE 5A OMITTED]

[FIGURE 5B OMITTED]

Uptime Definition

For this study, uptime is determined in two different ways. The first calculation considers the total number of hours of availability for all 36 CPUs in the spray-cooled rack. For this case, uptime is calculated by considering whether each individual CPU is either operational or being serviced/repaired, and then comparing the operational time against the total amount of time it could have been available. The second calculation considers the total number of hours during which the cooling system is online and providing sufficient cooling capacity to the 18 servers in the rack. This is referred to as the "cooling system uptime." In this case, uptime is calculated by comparing the time the cooling system was actually operational to the total amount of time it could have been available. This second calculation ignores the amount of time it took to bring up individual servers that experienced unique events. For example, a particular event involved replacing a power pod in an individual server. Although the cooling system was available to provide cooling during this service, the CPUs in that server were not available during this time.
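
Both definitions reduce to a ratio of time actually available to time potentially available. The Python sketch below computes the two figures side by side; the outage records in it are illustrative placeholders, not the actual PNNL event log.

    # CPU-availability uptime vs. cooling-system uptime, per the definitions above.
    TOTAL_DAYS = 299
    TOTAL_HOURS = TOTAL_DAYS * 24
    NUM_CPUS = 36

    # Illustrative outage records (hours), not the actual PNNL event log.
    cpu_outages = [                     # (cpus_affected, hours_down)
        (36, 4 * 24),                   # e.g., a full-rack shutdown
        (2, 36),                        # e.g., one server awaiting a power pod
    ]
    cooling_outage_hours = [4 * 24]     # cooling-system-level events only

    lost_cpu_hours = sum(n * h for n, h in cpu_outages)
    cpu_uptime = 1.0 - lost_cpu_hours / (NUM_CPUS * TOTAL_HOURS)
    cooling_uptime = 1.0 - sum(cooling_outage_hours) / TOTAL_HOURS

    print(f"CPU-availability uptime: {cpu_uptime:.2%}")
    print(f"Cooling-system uptime:   {cooling_uptime:.2%}")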

A typical uptime value across the high-performance computing market is 92%, with some high-availability, mission-critical systems achieving 95% or even 98% uptime. These higher values are typically achieved by keeping more spares on hand, contracting with the hardware vendor for higher-level service agreements, and so forth. Even these values are relatively low compared to other markets, such as telecommunications, because HPC systems are typically customized solutions using leading-edge components in specialized applications that can put greater stress on the overall system.

It is worth noting that PNNL typically performs system maintenance once a month, at which time the system may be brought down for routine service elements. Because this is typical operation for the system and is planned for, these times were not considered downtime for the purposes of the calculations.

Uptime Results

Figure 7 presents a graphical representation of the uptime for the rack. Between July 1, 2005, and April 14, 2006, rack uptime based upon CPU availability was 95.54%. For the same period, cooling system uptime was 96.69%.

The spray-cooled rack installed at PNNL is an engineering prototype. For this reason, when problems occurred, the system was not immediately repaired with spares on hand, but rather time was taken to gather data and inspect the system in an attempt to learn valuable information. In a similar vein, robustness testing was designed to put an extraordinary level of stress on the cooling system, to provide data to help improve system design. One iteration of the robustness testing on March 10, 2006, did, in fact, cause a four-day (including weekend) shutdown of the rack from anomalies involving the temperature sensors. Likewise, several small shutdowns were incurred as a result of tests that were undertaken to gather data but that were unrelated to actual system performance. The 95%+ uptime results achieved to date on the prototype system are encouraging and suggest that a highly reliable liquid-cooling solution is achievable for production systems.

APPLICATION TESTING

As described in the "Acceptance Test Methodology" section, the liquid-cooled rack was burned in using BurnI2 and Linpack. Although the selected applications do an adequate job of thermally stressing the CPUs, they do not test the majority of PNNL's cluster infrastructure. The liquid-cooled rack of servers has also yet to execute a production job at PNNL. If liquid-cooled servers are to have a chance at scale-up to a full facility, the subject rack should demonstrate the ability to execute a production job in a way that makes it "look" like a standard air-cooled rack currently used by PNNL. The first few steps toward this end have been taken by PNNL. The "Description of Applications" section describes key applications that are currently being run on the rack to investigate its performance under conditions that mimic a production job. The "Results from Application Testing" section presents key results from the running of the applications.

Description of Applications

The rack currently runs a bioremediation problem within the framework of a code called NWChem, which was developed at PNNL (Kendall et al. 2000). This computational chemistry application is based on the Global Arrays and MPI programming model. The goal is to provide a quantitative understanding, at the molecular level, of the thermodynamics involved in the uptake of recalcitrant ions by subsurface microbes. Running this application so far has not exposed any computational behavior that is different from the rest of the high-performance computing cluster.

[FIGURE 6A OMITTED]

[FIGURE 6B OMITTED]

[FIGURE 7 OMITTED]

During idling periods, the 18 nodes are exercised at regular intervals by running the Linpack benchmark (Dongarra et al. 2003), which is a set of routines to solve linear equations. In addition, a subset of the serial (NPB2.3-serial) and parallel (NPB2.4-MPI) versions of the Numerical Aerodynamic Simulation (NAS) Parallel Benchmark suite from NASA (Bailey et al. 1991) was run on varying sizes of input decks (S, A, B, C) and on a varying number of nodes (a scripting sketch for such a sweep follows the kernel list below). The maximum number of nodes was 16, thus excluding the bottom (head) and top nodes in the rack from the computation. The kernels were

* BT: block tridiagonal solver

* CG: conjugate gradient solver

* EP: embarrassingly parallel kernel representing Monte Carlo simulations

* FT: three-dimensional FFT PDE solver

* IS: integer sort

* LU: lower-upper diagonal solver with symmetric successive over-relaxation

* MG: multigrid three-dimensional Poisson PDE solver

* SP: scalar pentadiagonal solver
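
The benchmark sweep just described (kernels x input-deck classes x node counts) lends itself to a simple driver script. The sketch below is illustrative only: the binary naming follows the usual NPB2.x-MPI <kernel>.<class>.<nprocs> convention, but the paths, log handling, and mpirun invocation are assumptions rather than PNNL's actual setup.

    # Illustrative driver for an NPB2.4-MPI sweep over kernels, classes, and
    # process counts. Paths and mpirun options are assumptions.
    import math
    import subprocess

    KERNELS = ["bt", "cg", "ep", "ft", "is", "lu", "mg", "sp"]
    CLASSES = ["S", "A", "B", "C"]
    PROC_COUNTS = [1, 2, 4, 8, 16]   # 16 nodes maximum, per the text

    def valid(kernel, n):
        """BT and SP builds require a square number of processes."""
        if kernel in ("bt", "sp"):
            return math.isqrt(n) ** 2 == n
        return True

    for kernel in KERNELS:
        for cls in CLASSES:
            for n in PROC_COUNTS:
                if not valid(kernel, n):
                    continue
                binary = f"./bin/{kernel}.{cls}.{n}"   # NPB2.x naming convention
                with open(f"{kernel}.{cls}.{n}.log", "w") as log:
                    subprocess.run(["mpirun", "-np", str(n), binary],
                                   stdout=log, stderr=subprocess.STDOUT)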

The results are presented in the following section.

Results from Application Testing

Note that the results presented in this section do not represent any novel findings. As mentioned above, the goal of running the selected applications was mainly to verify that there were no performance anomalies specific to the liquid-cooled rack. This information is critical to PNNL as they consider scaling to a fully liquid-cooled cluster.

As a baseline, Figure 8 depicts the performance numbers in millions of operations per second (Mop/s) for NPB2.3, compiled with the Intel Itanium compiler version 8.1 with -O3 optimizations, running on 16 nodes. These numbers are in accordance with the expectations for a conventional air-cooled machine.

One point in the architecture space is represented by the speed-up curves shown in Figure 9 for the embarrassingly parallel (EP) kernel. The kernel scales strongly, independently of the size of the input deck.

Finally, the speed-up curve for the LU case is shown in Figure 10. In addition to large absolute performance numbers, there is strong scaling, as in the EP case. Weak scaling is seen at the transition from the small S input deck to the A, B, and C decks.
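
For reference, the speed-up and efficiency plotted in Figures 9 and 10 follow the usual definitions, S(n) = T(1)/T(n) and E(n) = S(n)/n. The short sketch below computes both from wall-clock times; the timing values in it are placeholders, not measured data from the rack.

    # Speed-up S(n) = T(1)/T(n) and parallel efficiency E(n) = S(n)/n.
    def speedup_and_efficiency(t1, times_by_nodes):
        results = {}
        for n, tn in sorted(times_by_nodes.items()):
            s = t1 / tn
            results[n] = (s, s / n)
        return results

    # Placeholder timings (seconds); not measured data.
    timings = {1: 1000.0, 2: 510.0, 4: 260.0, 8: 135.0, 16: 70.0}
    for n, (s, e) in speedup_and_efficiency(timings[1], timings).items():
        print(f"{n:2d} nodes: speed-up {s:5.2f}, efficiency {e:.2f}")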

Overall, from a computational performance aspect, there is no observable difference from an air-cooled rack. These measurements agree with previous observations indicating that there is no performance throttling of the Itanium Madison CPUs in the rx2600 servers for either the air-cooled or liquid-cooled servers.

SCALE-UP TO A LIQUID-COOLED DATA CENTER

A major objective of the DOE Energy Smart Data Center program has been to demonstrate scalability of the liquid-cooling solution under investigation to a full data center. Two major milestones toward a fully liquid-cooled data center are (1) achieving vendor adoption and (2) convincing data center operators to purchase and install liquid-cooled servers. The authors have been working with PNNL to accomplish item 2 and with several vendors on item 1.

[FIGURE 8 OMITTED]

Several steps have been taken to provide PNNL with the data necessary for the lab to make a sound decision before committing to liquid cooling. The first step involved extensive testing at the authors' test facility under a wide variety of conditions, as well as extensive testing at PNNL before gaining the lab's acceptance. In the second step, once installed, the liquid-cooled rack was run for a number of months before being subjected to a monthly robustness test routine. The robustness test routine thermally, mechanically, and electrically stresses cooling system components and is designed to cause failure of any weak parts of the system. Robustness testing also stresses the cooling system's firmware. As shown in the section on "Uptime Results," the liquid-cooled rack has had a 95.54% uptime to date, which includes three months' worth of robustness testing. This is an impressive accomplishment for a test/alpha system. The third step was for PNNL to run applications of interest on the liquid-cooled rack. Application testing started approximately 1.5 months ago, and, as reported in "Results from Application Testing," PNNL has found no differences in performance between the liquid-cooled rack and one of their standard air-cooled racks.

An important aspect of the Energy Smart Data Center program was to convince PNNL and DOE that the technology can raise PNNL's facility coefficient of performance (COP). An in-depth analysis of the facility's COP was undertaken using data from the testing previously described (Cader et al. 2006) and accompanying thermophysical analysis. The analysis predicted a 22% increase in facility COP if PNNL maintained current computational capacity but switched from air-cooled 2U servers to liquid-cooled 1U servers. It was assumed that heat transferred to the liquid-cooled portion of the 1U servers would be rejected directly to cooling tower water. In summary, the analysis indicated that by addressing cooling issues at the server and rack level, challenges at the facility and building level can be reduced; this is evident in the calculated increase in facility COP.
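
Facility COP here is the ratio of the heat load removed to the power consumed removing it. The sketch below shows only the arithmetic behind such a comparison; the load and cooling-power figures are placeholders chosen to illustrate how a 22% COP increase would be expressed, not values from the Cader et al. (2006) analysis.

    # Facility coefficient of performance: COP = heat removed / cooling power.
    def facility_cop(it_load_kw, cooling_power_kw):
        return it_load_kw / cooling_power_kw

    # Placeholder numbers for illustration only (not from Cader et al. 2006).
    it_load_kw = 1000.0
    baseline_cop = facility_cop(it_load_kw, cooling_power_kw=500.0)   # air-cooled 2U
    improved_cop = baseline_cop * 1.22                                # reported +22%

    print(f"Baseline facility COP: {baseline_cop:.2f}")
    print(f"With liquid-cooled 1U servers (+22%): {improved_cop:.2f}")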

[FIGURE 9 OMITTED]

[FIGURE 10 OMITTED]

The next major step involves scaling the liquid-cooling solution up to a miniature data center. PNNL and the authors are currently engaged in the design of a small-scale spray-cooled Energy Smart Data Center. The facility is being designed to house a maximum of 16 spray-cooled racks in the first phase. The facility and several racks will be heavily instrumented to enable extensive data collection. The data will be used to calculate facility and rack COP in real time and to determine the total cost of ownership for a complete spray-cooled Energy Smart Data Center. The findings of the study will assist PNNL in deciding when and whether to adopt liquid cooling. The findings from the scale-up exercise will be reported in a future paper.

CONCLUDING REMARKS

A full rack of air-cooled HP rx2600 2U servers has been converted from air-cooling to liquid-spray cooling. The servers were fully characterized under air cooling before conversion, and under spray cooling after conversion. Since installation of the spray-cooled servers in PNNL's facility, the full liquid-cooled rack has undergone 299 days of uptime and reliability testing. The liquid-cooled rack has also been subjected to monthly robustness testing and regular application testing.

Under the characterization testing, the air-cooled and liquid-cooled servers were fully exercised using BurnI2. When fully exercised, the air-cooled CPUs ran 8[degrees]C to 19[degrees]C (14.4[degrees]F to 34.2[degrees]F) hotter than the equivalent spray-cooled CPUs. In addition, the air-cooled FETs ran 2[degrees]C to 21[degrees]C (3.6[degrees]F to 37.8[degrees]F) hotter, the DIMMs ran 7[degrees]C to 8[degrees]C (12.6[degrees]F to 14.4[degrees]F) hotter, and the North Bridge chips ran 3[degrees]C to 7[degrees]C (5.4[degrees]F to 12.6[degrees]F) hotter. To date, the liquid-cooled rack has undergone three months' worth of robustness testing without any major events. Since the installation of the liquid-cooled rack, and including the robustness testing, the test/alpha rack has achieved an uptime of 95.54% over 299 days. Application testing has revealed no difference in computational performance between the air-cooled racks and the spray-cooled rack. While the current supercomputer at PNNL is maintained within manufacturers' specifications, PNNL is unable to increase facility densification without significant and costly facility changes; the facility cannot air-cool a fully loaded rack of 1U servers. The scale-up exercise conducted under this program shows that facility density can be significantly increased by using liquid-cooled 1U servers instead of the current air-cooled 2U servers.

The results to date from testing of the test/alpha liquid-cooled rack are encouraging and point to the feasibility of scaling the liquid-cooling solution up to an entire facility. PNNL and the authors are currently engaged in the design of a small-scale (8-10 racks) spray-cooled Energy Smart Data Center. Results from this next part of the study will be reported in future papers.

ACKNOWLEDGMENTS

The contributions of Tom Seim (PNNL), Kevin Fox (PNNL), Ryan Mooney (PNNL), Blanche Wood (PNNL), Robert Ressa (ISR), and Dave Locklear (consultant to ISR) are greatly appreciated.

REFERENCES

ASHRAE. 2005. Datacom Equipment Power Trends and Cooling Applications. Atlanta: American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.

Bailey, D.H., E. Barszcz, J.T. Barton, et al. 1991. The NAS parallel benchmarks. International Journal of Supercomputer Applications 5(3):66-73. www.nas.nasa.gov/Software/NPB/.

Cader, T., L. Westra, K. Regimbal, and R. Mooney. 2006. Air flow management in a liquid-cooled data center. ASHRAE Transactions 112(2).

Copeland, D. 2005. Review of low profile cold plate technology for high density servers. Electronics Cooling 11(2):14-18.

Dongarra, J.J., P. Luszczek, and A. Petitet. 2003. The LINPACK benchmark: Past, present, and future. Concurrency and Computation: Practice and Experience 15(9):803-20. http://www.netlib.org/linpack/.

Kendall, R.A., E. Apra, and D.E. Bernholdt. 2000. High performance computational chemistry: An overview of NWChem, a distributed parallel application. Computer Physics Communications, 128: 260-83.

Patterson, M.K., X. Wei, Y. Joshi, and R. Prasher. 2004. Numerical study of conjugate heat transfer in stacked microchannels. Proceedings of ITherm 2004, Las Vegas, NV, pp. 372-80.

Rasmussen, N. 2005. Cooling strategies for ultra-high density racks and blade servers. APC White Paper #46. http://www.apcmedia.com/salestools/SADE-5TNRK6_R4_EN.pdf.

Schmidt, R. 2005. Crossroads in thermal management of electronics from chip to datacom room. Presentation at the 5th International Business and Technology Summit, Natick, MA.

Scientific Computing. 2006. Power up/cool down: Increasing power and thermal management options for industry-standard servers. Webcast archived at http://www.scimag.com/coolpower/.

Schwarzkopf, J.D., T.L. Tilton, C.T. Crowe, and B.Q. Li. 2005. A low profile thermal management device for high power processors using enhanced flow boiling techniques and perfluorocarbon fluids. Presentation at the 2005 ASME Summer Heat Transfer Conference, San Francisco, CA.

Vendor #1. 2006. APC Corporation, "InfraStruXure[TM] InRow RC" specifications data sheet. http://apcmedia.apcc.com/pdf_downloads/litpdfs/998-1004.pdf.

Vendor #2. 2005. Liebert Corporation, "Taking Heat Removal to the Extreme," Liebert sales brochure. http://www.liebert.com/assets/products/english/products/env/xtreme/60Hz/bro_12pg/acrobat/sl_11265.pdf.

Tahir Cader, PhD

Levi Westra

Andres Marquez, PhD

Harley McAllister

Kevin Regimbal

Tahir Cader is the technical director, Levi Westra is a lead systems engineer, and Harley McAllister is a program manager in the Government Computing Group, Isothermal Systems Research, Liberty Lake, WA. Andres Marquez is a scientist in the Applied Computer Science Department in the Computational Sciences and Mathematics Division and Kevin Regimbal is the information technology manager at Pacific Northwest National Laboratory, Richland, WA.
Table 1. Comparison of Air-Cooled CPUs to Spray-Cooled CPUs (Testing at
Authors' Facility)

Test conditions:

                                           Air-Cooled      Spray-Cooled
Test name                                  082705b         071805a
Load                                       BurnI2          BurnI2
Avg. ambient, [degrees]C ([degrees]F)      14.8 (58.6)     15.5 (59.9)
Std. dev. amb., [degrees]C ([degrees]F)    2.1 (3.8)       1.5 (2.7)
Pump discharge, psid (kPad)                NA (NA)         20.0 (137.9)
Res. press., psia (kPaa)                   NA (NA)         15.1 (104.1)
Spray temp., [degrees]C ([degrees]F)       NA (NA)         19 (66.2)

CPU temperatures, [degrees]C ([degrees]F):

                  Air-Cooled                      Spray-Cooled
                  CPU0            CPU1            CPU0            CPU1
M1005 (Node3)     73 (163.4)      68 (154.4)      64 (147.2)      62 (143.6)
M1014 (Node2)     70 (158.0)      72 (161.6)      59 (138.2)      59 (138.2)
M1022 (Node1)     67 (152.6)      71 (159.8)      60 (140.0)      62 (143.6)
Average           70.0 (158.0)    70.3 (158.5)    60.7 (141.3)    61.0 (141.8)
Max               73 (163.4)      72 (161.6)      64 (147.2)      62 (143.6)
Min               67 (152.6)      68 (154.4)      59 (138.2)      59 (138.2)
StDev             3.0 (5.4)       2.1 (3.8)       2.9 (5.2)       1.8 (3.2)

Table 2. Comparison of Air-Cooled and Spray-Cooled Peripheral Devices
(Testing at Authors' Facility)

Test conditions:

                                           Air-Cooled      Spray-Cooled
File                                       082704b         071805a
Load                                       BurnI2          BurnI2
Avg. ambient, [degrees]C ([degrees]F)      20.4 (68.7)     16 (60.8)
Std. dev. amb., [degrees]C ([degrees]F)    3.6 (6.5)       NA (NA)
Pump discharge, psid (kPad)                NA (NA)         20.0 (137.9)
Res. press., psia (kPaa)                   NA (NA)         14.4 (99.3)
Spray temp., [degrees]C ([degrees]F)       NA (NA)         20 (68.0)

Device temperatures, [degrees]C ([degrees]F):

                       Air-Cooled      Spray-Cooled    Adjusted
Node   Component       Temp.           Temp.           Difference
1      Ambient         17.0 (62.6)     15.7 (60.3)     1.3 (2.3)
       DIMM 1A         43.4 (110.1)    34 (93.2)       8.1 (14.6)
       North Bridge    44.5 (112.1)    37.5 (99.5)     5.8 (10.4)
       ETHER 2         63.4 (146.1)    53.1 (127.6)    9 (16.2)
       FET 3           45.8 (114.4)    30.0 (86.0)     14.6 (26.3)
       HDD1            31.1 (88.0)     29.4 (84.9)     0.4 (0.7)
2      Ambient         24.1 (75.4)     18.2 (64.8)     5.9 (10.6)
       DIMM 1A         49.0 (120.2)    35.3 (95.5)     7.8 (14.0)
       North Bridge    52.5 (126.5)    43.3 (109.9)    3.3 (5.9)
       ETHER 2         71.3 (160.3)    57.6 (135.7)    7.8 (14.0)
       FET 3           54.3 (129.7)    34.3 (93.7)     14.1 (25.4)
       HDD1            35.5 (95.9)     30.4 (86.7)     -0.8 (-1.4)
3      Ambient         20.0 (68.0)     14.0 (57.2)     6.0 (10.8)
       DIMM 1A         49.0 (120.2)    36.5 (97.7)     6.5 (11.7)
       North Bridge    51.4 (124.5)    38.3 (100.9)    7.1 (12.8)
       ETHER 2         70.4 (158.7)    54.6 (130.3)    9.8 (17.6)
       FET 3           53.6 (128.5)    31.2 (88.2)     16.4 (29.5)
       HDD1            37.9 (100.2)    33.3 (91.9)     -1.4 (-2.5)

Table 3. Test Results from Dynamic Acceptance Tests (Testing at PNNL)

           HX In,          HX Out,         Reservoir Press.,   Discharge Press.,
           [degrees]C      [degrees]C      kPaa (psia)         kPad (psid)
           ([degrees]F)    ([degrees]F)

Min        27 (80.6)       18 (64.4)       95.8 (13.9)         136.5 (19.8)
Max        30 (86.0)       21 (69.8)       105.5 (15.3)        138.6 (20.1)
Average    27 (82.2)       20 (68.0)       99.3 (14.4)         137.9 (20.0)

Table 4. Tabulated Robustness Test Results

                                                              PASS/FAIL
Component                   Test Criterion                    1/10/2006   3/10/2006   4/14/2006

Spray-Cooled Cold Plate     Fluid flow rate (and              pass        NA          NA
                            pressure drop)
Fluid Tubing                Tubing has not collapsed          NA          pass        pass
                            under vacuum
                            Tubing not deformed at            NA          pass        pass
                            high pressure
Rack Manifold               Manifold is not leaking           NA          pass        pass
                            Manifold not deformed             NA          pass        pass
                            at high pressure
Pump                        Pumps respond to speed            NA          pass        pass
                            changes
Pump Motor Controllers      Pumps respond to speed            NA          pass        pass
                            changes
                            Pumps respond to fail-over        NA          pass        pass
                            command
Fluid Level Sensor          Level sensor responds             pass        pass        pass
                            to input commands
Pressure Transducers        Power supply normal               NA          pass        pass
                            while cycling valve
                            Operational when load             NA          pass        pass
                            applied to 12 VDC
Mechanical Switches         Power switch cycled 10            pass        pass        pass
                            times
Internal Cabling            Power supply normal               NA          pass        pass
                            while cycling valve
                            Operational when load             NA          pass        pass
                            applied to 12 VDC
                            Pumps attain/maintain             NA          pass        pass
                            specified pressure
TMU Power Supplies          Power supply normal               NA          pass        pass
                            while cycling valve
                            Operational when load             NA          pass        pass
                            applied to 12 VDC
                            Operational when load             NA          pass        pass
                            applied to 12 VDC
TMU Sealing                 No detectable leaks               NA          pass        pass
Cooling System Controller   Cooling system responds           pass        pass        pass
                            to test inputs
                            Pumps respond to fail-over        pass        pass        pass
                            command
                            Pumps attain/maintain             NA          pass        pass
                            specified pressure
External Cabling            Pumps attain/maintain             NA          pass        pass
                            specified pressure
                            Power supply normal               NA          pass        pass
                            while cycling valve
                            Operational when load             NA          pass        pass
                            applied to 12 VDC
Water Detection             Facility detects alarm,           NA          pass        pass
                            valves respond
Water-Throttling Valve      Valve responds to                 NA          pass        pass
                            control inputs
Computer Fail-Safe          All nodes shut down               NA          pass        pass
                            because of watchdog
Computing Robustness        Execute High Performance          NA          NA          pass
                            Linpack

* "NA" indicates that test was not conducted.
