Data centers are notorious for intensive energy consumption. According to the U.S. Department of Commerce, these facilities have typical power densities up to 40 times higher than commercial office buildings. Because such concentrations of power usage are expensive and taxing to the electrical grid, data centers are excellent targets for efficiency improvements. The first things to assess in any load reduction program are server and HVAC loads, because they are the primary power consumers in a data center (Figure 1).
An attractive first target in an efficiency improvement program is the IT power load, because savings can be realized for little or no cost and are amplified through the reduction of cooling loads. All of the power input to IT equipment eventually turns to heat, which must then be removed by the cooling system. Thus, if the IT equipment uses less energy, the accompanying reduction in the facility’s cooling load will lead to additional energy savings. Although there is considerable variation among facilities, a typical data center that reduces its computer load power requirements by 1.0 kilowatt (kW) would also offset approximately 0.6 kW of air-conditioning power.
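The rule of thumb above can be sketched as a quick estimate. This is illustrative only: the 0.6 cooling ratio is the article's "typical" figure and will vary by facility.

```python
# Estimate combined power savings from an IT load reduction, using the
# article's rule of thumb: each 1.0 kW of IT load removed also offsets
# roughly 0.6 kW of air-conditioning power. The ratio varies by facility.

def total_savings_kw(it_reduction_kw, cooling_ratio=0.6):
    """Return combined IT-plus-cooling power savings in kW."""
    cooling_savings_kw = it_reduction_kw * cooling_ratio
    return it_reduction_kw + cooling_savings_kw

print(total_savings_kw(1.0))  # 1.0 kW IT + 0.6 kW cooling = 1.6 kW total
```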
Efforts to reduce the baseline consumption of data centers are effective because data centers typically exhibit high load factors, with little distinction between baseline and peak loads. This high load factor has two root causes: First, data centers operate 24 hours a day, so there is little relief during what would normally be off hours. Second, most servers in data centers typically run below 50 percent of their maximum utilization and draw about 70 percent of their peak load while idle, according to Lawrence Berkeley National Laboratory (LBNL) (Figure 2).
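The scale of the waste implied by the LBNL figures above is easy to work out. The 0.5 kW server size used here is hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrate the LBNL idle-power figure: a server drawing about
# 70 percent of its peak power while idle consumes energy around the
# clock even when doing little useful work. Server size is hypothetical.

HOURS_PER_YEAR = 24 * 365  # continuous, 24x7 operation = 8,760 hours

def idle_power_kw(peak_kw, idle_fraction=0.7):
    """Power drawn at idle, given peak draw and the idle fraction."""
    return peak_kw * idle_fraction

# A hypothetical server with a 0.5 kW peak draw, idling all year:
annual_idle_kwh = idle_power_kw(0.5) * HOURS_PER_YEAR
print(f"~{annual_idle_kwh:.0f} kWh per year consumed at idle")
```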
LBNL estimates that the typical data center could save between 20 and 40 percent of its annual energy costs, with savings of up to 50 percent possible using aggressive strategies. The primary targets for improvement are IT and HVAC loads, and the payback period on many improvements is between one and three years.
Even though longer-term, capital-intensive solutions have the highest potential for savings, numerous low-cost changes can have an immediate impact on your bottom-line expenses. For maximum effect, focus first on the computing and cooling loads.
Diagnose your system. The DC Pro online suite, a tool created by the U.S. Department of Energy (DOE), characterizes data center energy use, suggests potential efficiency improvements, and estimates costs as well as expected savings. It is available free from the DOE’s web site and requires basic information about a data center, such as a description of the facility; current utility costs; and system information for IT, cooling, power, and on-site generation. The tool allows energy managers to select measures that best suit their particular data center, and offers improvement suggestions that include HVAC, air management, IT equipment, and software changes.
Check power-management software settings. According to the DOE, simply utilizing the power-management software that comes with most servers can often reduce annual energy consumption by about 30 percent. Power-management features adjust power usage in response to the processor’s activity level, enabling the data center to run at the minimum power level necessary to perform required tasks. These adjustments can be scheduled—powering some servers down or even off when workloads decrease (at night and on weekends, for many corporate servers)—or continuous, scaling microprocessor operating voltage or frequency in response to server demand.
Eliminate unused equipment. A study by Sun Microsystems found that 8 to 10 percent of servers deployed are unused. After identifying potentially unused servers, consider simply shutting them off for a period of time—perhaps 90 days. Then, if there are no complaints, remove them from the system. When Sun removed its unused servers, it reported a 14 percent drop in total electrical load.
Broaden temperature and humidity ranges. Many data centers operate within highly restricted temperature and humidity ranges despite the fact that studies by LBNL and ASHRAE (the American Society of Heating, Refrigerating, and Air-Conditioning Engineers) show a much wider acceptable range. Widening these settings, as well as increasing the setpoint temperature, can reduce HVAC costs. Recommendations for the thermal design of data centers can be found in the ASHRAE publication “Thermal Guidelines for Data-Processing Environments, Second Edition,” which is available for purchase from ASHRAE’s web site.
Hire a building commissioner. The U.S. Environmental Protection Agency (EPA) has stated that ongoing commissioning of a typical data center could improve HVAC efficiency by 20 percent. Commissioning is a process in which engineers observe a building and perform a tune-up to ensure that its systems are operating efficiently. Savings typically result from reducing HVAC waste by resetting existing controls, performing simple system maintenance, and identifying inefficient equipment to replace. This process usually costs between 5 and 40 cents per square foot.
After the quick and easy methods of reducing energy consumption have been exhausted, there are an abundance of potential cost-saving measures to consider. In this section, we discuss a few of the most promising areas to investigate.
Benchmark your facility. It can be difficult to identify whether a data center is wasting energy. A benchmarking program that tracks the power density of a facility gives data center managers a quantitative method of evaluating it. According to LBNL, typical data center power density is about 50 watts per square foot. If a particular facility is much above that average, it is likely a good candidate for improvements. LBNL has published an online self-benchmarking guide (PDF) that provides an excellent protocol for assessing the power consumption of a data center.
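The benchmarking calculation itself is simple. The facility numbers in this sketch are hypothetical; only the ~50 W/sq ft reference point comes from LBNL.

```python
# Compute power density (watts per square foot) to compare a facility
# against LBNL's typical figure of about 50 W/sq ft.
# The load and floor area below are hypothetical example values.

def power_density_w_per_sqft(total_load_kw, floor_area_sqft):
    """Facility power density in watts per square foot."""
    return total_load_kw * 1000 / floor_area_sqft

density = power_density_w_per_sqft(total_load_kw=750, floor_area_sqft=10_000)
print(f"{density:.0f} W/sq ft")  # 75 W/sq ft, well above the ~50 W/sq ft average
```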
Calculate your metrics. Two metrics that are widely used to characterize a data center’s power usage are power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE). Both were developed by the Green Grid; detail on the Green Grid’s method of calculation is available in the white paper “The Green Grid Data Center Power Efficiency Metrics: PUE and DCiE.” In general, DCiE is the ratio of IT equipment power to total data center power, and PUE is its reciprocal. For example, if a given data center uses about 2 watts of power for every watt that is used for computing, the PUE would be 2, making the DCiE 1/2 (or 50 percent). Both metrics are most useful when monitored regularly, and they can be used to quantify the impact of infrastructure changes. Although PUE and DCiE can be used to loosely compare data centers, direct comparison may not be accurate.
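The definitions above translate directly into a small calculation, using the article's own 2-watts-per-computing-watt example:

```python
# PUE and DCiE as defined by the Green Grid:
#   PUE  = total facility power / IT equipment power
#   DCiE = IT equipment power / total facility power (the reciprocal of PUE)

def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total power per unit of IT power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    """Data center infrastructure efficiency: fraction of power reaching IT."""
    return it_equipment_kw / total_facility_kw

# The article's example: 2 watts of total power for every watt of computing.
print(pue(2.0, 1.0))   # 2.0
print(dcie(2.0, 1.0))  # 0.5, i.e., 50 percent
```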
Install infrastructure management software. Infrastructure management software can simplify the benchmarking process, and it provides continuous monitoring of system performance via network sensors. Its real-time benchmarking can provide instant notification of any system failures or validation of efficiency improvements. Additionally, the data the software collects will facilitate and improve the effectiveness of other measures, including airflow management.
Go virtual. Virtualizing your servers can substantially reduce energy and capital costs and can also facilitate system backups. Consolidating dedicated servers into fewer virtualized units decreases the number of required systems, which can reduce IT, HVAC, and other associated energy demands. Although virtualization can be helpful in many situations, the relatively high cost of implementation and shortage of well-trained IT talent are the greatest deterrents. However, despite the costly initial investment, the payback period is between one and three years in many cases. Eaton Corp. has published a white paper, “Is Your Data Center Ready for Virtualization?” (PDF), that details the benefits and challenges associated with server virtualization.
Buy Energy Star–rated equipment. Historically, it has been very difficult to compare the relative power consumption of two servers. To assist data center operators with this evaluation, the EPA developed an Energy Star rating specification for servers. Though this is a positive development, the rating has two core limitations: It focuses solely on idle power consumption (despite the fact that few servers are ever completely idle), and it fails to address blade servers. Still, an Energy Star–rated server is likely more efficient than one that does not meet the specifications.
Check server power-consumption figures before purchasing. Another resource available to assist purchasers in comparing server power usage is Standard Performance Evaluation Corp.’s (SPEC’s) SPECpower benchmark database, which includes power-consumption figures for many servers at varying load intervals. Keep in mind, however, that though this database is a valuable tool, the performance is based on a single workload, and it is unclear how accurately the power-draw figures project real server performance.
Spin fewer disks. Implementing a massive array of idle disks (MAID) system can save up to 85 percent of storage power and cooling costs, according to manufacturers’ claims. Typically, data is stored on disks that must remain spinning (and therefore consuming energy) for their information to be retrieved. MAID systems catalog information according to how often it is retrieved and place seldom-accessed data on disks that are kept idle until the data is needed. Keeping disks idle certainly conserves energy, but one disadvantage is a decrease in system speed, because the hard disks must “spin up” before the data is accessible.
Replace spinning disks with solid state disks. There is conflicting data quantifying the energy savings associated with flash-based solid state disks (SSDs), but there is little argument that SSD energy consumption is lower than that of hard disk drives. Another clear benefit of SSDs over hard disk drives is faster read time. However, costs are still high relative to conventional storage media, and write times are longer than those of hard disk drives. As a result, there are few applications in which SSD technology is currently cost-effective without utility incentives, but as prices continue to fall, it will become economical in more applications. Data Center Dynamics gives a good overview of the advantages and disadvantages of SSDs in the article “10 Essential Facts about Solid State Disks.”
Separate hot and cold air streams. With hot-cold isolation, LBNL researchers found that air-conditioner fans could maintain proper temperature while operating at a lower speed, resulting in 75 percent energy savings for the fans alone. Most data centers suffer from poor airflow management, which has two detrimental effects. First, the mixed air recirculates around and above servers, warming as it rises, making the servers that are higher up on the racks less reliable. Second, data center operators must set supply-air temperatures lower and airflows higher to compensate for air mixing, thus wasting energy. Setting up servers in alternating hot and cold aisles is the key to managing airflow. This allows delivery of cold air to the fronts of the servers and concentration and collection of the waste heat from the backs of the racks (Figure 3). As part of this configuration, operators can close off gaps within the racks to minimize airflow through them.
Reduce bypass airflow losses. A recent LBNL study found that up to 60 percent of the total cold air supply can be lost via bypass airflow, in which cold supply air short-circuits directly into the hot-air return duct. The main causes of bypass-airflow losses are unsealed cable cutout openings and poorly located perforated tiles in hot aisles. This type of problem can be easily eliminated by identifying the bypasses through a study of the cooling system’s airflow patterns.
Bring in more fresh air. When the outdoor temperature and humidity are mild, air-side economizers can save energy by bringing in cool outside air rather than using refrigeration equipment to cool the building’s return air. Air-side economizers have two benefits: They have lower capital costs than many conventional systems, and they reduce energy consumption. According to LBNL, an air-side economizer can achieve a 60 percent cut in cooling costs at a data center using only standard, low-cost equipment. In another recent study from the same lab, researchers found that the particulate matter concentration at servers in data centers that used air-side economizers was well below the point of potential harm. The study also stated that although the humidity-control equipment used by most data centers was usually sufficient to address humidity concerns, an assessment of local climate factors during the design process is recommended.
Use evaporative cooling. When conditions are right, water can be evaporatively cooled enough in a cooling tower that the building’s compressor and refrigerant loops can be bypassed. LBNL researchers estimate that in northern climates, the opportunity for free cooling with a water-side economizer (one type of evaporative cooler) typically exceeds 75 percent of the total annual operating hours; in southern climates, free cooling may only be available during 20 percent of operating hours. While water-side economizers are operating, the free cooling that they provide can reduce the energy consumption of a chilled-water plant by up to 75 percent.
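The LBNL fractions above can be converted into annual hours of free cooling for a 24x7 facility; this is a rough sketch of that arithmetic, not a sizing calculation.

```python
# Rough estimate of annual water-side economizer ("free cooling") hours,
# based on the LBNL fractions cited above. A 24x7 data center operates
# 8,760 hours per year.

HOURS_PER_YEAR = 24 * 365

def free_cooling_hours(available_fraction):
    """Annual hours of free cooling, given the available fraction."""
    return HOURS_PER_YEAR * available_fraction

print(f"Northern climate: ~{free_cooling_hours(0.75):.0f} hours/year")
print(f"Southern climate: ~{free_cooling_hours(0.20):.0f} hours/year")
```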
Upgrade your chiller. Many general best practices for chilled-water systems also apply to cooling systems for data centers, including using variable-speed drives on chillers and pumps and optimizing chilled-water temperatures. If your facility isn’t already taking advantage of these techniques, consult with an HVAC expert—you may find that highly cost-effective savings are available.
Install ultrasonic humidification. Ultrasonic humidifiers use less energy than other humidification technologies because they do not boil the water or lose hot water down the drain when flushing the reservoir. Additionally, the cool mist absorbs energy from the supply air and causes a secondary cooling effect in a data center application with concurrent humidification and cooling requirements. In one DOE case study, when a data center humidification system was retrofitted with an ultrasonic humidifier, it reduced humidification energy use by 96 percent.
Cool server cabinets directly. Although some facility managers have a reflexive aversion to having water anywhere near their computer rooms, some cooling systems bring the water very close—all the way to the base of the server cabinets or racks. This practice allows the cabinets to be cooled much more directly and efficiently than they would be if the entire room were cooled (Figure 4). Hewlett-Packard and IBM now offer direct-cooled computer racks, and many industry experts consider this the wave of the future.