
Demand response: using your data center to make money

April 30, 2019


Raised floor in a data center

Data centers waste a substantial amount of energy, and money. The reason is that they are designed for capacity, performance and reliability, usually at the expense of efficiency. Such waste unnecessarily increases capital and operational expenditures and can exhaust finite resources (particularly power and space), creating a situation where the organization outgrows its data center.

Fortunately, there are several ways IT and facility managers can cooperate to save money by improving overall data center efficiency, sometimes dramatically, without adversely impacting capacity, performance or reliability. There are also ways to save, and potentially even to earn money with the data center, by participating in demand-response programs and ancillary services offered by electric utilities.

First, the savings

For the data center facility itself, the most significant savings are normally found in the cooling system. In a typical data center only about half of the available power is actually used by IT equipment, with most of the rest consumed by cooling. Much of that power can be reclaimed by eliminating cooling inefficiencies, upgrading the cooling system to allow for variable cooling and/or making greater use of outside air.
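The "half of the power" figure corresponds to a Power Usage Effectiveness (PUE) of about 2.0. A minimal sketch of the arithmetic, using hypothetical numbers rather than measurements from any particular facility:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

# A facility where only half the power reaches IT equipment (PUE ~ 2.0).
total_kw, it_kw = 1000.0, 500.0
before = pue(total_kw, it_kw)       # 2.0

# Cutting cooling/overhead power from 500 kW to 200 kW for the same
# IT load (e.g. via variable cooling and outside-air economization):
after = pue(it_kw + 200.0, it_kw)   # 1.4

reclaimed_kw = total_kw - (it_kw + 200.0)
print(before, after, reclaimed_kw)  # 2.0 1.4 300.0
```

The 300 kW reclaimed here is power that can serve new IT load instead of being spent on cooling.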

Another way to reduce cooling power consumption is to adopt a hot/cold-aisle configuration and increase the cold-aisle inlet temperature to 80.6°F (27°C), as recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). To avoid creating hot spots that waste power and cause outages, it may be necessary to balance the equipment, then calibrate and continuously monitor the cold-aisle temperature to maximize cooling efficiency and minimize problems.

With the IT equipment, the most significant savings are normally found in the biggest source of waste in most data centers: poor server utilization. An obvious first step is to consolidate and virtualize the servers. Virtualizing the servers can increase overall utilization from about 10% (typical of dedicated servers) to between 20% and 30%, and to over 50% with more dynamic management systems. Successful consolidation and virtualization initiatives can also reclaim a considerable amount of rack space and stranded power.
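The consolidation arithmetic is simple: if the total amount of work stays constant, the number of hosts needed scales with the ratio of utilizations. A sketch using the utilization figures above and an illustrative fleet size:

```python
import math

def hosts_needed(n_servers: int, util_before_pct: int, util_after_pct: int) -> int:
    """Hosts needed after consolidation, assuming total work is constant.
    Utilizations are given as integer percentages; fleet size is illustrative."""
    return math.ceil(n_servers * util_before_pct / util_after_pct)

# 100 dedicated servers at ~10% utilization, virtualized to run at 50%:
print(hosts_needed(100, 10, 50))  # 20
# The same fleet consolidated to a more conservative 25% target:
print(hosts_needed(100, 10, 25))  # 40
```

Either way, most of the rack space and stranded power of the original fleet is reclaimed.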

Even the most virtualized and recently refreshed server configurations, however, waste power during periods of low application demand. Total server power consumption can be reduced by up to 50% by matching online capacity (measured in cluster size) to actual load in real time. Run-books can be used to automate the steps involved in resizing clusters and/or deactivating and reactivating servers, whether on a predetermined schedule or dynamically in response to changing loads. These dynamic "stretchable" cluster configurations are the most energy-efficient way to support variable application demand: all active servers run at 70-80% utilization while capacity is added or removed as needed.
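As an illustration of matching online capacity to load, the sketch below sizes a cluster so each active server lands near the 70-80% utilization band. The throughput figures and function names are hypothetical, not any vendor's API:

```python
import math

def cluster_size(load_tps: float, per_server_tps: float,
                 target_util: float = 0.75) -> int:
    """Servers needed so that each active server runs near the target
    utilization. At least one server stays online for availability."""
    return max(1, math.ceil(load_tps / (per_server_tps * target_util)))

# As load varies through the day, capacity follows it; idle servers
# can be deactivated by a run-book and reactivated when load returns.
for load in (500.0, 3000.0, 9000.0):
    print(load, cluster_size(load, per_server_tps=1000.0))
```

A run-book would wrap this calculation with the operational steps (drain, power down, power up, rejoin cluster) for each resize.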

An increasingly important aspect of power-conservation efforts is the energy efficiency of the individual servers themselves, and the most efficient servers are the ones with the highest number of transactions per second per Watt (TPS/Watt). The PAR4 Efficiency Rating system used in the Underwriters Laboratories' UL2640 standard is the most accurate means for IT managers to compare the transactional efficiency of legacy servers with newer ones, and newer models of servers with one another. Indeed, assessing the energy efficiency of servers should now be considered a best practice during every hardware refresh cycle and whenever adding capacity.
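The TPS/Watt comparison itself is straightforward arithmetic. The figures below are made up purely to show the calculation; they are not actual PAR4 or UL2640 measurements:

```python
def tps_per_watt(tps: float, watts: float) -> float:
    """Transactional efficiency metric discussed above (TPS/Watt)."""
    return tps / watts

# Hypothetical figures for a legacy server vs. a current-generation one:
legacy = tps_per_watt(40_000, 400)   # 100.0 TPS/Watt
newer = tps_per_watt(120_000, 300)   # 400.0 TPS/Watt
print(newer / legacy)                # 4.0 -- the newer box does 4x the work per Watt
```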

Making money with your data center

Now for the "monetizing" opportunities. The wholesale price of electricity becomes extraordinarily high during periods of peak demand, which typically occur late in the afternoon on hot summer days. Rather than pay these high spot-market rates, utilities find it attractive to pay commercial and industrial customers to reduce demand. The payment for this "demand response" is based on the reduction from historical consumption during similar peak-demand events, and it can be significant, depending on local market conditions.
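A back-of-the-envelope settlement might look like the sketch below. The baseline-minus-metered formula and the rate are illustrative only; real programs settle differently by market and utility:

```python
def dr_payment(baseline_kw: float, metered_kw: float,
               hours: float, rate_per_kwh: float) -> float:
    """Payment for load curtailed below the historical baseline during
    a demand-response event. Formula and rate are hypothetical."""
    curtailed_kw = max(0.0, baseline_kw - metered_kw)
    return curtailed_kw * hours * rate_per_kwh

# Shedding 300 kW below baseline for a 4-hour event at $0.50/kWh:
print(dr_payment(1000.0, 700.0, 4.0, 0.50))  # 600.0
```

A data center that can reliably shed hundreds of kilowatts on notice can turn several such events per summer into meaningful revenue.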

One obvious way to cut back consumption is to conserve on cooling by letting the temperature rise for a few hours during a demand-response event. If the data center is already operating at or near the higher temperatures recommended by ASHRAE, the savings may be too small and/or the risk too high. With two hours' notice, a data center can pre-cool: temporarily decrease the temperature, then let it drift up during the demand-response event. There is some risk of hot spots forming, which can be mitigated by temporarily power-capping some or all servers and/or by reducing the cluster size for less critical applications.
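The two-hour-notice procedure described above could be captured in a run-book along these lines. This is a hypothetical sketch; the step names and timings are illustrative, not any particular product's automation:

```python
from datetime import datetime, timedelta

def demand_response_runbook(event_start: datetime, event_hours: float) -> list:
    """Hypothetical run-book schedule for a demand-response event with
    two hours' notice: pre-cool, then cap power and relax setpoints."""
    steps = [
        (event_start - timedelta(hours=2), "pre-cool: lower cold-aisle setpoint"),
        (event_start, "raise setpoint; apply server power caps"),
        (event_start, "shrink clusters for less critical applications"),
        (event_start + timedelta(hours=event_hours), "restore setpoints and caps"),
    ]
    return [f"{t:%H:%M} {action}" for t, action in steps]

# A 2-hour event starting at 3 p.m. on a hot summer afternoon:
for line in demand_response_runbook(datetime(2013, 6, 1, 15, 0), 2.0):
    print(line)
```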

Finally, a demand-response event is the perfect time to test the backup generator. Such load tests should be performed periodically anyway (to verify operation and refresh the fuel), and the payment from the utility (again, depending on the local market) could well cover the cost of the replacement fuel and then some. The same applies to failover to another site: if automated, it can be done on the fly and would generate payments as well.

About the Author: Clemens Pfeiffer is the CTO of Power Assure and a 25-year veteran of the software industry, where he has held leadership roles in process modeling and automation, software architecture and database design, and data center management and optimization technologies.

Power Assure is one of the companies sponsoring next week's DatacenterDynamics Converged conference in San Francisco.
