Monday, November 19, 2007

Data Center Energy Management: Reliability vs. Efficiency

If you have been building or operating data centers for the last 20 years or so, then you know that the mantra has long been "reliability, reliability, reliability." While designs have changed in response to new IT technologies and their implications for infrastructure provisioning, one thing has remained constant: uptime is king. Data center, and hence energy, reliability, expressed as a percentage of time, is typically measured at the "five nines" level. To get there we build redundant systems behind redundant systems, and sometimes solve problems through brute force. Many older data centers are grossly inefficient in today's terms, with little or no control over server utilization and its attendant cooling and power demands.
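To put those uptime percentages in concrete terms, here is a quick back-of-the-envelope sketch. The availability tiers shown are standard illustrations, not figures from any specific facility:

```python
# Rough downtime-per-year arithmetic for "N nines" availability.
# The availability levels below are illustrative assumptions.

MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of allowed downtime per year at a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, avail in [("three nines", 0.999),
                     ("four nines", 0.9999),
                     ("five nines", 0.99999)]:
    print(f"{label}: {downtime_minutes_per_year(avail):.1f} min/year")
```

At five nines, a data center is allowed roughly five minutes of downtime per year, which is why operators have historically been willing to pay almost any energy cost to stay up.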

EPA estimates that U.S. data centers currently consume 1.5% of all U.S. generated electricity at a cost of $4.5 billion. It expects that cost to grow to $7.5 billion by 2011 and to require construction of ten additional large generation facilities. Since the EPA is in the business of discouraging this type of development, expect it to be very active in helping you solve your energy problem. Mandated server utilization and infrastructure efficiency levels are already in the works and headed for Congressional desks; expect new regulations coming to a data center near you soon. Read the EPA report here.

That isn't to say that we are without options. Most utilities complain that their rebate programs for infrastructure modernization are underutilized. Server management tools and more efficient, "lean" cooling system technologies are beginning to arrive in a meaningful fashion. And distributed generation technologies such as fuel cells are finally achieving critical mass and becoming more affordable.

The point is this: Your data center has most likely been part of the problem. Now it is time for it, and you, to be part of the solution. The need is here, as are the opportunities. Are you?
