According to a report by McKinsey and the Uptime Institute, there are three reasons why:
- About 15 percent of the servers in a typical data center are 'comatose';
- Companies spend big money on virtualization software to consolidate hardware, but then don't use it;
- Companies don't buy energy-efficient computer hardware, even though they could.
But at the time, no cost had been attached to that wasted energy. Uptime now has an estimate, and the number is big: a firm with a 30,000 sq. ft. data center could save $144 million a year by managing it more carefully.
Looking at the numbers, it's obvious that most companies don't have problems of this scope. Uptime's members are 100 or so very large companies, with data centers that average 50,000 sq. ft. And the bulk of that $144 million a year in savings would come from not having to build new data centers to house new servers.
Energy czars probably only make sense, if at all, for very big companies -- Uptime told me today that, as far as it knows, only one of its 100 members actually has one. But there are some useful nuggets for everyone in Uptime's data.
- The typical PC server is responsible for the emission of 4 tons of greenhouse gases each year.
- The typical server costs at least $420 a year in electricity, which makes up about a third of its average annual running cost, even in a highly efficient data center.
- Businesses of all sizes are adopting Web-based applications, a trend expected to accelerate. Energy-conscious firms -- and all firms should be -- could start by asking for CADE (Corporate Average Data center Efficiency) information from the companies they do business with.
- If you're buying servers, choose ones with efficient power supplies and better fans, and be mindful of how much memory you configure.
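The CADE metric mentioned above is commonly described as the product of facility efficiency and IT asset efficiency, each of which is in turn an energy-efficiency figure times a utilization figure. Here's a minimal sketch assuming that definition; the function and parameter names are my own, not from the McKinsey/Uptime report:

```python
def cade(facility_energy_eff: float, facility_util: float,
         it_energy_eff: float, it_util: float) -> float:
    """Return CADE as a fraction (0.0-1.0), assuming the commonly
    cited definition: CADE = facility efficiency x IT asset efficiency,
    where each factor is (energy efficiency x utilization)."""
    facility_efficiency = facility_energy_eff * facility_util
    it_asset_efficiency = it_energy_eff * it_util
    return facility_efficiency * it_asset_efficiency

# Example: a 50%-efficient facility running at 80% of capacity,
# with 60%-efficient IT gear that is only 25% utilized:
print(f"CADE = {cade(0.5, 0.8, 0.6, 0.25):.0%}")  # prints "CADE = 6%"
```

Note how 'comatose' servers show up here: machines that draw power while doing no work drag down the IT utilization term, and with it the whole score.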