Green Computing Report

The number one news source for energy-efficient
and eco-friendly computing in the datacenter


November 06, 2012

Want A More Efficient Data Center? Get Rid Of Your Old One.

AMD executive Andrew Feldman says the only way to fix your datacenter's efficiency is to move to a new one

Andrew Feldman has some simple advice for anyone worried about how to increase the efficiency of their datacenters. Sure, you want to upgrade old servers, improve the infrastructure, eliminate zombie servers (which McKinsey says may average 30% of the total) and virtualize the datacenter. But how do you find the defective gear and upgrade?

Feldman's solution: “Move.”

Feldman, co-founder of microserver maker SeaMicro and now corporate vice president and general manager of AMD's Data Center Solutions group, says it's critical that datacenter managers upgrade and modernize. But the process is something akin to trying to clean out and organize a house you've been living in for 50 years. There's just too much accumulated junk, too much legacy equipment, too many outdated electrical circuits and leaky plumbing. The only way to do it right is to start a new facility somewhere else. “When people change datacenters, they're appalled at what they discover,” he says. “They throw a huge amount of stuff away, just as you do when you move homes.”

Feldman gave this assessment at the 2012 Data Center Efficiency Summit, sponsored by the Silicon Valley Leadership Group, in Sunnyvale, Calif., on October 24. I also caught up with him after the talk to get a little more perspective.

Feldman refers to the problem as the “nuts and bolts pain of actually running a datacenter. Retiring old facilities is clearly something we need to think about,” he says. An idle zombie server can still draw two-thirds as much power as one under full load. “As an industry, we need to move up to this challenge.”
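To put a rough number on that claim, here's a back-of-the-envelope sketch in Python. The two-thirds idle-power ratio and the 30% zombie estimate come from the article; the fleet size and per-server wattage are purely illustrative assumptions.

```python
# Back-of-the-envelope estimate of power wasted by idle "zombie" servers.
# The 2/3 idle-power ratio and 30% zombie share come from the article;
# the fleet size and full-load wattage below are assumed for illustration.

FLEET_SIZE = 10_000      # assumed number of servers in the datacenter
FULL_LOAD_WATTS = 300    # assumed draw per server at full load
IDLE_RATIO = 2 / 3       # an idle server still draws ~2/3 of full load
ZOMBIE_SHARE = 0.30      # ~30% of servers doing no useful work (McKinsey)

zombies = FLEET_SIZE * ZOMBIE_SHARE
wasted_watts = zombies * FULL_LOAD_WATTS * IDLE_RATIO
wasted_mwh_per_year = wasted_watts * 24 * 365 / 1_000_000

print(f"{zombies:.0f} zombie servers draw about {wasted_watts / 1000:.0f} kW")
print(f"That is roughly {wasted_mwh_per_year:.0f} MWh per year of wasted energy")
```

Under these assumed figures, 3,000 zombie servers burn about 600 kW continuously — several thousand megawatt-hours a year producing nothing.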

The rise of giant datacenters from the likes of Google, Amazon, eBay, and cloud companies that host services for smaller customers was a perfectly logical way to meet burgeoning demand. The first wave of datacenters was designed to ensure nearly 100-percent uptime for their social networks, websites and cell phones, whatever the cost. Feldman brings up a counter-analogy to illustrate his point: the dreaded call center. “When you call American Express or United Airlines, they say, 'Please hold.' Well, they know exactly how to ensure that you never have to hold: Hire a lot of people. United would have to have a huge number of people always working at the call centers in order to keep anyone from sitting on hold. That's exactly what we do with computing. We have an enormous amount of computing power so that at absolute peak demand, nobody waits.”
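The call-center analogy can be made concrete: if capacity is sized for the single worst hour, average utilization is just the ratio of mean to peak demand. A tiny Python sketch, using an invented 24-hour demand curve, shows how much of the fleet sits idle on an average day:

```python
# Sketch of the peak-provisioning problem Feldman describes: capacity is
# sized for the busiest moment so nobody ever waits, which leaves the
# average machine underused. The hourly demand curve is invented.

hourly_demand = [20, 15, 12, 10, 12, 18, 30, 55, 70, 80, 85, 90,
                 95, 100, 98, 92, 88, 80, 75, 65, 55, 45, 35, 25]

peak = max(hourly_demand)               # capacity needed to avoid any waiting
average = sum(hourly_demand) / len(hourly_demand)
utilization = average / peak            # how busy the fleet is on average

print(f"Provision for peak demand: {peak} units of capacity")
print(f"Average demand {average:.1f} -> average utilization {utilization:.0%}")
```

With this made-up curve the fleet runs at barely half its capacity on average, which is the inefficiency that load shifting and deep-sleep modes aim to recover.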

Nobody wants datacenters to start making customers--or compute jobs--wait, of course. When Google or Facebook or Amazon goes down, customers are irate. It makes the front page of newspapers (and maybe online news sources, if they can connect). But it is possible to preserve uptime by becoming more efficient. “The problem is not that the cloud is inefficient,” he's quick to point out. “Most of the time it's much more efficient than 12 servers at a medium-sized business. Startups went to the cloud, and that was a movement toward efficiency.” But it's now time to make datacenters more efficient. “We have to invent things that help the big guys get better.”

The industry is moving into a new era, one with virtualized data farms, more efficiently designed facilities, new energy and cooling technology, and servers that can be put to sleep in a low power mode and woken up quickly. In the complex infrastructure of giant datacenters, retrofitting just may not be the solution. Creating a new datacenter may be expensive, but the payoff in efficiency can be enormous, as can be the ability to select a new location with the right natural resources, power grid and climate. 

As an executive who's involved in the design and construction of microservers, Feldman is naturally keen on the idea of taking a new approach to the server technology itself. But he points out that some obvious technologies still haven't been incorporated into systems. “Flash technology has yet to make it into the server world,” he says. Our cell phones, laptops and tablet computers can be put into a low-power sleep mode and restarted instantaneously, so why not servers? “It's a challenge for the hardware makers,” he says. “Deep sleep is on everybody's agenda right now. Unfortunately, in the chip business, when you have an idea, it's three to four years before you can implement it into an integrated circuit.”

And of course, as SeaMicro's co-founder, he's really big on the use of microservers, which are designed to maximize I/O and storage capacity rather than CPU power in order to make virtualization and distributed systems easier. He likens the technology to colonies of ants working in concert. “They're some of the most efficient organisms in the forest for moving things from one place to another. You can't move boulders, but you can move a huge number of things.”

Considering the September New York Times article, “Power, Pollution and the Internet,” there can also be a public relations benefit from going green and reducing a company's carbon footprint. But that's not generally going to be the driving factor for most companies. There have to be compelling business reasons. Still, those aren't difficult to find. In the information age, he says, “computers are a company's manufacturing floor. Lower the cost of datacenters and computing, and they can manufacture their products more cheaply.”

It will, however, take some time to make the transition, to make sure that the power load goes to the servers needed to complete jobs and not to idle systems, or even to move all of a datacenter's traffic to another facility in an emergency. “These are hard problems,” he says. “[Load] balancing is really hard. It took the electric utility industry 70 years to get it right.”

Fortunately, technology moves faster than that. But the industry may have to start moving in order to get the job done.


