At some point, advances in computer technology will lead to machines computing at one exaflops, or a thousand petaflops. Achieving this landmark goal – known as exascale computing – is driving datacenters to meet the power and cooling requirements needed to house that kind of computational power.
Wu Feng of Virginia Tech; Terri Quinn of Lawrence Livermore National Laboratory (LLNL); Alan Lee, Corporate Vice President of Research and Advanced Development at AMD; George Chiu, Senior Manager of IBM's Blue Gene supercomputer; and Deva Bodas of Intel recently discussed exascale computing and energy efficiency. Their goal was to kick-start a conversation on the topic.
"It will take major innovation, workload management and technical advances to increase efficiency," Feng said.
Achieving energy-efficient exascale computing matters enormously to businesses: a survey by the U.S. Council on Competitiveness found that 97% of the companies surveyed could not exist or compete without high-performance computing (HPC).
According to Feng, founder of the Green500 list, which ranks the world's most energy-efficient supercomputers, efficiency is improving exponentially for the top 25 percent of datacenters, but the median is improving only slightly, meaning many datacenters are lagging. To frame the challenge, Feng cited a DARPA study that set a power target of just 20 megawatts for an exascale system. Because that goal is so ambitious, a more realistic target would be something closer to 100 megawatts.
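To put those power targets in perspective, a quick back-of-the-envelope calculation (ours, not Feng's) shows the efficiency an exascale system would need at each budget:

```python
# Hypothetical helper (not from the article): efficiency, in gigaflops
# per watt, that a one-exaflops system needs at a given power budget.

EXAFLOPS = 1e18  # floating-point operations per second

def required_gflops_per_watt(power_megawatts):
    """Efficiency needed to reach one exaflops within a power budget."""
    watts = power_megawatts * 1e6
    return EXAFLOPS / watts / 1e9  # convert flop/s per watt to GFLOPS/W

print(required_gflops_per_watt(20))   # DARPA's 20 MW target -> 50.0 GFLOPS/W
print(required_gflops_per_watt(100))  # the 100 MW goal      -> 10.0 GFLOPS/W
```

In other words, the DARPA target demands hardware five times more efficient than the already-ambitious 100-megawatt alternative.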
"Despite the fact that the technology will advance regardless, investing in energy efficiency now can help raise the bar on today's norms for power consumption," Feng said.
Terri Quinn frames the challenges of exascale computing differently. Viewed through the lens of LLNL's mission, she argues, exascale is not entirely about the computing itself.
"It's about running exascale simulations on exascale technology," Quinn said. According to Quinn, the U.S. is a leader in computer simulation, with people around the world usually using U.S. technology for some or all of their simulations.
"It's getting harder to program these systems, buy them, more costly to operate," Quinn said. "The trends for competing technology are going in a direction not favorable for simulation."
Lee discussed not the goals of exascale computing but the journey. "The path and how we achieve it is important," Lee said. According to Lee, the question is not just big data itself but how it can be used, molded, and manipulated to do things that weren't previously possible. Exascale computers could let companies make minute changes to their products, saving corporations millions of dollars.
According to Chiu, one way to improve energy efficiency is to exploit a processor's frequency and voltage scaling. Halving a processor's frequency halves its performance, but because supply voltage can be lowered along with frequency, and dynamic power scales with frequency times voltage squared, power consumption drops by roughly a factor of eight.
"In return, this allows for eight times the processors to be used for the same amount of energy, which increases productivity by a factor of four," Chiu said.
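Chiu's arithmetic can be sketched with the classic DVFS model, in which dynamic power scales with the cube of frequency (P ∝ f × V², with V roughly proportional to f):

```python
# A sketch of the arithmetic behind Chiu's claim, assuming the classic
# DVFS model where voltage tracks frequency, so power ~ frequency ** 3.

def scaled_throughput(freq_scale):
    """Relative throughput at a fixed power budget after scaling frequency.

    Power per processor scales as freq_scale ** 3, so a fixed budget
    funds 1 / freq_scale**3 processors, each running at freq_scale speed.
    """
    processors = 1 / freq_scale ** 3   # half frequency -> 8x processors
    return processors * freq_scale     # each at freq_scale performance

print(scaled_throughput(0.5))  # 4.0 -- Chiu's factor-of-four productivity gain
```

Eight processors each running at half speed deliver four times the aggregate throughput for the same energy, exactly as Chiu describes.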
Bodas contends that energy efficiency lies in making power consumption proportional to the workload. At any moment, some nodes draw more energy than others; a high-performance computer might have 20,000 nodes, each either busy or idle. If power consumption tracks the workload, facility-level draw will swing widely.
"Utilities don't like that, so datacenters need to make sure they don't go over a certain percentage of use," Bodas said. "There needs to be constant power."
Datacenters already have safeguards to keep rising power consumption from climbing too high. But ramping power down must be controlled as well, and without deliberately burning power just to keep the draw steady.
"The key to solving this dilemma is figuring out a way to evenly allocate power consumption," Bodas said.
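A hypothetical illustration (ours, not from the talk) of the kind of allocation Bodas describes: given per-node power demands and a facility budget, scale every node's allocation uniformly so the total draw never exceeds the cap:

```python
# Hypothetical power-capping sketch: shave all node demands evenly
# so the facility never exceeds its power budget.

def allocate_power(demands_watts, budget_watts):
    """Scale node power demands down uniformly to fit a facility budget."""
    total = sum(demands_watts)
    if total <= budget_watts:
        return list(demands_watts)  # under budget: grant every request
    scale = budget_watts / total    # over budget: shave all nodes evenly
    return [d * scale for d in demands_watts]

# Three nodes asking for 1200 W total against a 900 W cap:
print(allocate_power([500, 400, 300], 900))  # [375.0, 300.0, 225.0]
```

Real power-management schemes are far more sophisticated, but the principle is the same: spread the budget across nodes so the total draw stays flat.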