HP envisions data centers 100 times more efficient than current designs – Sander Olson interviews Partha Ranganathan

Partha Ranganathan is Hewlett Packard’s Principal Investigator for HP’s exascale datacenter project and is an expert on data center design. Modern data centers are not nearly as efficient as they could be and waste prodigious amounts of energy. Unless these inefficiencies are addressed, exascale supercomputers and data centers will never be feasible. In an interview with Sander Olson, Ranganathan discusses the myriad challenges involved in reaching exascale computing in data centers and supercomputers, and describes how data centers could be made 100 times as efficient as current ones.

Partha Ranganathan
Question: HP is doing extensive R&D related to cloud computing. What is causing the computing industry to shift to cloud computing?
Answer: The computer industry is quickly migrating to the “cloud” – toward having the preponderance of data and computation take place in data centers and servers rather than locally. Data has actually been growing at a rate of 4 to 10 times that of Moore’s law, and as a result more and more work is being done in the cloud. These trends will continue for the foreseeable future, making the rise of cloud computing inevitable.

Question: You have argued that computing is entering the “insight era”. What do you mean by that?
Answer: Huge amounts of data are now being continuously generated. Gleaning useful information from this deluge of digital data, and deriving insights from it, is the “killer app” for cloud computing. So the industry is transitioning from the information era to the insights era.
Question: What is the Exascale datacenter project?
Answer: The computing industry has transitioned from gigascale to terascale to petascale. The next phase is exascale, which we should reach within the next decade. Exascale is 10 to the 18th power, a one followed by 18 zeros, so it is a huge number. The technical challenges that we need to surmount in order to reach exascale are daunting, which is why we created the exascale datacenter project.
Question: What exascale challenges are the most vexing?
Answer: Although many technical issues need to be surmounted to reach exascale, the power issue is in many respects the most important. We will need to increase power efficiency by a factor of 10 in order to reach exascale. Large data centers currently consume megawatts and have power bills in the millions of dollars per year, so scaling up to exascale simply by adding more of everything just isn’t feasible. There is a practical limit, in effect a “power wall”, that must be addressed.
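To put the power wall in concrete terms, here is a minimal back-of-envelope sketch; the efficiency figure is an illustrative assumption rather than a number from HP.

```python
# Back-of-envelope sketch of the "power wall".
# The efficiency figure is an illustrative assumption, not an HP number.

EXAFLOP = 1e18                    # 10^18 floating-point operations per second
flops_per_watt_today = 2e9        # assume ~2 GFLOPS per watt at the system level

power_needed_watts = EXAFLOP / flops_per_watt_today
print(f"Exaflop at assumed current efficiency: {power_needed_watts / 1e6:.0f} MW")

# Even a 10x efficiency improvement still leaves tens of megawatts, which is
# why scaling up "by adding more of everything" is not a viable path.
print(f"After a 10x efficiency gain: {power_needed_watts / 10 / 1e6:.0f} MW")
```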
Question: Are there other “walls” that need to be surmounted in order to reach exascale?
Answer: A modern data center can have 80 million managed objects, and there is a limit to the number of objects that can be concurrently managed. We think of that as a “manageability wall”. We also think in terms of a “reliability wall”: as the number of components in a data center increases, keeping everything working reliably becomes increasingly difficult. There are also cost issues: building, powering, and maintaining an exascale data center on a reasonable budget is a challenge in itself.
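As a rough illustration of why the reliability wall matters at the 80-million-object scale, the following sketch uses an assumed per-component failure rate; the MTBF value is illustrative, not measured.

```python
# Illustrative sketch of the "reliability wall": the more components a data
# center contains, the more often *something* in it is failing.
# The per-component MTBF below is an assumption for illustration only.

component_mtbf_hours = 1_000_000          # assume each object fails about once per million hours

for n_objects in (1_000, 1_000_000, 80_000_000):
    # With independent failures, the whole-system mean time between failures
    # is roughly the per-component MTBF divided by the component count.
    system_mtbf_minutes = component_mtbf_hours / n_objects * 60
    print(f"{n_objects:>12,} managed objects -> a failure roughly every "
          f"{system_mtbf_minutes:,.1f} minutes")
```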
Question: The heart of a data center is its servers. How will servers evolve during the next decade?
Answer: First and foremost, servers need to become more efficient. By 2018, the energy spent moving data could account for roughly 50-80% of the energy used by the entire system. Current servers operate using deep hierarchies of memory – registers, L1 cache, L2 cache, main memory, and disk storage. This deep hierarchy spends much of its energy simply moving data between levels, which makes it poorly suited to energy-efficient operation. We need to devise a better memory paradigm.
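The claim that data movement can eat 50-80% of system energy becomes clearer by comparing the cost of an arithmetic operation with the cost of fetching its operands; the per-operation energies in this sketch are illustrative assumptions of the kind often cited in exascale discussions, not HP figures.

```python
# Minimal sketch of why data movement dominates the energy budget: fetching
# operands can cost far more than the arithmetic performed on them.
# The picojoule values are illustrative assumptions, not measured figures.

ENERGY_PJ = {
    "64-bit floating-point operation": 20,
    "read 64 bits from on-chip cache": 50,
    "read 64 bits from off-chip DRAM": 1_000,
}

flop_pj = ENERGY_PJ["64-bit floating-point operation"]
for action, pj in ENERGY_PJ.items():
    print(f"{action:<34} {pj:>6} pJ  (~{pj / flop_pj:.0f}x the operation itself)")
```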

Question: How could memory be made more efficient?
Answer: One way would be to use memristors in memory. Memristors are two-terminal components developed by HP Labs, the company’s central research arm, that could serve either as memory or as logic. Memristors could be used to combine logic and memory in the same place, instead of keeping memory in a separate section. By combining logic and memory, we could essentially eliminate the memory bottleneck that plagues current computer architectures.
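As background on the device itself, here is a minimal sketch of the memristor’s memory behavior using the linear ion-drift model HP Labs published with the device (Strukov et al., Nature 2008); the parameter values are illustrative and say nothing about HP’s actual exascale designs.

```python
# A minimal sketch of why a memristor can behave as nonvolatile memory.
# In the linear ion-drift model (Strukov et al., Nature 2008), resistance
# depends on the net charge that has flowed through the device, so the
# state persists when power is removed. Parameter values are illustrative.

R_ON, R_OFF = 100.0, 16_000.0     # fully doped / undoped resistance (ohms)
D = 10e-9                         # device thickness (m)
MU_V = 1e-14                      # dopant mobility (m^2 / (V s))

def memristance(q):
    """Resistance after a net charge q (coulombs) has passed through."""
    doped_fraction = min(max(MU_V * R_ON * q / D**2, 0.0), 1.0)
    return R_ON * doped_fraction + R_OFF * (1.0 - doped_fraction)

for q in (0.0, 5e-5, 1e-4):
    print(f"q = {q:.0e} C -> R = {memristance(q):7,.0f} ohms")
```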
Question: Could memristors eliminate the need for magnetic hard drives?
Answer: Magnetic hard drives won’t disappear; they will simply move to the next level of storage, which is archival. The cost per bit for magnetic hard drives will probably always be lower than for a comparable solid-state drive. But by eliminating spinning disks from most mainstream usage, we can simultaneously reduce power consumption and increase overall performance.
Question: How much electricity does a large data center currently consume? To what extent can power usage be reduced?
Answer: A large data center can consume megawatts. The carbon footprint of the world’s data centers is comparable to that of the aviation industry, and worldwide the industry spends about $40 billion annually on running and cooling data centers. A large data center will cost millions of dollars per year in electricity alone. We are confident that, in the short run, relatively simple and straightforward approaches can cut electricity costs in half. But if we look at the data center holistically, we can envision eventually reducing power by a factor of 10 or even 100.
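The rough arithmetic connecting “megawatts” to “millions of dollars per year” looks like this; the facility size and electricity price below are illustrative assumptions.

```python
# Rough arithmetic connecting "megawatts" to "millions of dollars per year".
# The facility size and electricity price are illustrative assumptions.

facility_power_mw = 10            # assumed average draw of a large data center
price_per_kwh = 0.10              # assumed industrial electricity price (USD)
hours_per_year = 24 * 365

annual_cost = facility_power_mw * 1_000 * hours_per_year * price_per_kwh
print(f"Annual electricity bill: ~${annual_cost / 1e6:.1f} million")

# Halving consumption, the near-term target, saves millions per facility;
# a 10x or 100x reduction changes the economics entirely.
print(f"After a 2x reduction:    ~${annual_cost / 2 / 1e6:.1f} million")
```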
Question: Has HP done any research into using GPUs in data centers?
Answer: Yes, we already offer GPU computing modules. We recently presented a conference paper on the role of GPUs in data centers. GPUs provide the most efficient hardware for performing certain types of calculations, and they will be one element in the “hierarchy of computing” I mentioned.
Question: Is HP doing any research directly related to building an exascale supercomputer?
Answer: Although HP is not currently involved in the R&D effort to build an exascale supercomputer, there is a considerable amount of overlap between the problems faced in building and operating a supercomputer and a data center. At the end of the day, we see ourselves innovating in the area of computing more than in just computers. So most of the issues pertaining to data center computing are directly applicable to high performance computing.
Question: Can you foresee any new uses for data centers during the next decade?
Answer: I can see a number of new uses for data centers in the next ten years. During the next decade, I see a paradigm shift in computing, where “intelligent disks” merge data and computation. The data will be able to examine itself and derive insights. This is not only far more efficient than current memory paradigms, but it also allows data centers to view digital information in an entirely new light. This computing paradigm is actually similar to the way the human brain works, and it opens up a plethora of potential applications.
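To make the “intelligent disk” idea concrete, here is a minimal sketch of the data-movement difference between pulling raw data to a host and computing next to the storage; the dataset size and query selectivity are hypothetical.

```python
# Illustrative sketch of the "intelligent disk" idea: filtering where the
# data lives ships only results across the fabric, instead of pulling every
# raw byte to a host first. Dataset size and selectivity are assumptions.

dataset_bytes = 10 * 1024**4          # 10 TiB of raw records on the drive
selectivity = 0.001                   # fraction of the data a query actually needs

conventional_bytes_moved = dataset_bytes                      # ship everything, filter at the host
intelligent_bytes_moved = int(dataset_bytes * selectivity)    # filter at the storage, ship results

print(f"Conventional:     {conventional_bytes_moved / 1024**3:>10,.0f} GiB moved")
print(f"Intelligent disk: {intelligent_bytes_moved / 1024**3:>10,.0f} GiB moved")
```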
