Interview of Geoffrey Fox, Director of FutureGrid, by Sander Olson

Here is an interview of Geoffrey Fox by Sander Olson. Dr. Fox is a Professor of Informatics at Indiana University, head of the university's Pervasive Technology Institute, and Director of its Community Grids Laboratory. He is also the director of the FutureGrid project, a four-year, $15 million effort funded primarily by the National Science Foundation to determine the best ways to link supercomputers together.

Question 1: Indiana University was recently awarded $15 million by the NSF to establish FutureGrid. What is FutureGrid?

Answer: FutureGrid is a part of TeraGrid. The aim of the FutureGrid project is to support the development of new system software and applications, which can be tried out in a simulated environment in order to accelerate the adoption of new technologies in scientific computing. The project will accomplish this by building several computing clusters at different locations, tied together by a sophisticated virtual machine- and workflow-based simulation environment. This will allow us to research cloud computing, multicore computing, new algorithms, and new software paradigms.

Question 2: How is FutureGrid different from the TeraGrid project, which was launched in 2001?

Answer: TeraGrid is a production system that is specifically designed to provide computing resources. FutureGrid, by contrast, is oriented towards developing tools and technologies rather than providing actual computational capacity.

Question 3: What is the Pervasive Technology Institute?

Answer: The Pervasive Technology Institute is a project funded by the Lilly Foundation. It brings together Indiana University's School of Informatics and Computing with the university's information technology group. The PTI is very helpful for FutureGrid, because the project requires close collaboration between infrastructure staff and researchers.

Question 4: Indiana University appears to be heavily emphasizing high-performance computing. Why is that?

Answer: High-performance computing is becoming increasingly important to most scientific endeavors and technological development. All scientific fields are being flooded with new data, whether from the accelerator at CERN or from a new gene-sequencing device. A torrent of information is being produced on a regular basis, and it simply isn't feasible to analyze this data properly, in a manageable period of time, on small computer systems.

Question 5: Effectively programming multicore processors is a huge challenge. Is this problem truly intractable?

Answer: I don’t believe that programming multicore processors is a serious problem in the scientific computing and data analysis fields. On the contrary, multicore chips fit naturally with most scientific computing requirements. We can’t make many consumer applications, such as Microsoft Word, take advantage of multiple cores, but most scientific problems are naturally parallel and are therefore amenable to multicore computing.
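
To make the "naturally parallel" point concrete, here is a minimal sketch (not from the interview) of the kind of data-parallel analysis that spreads across cores with almost no extra effort; the analyze function and the synthetic data are placeholder assumptions, not anything specific to FutureGrid or TeraGrid.

```python
# Sketch: an embarrassingly parallel analysis spread across CPU cores.
from multiprocessing import Pool
import math

def analyze(chunk):
    # Stand-in for a per-chunk scientific computation.
    return sum(math.sqrt(x) for x in chunk)

if __name__ == "__main__":
    # 32 independent chunks of synthetic data, one map task per chunk.
    data = [range(i * 10_000, (i + 1) * 10_000) for i in range(32)]
    with Pool() as pool:            # one worker process per available core
        partial = pool.map(analyze, data)
    print(sum(partial))
```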

Question 6: What impact is GPU computing having on these grid projects?

Answer: GPUs are taking the multicore revolution to its extreme by packing dozens of cores onto a chip. Unlike CPUs, GPUs are designed to maximize floating-point performance. They have limitations, but they are still quite useful. We will have GPUs available on FutureGrid, and the National Science Foundation recently gave a grant to Oak Ridge National Laboratory and Georgia Tech to put an experimental GPU-based system on TeraGrid.
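
For readers unfamiliar with the programming model, the sketch below shows the flavor of offloading a floating-point-heavy array operation to a GPU. It uses the CuPy library as a NumPy-compatible stand-in and is purely illustrative; it is not part of FutureGrid's or TeraGrid's software stack.

```python
# Sketch: SAXPY (a*x + y) on a GPU via CuPy's NumPy-like arrays.
import cupy as cp   # requires a CUDA-capable GPU and the cupy package

n = 10_000_000
a = 2.5
x = cp.random.rand(n).astype(cp.float32)   # arrays allocated in GPU memory
y = cp.random.rand(n).astype(cp.float32)

z = a * x + y                # elementwise FLOPs execute on the GPU cores
print(cp.asnumpy(z[:5]))     # copy a small slice back to host memory
```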

Question 7: Do you believe that an exaflop supercomputer will ever be built?

Answer: Yes. In the eighties, scientists wondered whether a teraflop supercomputer would ever be built, and in the nineties, many doubted that a petaflop supercomputer would be feasible. But building a petaflop supercomputer has been essentially trivial, from a technology standpoint. So an exaflop supercomputer is bound to be built. How we will deal with memory, power, and interconnect issues is not obvious to me, but it seems reasonable that we can expect exaflop computers by 2020.
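
As a back-of-envelope check on that timeline (an assumption-laden sketch, not a calculation from the interview), one can start from the first petaflop systems around 2008 and assume peak performance keeps doubling every year or so:

```python
import math

# Assumptions: first petaflop machines around 2008, and peak performance
# doubling every 1.0 to 1.5 years (both are assumptions, not data).
start_year = 2008
factor = 1000                                  # petaflop -> exaflop
for doubling_time in (1.0, 1.5):
    years = math.log2(factor) * doubling_time
    print(f"doubling every {doubling_time} yr: exaflop around {start_year + years:.0f}")
# prints roughly 2018 and 2023, bracketing the 2020 estimate
```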

Question 8: Will cloud computing eventually obviate the need for organizations to buy and maintain their own computing infrastructures?

Answer: Cloud computing will be the dominant form of computing. There will be a mix of public clouds, run by companies like Amazon, and private clouds run by various organizations. The data centers that run the clouds will be placed wherever it is most cost-effective to operate them, so an American university may end up having its own cloud computing center running in Alaska or India.

Question 9: Researchers now have access to a million times more computing power than they had a couple of decades ago. What effect is this having on scientific disciplines?

Answer: This increased computing power is necessary for scientists to advance their fields. Several decades ago researchers were limited to simple 2D simulations, because that was all that could be done on the megaflop computers of that era. Now we can do full-scale, fine-grained, 3D dynamic simulations, and this will lead to better weather forecasts, more accurate studies of the early universe, and better quantum dynamics models. High-performance computing is enabling a whole new set of simulations, and it is absolutely essential for analyzing the torrent of data that is being continually produced. There is actually a Moore’s law for instruments: the instruments themselves are generating vastly more information.

Question 10: The recent Roadrunner supercomputer cost $133 million. Are you concerned that the cost of building and maintaining supercomputers is becoming prohibitive?

Answer: Although the fastest supercomputers are getting more expensive, every other segment of computers is following Moore’s law and getting cheaper every year. An exaflop computer will be extremely expensive but most researchers won’t have access to it anyway. The majority of scientists will access vast yet inexpensive computing resources via cloud computing, which is a far more cost-effective computing paradigm for most tasks.

Question 11: To what extent is high-performance computing limited by bandwidth and interconnect issues?

Answer: It is true that current clouds run certain tasks inefficiently, due to the lack of specialized hardware. But communication overhead is not prohibitive for most computing tasks. Interconnect speeds are increasing, bandwidth costs are decreasing, and I see both trends continuing indefinitely.
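
A rough way to see why communication need not dominate is to compare the time a node spends computing on a local block of data with the time needed to exchange that block's boundary. The sketch below uses made-up but plausible numbers; all of them are assumptions, not figures from the interview.

```python
# Back-of-envelope compute vs. communication time for one step of a
# domain-decomposed 3D simulation. All numbers are illustrative assumptions.
n = 512                     # local subdomain of n x n x n cells per node
flops_per_cell = 100        # arithmetic per cell per time step (assumed)
node_flops = 50e9           # sustained node speed, 50 GFLOP/s (assumed)
bandwidth_bytes = 1e9       # network bandwidth, 1 GB/s (assumed)
bytes_per_cell = 8          # one double exchanged per boundary cell

compute_s = n**3 * flops_per_cell / node_flops
comm_s = 6 * n**2 * bytes_per_cell / bandwidth_bytes   # six faces of the block
print(f"compute ~{compute_s*1e3:.0f} ms, communicate ~{comm_s*1e3:.0f} ms")
# Surface-to-volume scaling: communication shrinks relative to computation
# as the local problem size grows.
```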

Question 12: What do you think the high-performance computing field will look like in 2019?

Answer: The real changes in the next decade will occur in the middle of the computing spectrum. At the high end, supercomputers operating at 100 petaflops will be common, and at the low end, mobile computing will be ubiquitous. But the most important changes in the scientific computing field are coming in the middle, enabled primarily by cloud computing. The combination of cloud computing and multicore computing will allow scientists to routinely perform experiments and carry out tasks that aren’t feasible now. In a few years, every biologist will have their own gene-sequencing device, and everyone will have their genome sequenced. That alone will require prodigious quantities of computing resources. Huge data centers containing tens or even hundreds of thousands of servers will proliferate, and they will become a dominant computing paradigm due to their cost-effectiveness.

FURTHER READING

The future of scientific computing will be developed under the leadership of Indiana University and nine national and international partners, as part of a $15 million project largely supported by a $10.1 million grant from the National Science Foundation (NSF). The award will be used to establish FutureGrid, one of only two experimental systems in the NSF Track 2 program, which funds the most powerful next-generation scientific supercomputers in the nation.

FutureGrid article at the TeraGrid site

Digital Science Center site

TeraGrid

TeraGrid is an open scientific discovery infrastructure combining leadership-class resources at eleven partner sites to create an integrated, persistent computational resource.

Using high-performance network connections, the TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities around the country. Currently, TeraGrid resources include more than a petaflop of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance networks. Researchers can also access more than 100 discipline-specific databases. With this combination of resources, the TeraGrid is the world’s largest, most comprehensive distributed cyberinfrastructure for open scientific research.

TeraGrid is coordinated through the Grid Infrastructure Group (GIG) at the University of Chicago, working in partnership with the Resource Provider sites: Indiana University, the Louisiana Optical Network Initiative, the National Center for Supercomputing Applications, the National Institute for Computational Sciences, Oak Ridge National Laboratory, Pittsburgh Supercomputing Center, Purdue University, the San Diego Supercomputer Center, the Texas Advanced Computing Center, University of Chicago/Argonne National Laboratory, and the National Center for Atmospheric Research.

Online Tutorials and Training on Parallel Programming and other Supercomputer Topics

TeraGrid user training resources and links

TeraGrid presentations
