Advances in computer technology have revolutionised the transfer of information, making national borders irrelevant to the flow of critical new knowledge. As the pace of new knowledge and discoveries picks up, the speed at which knowledge can be accessed becomes a decisive factor in the commercial success of technologies. Computing has become a symbol of our creativity and productivity, as well as an expressive barometer of the competitive position of organizations and countries in the knowledge economy. Supercomputers, in particular, are extremely important to design and manufacturing processes in industries as diverse as oil exploration, aeronautics and aerospace, energy, transport, automobiles, pharmaceuticals and electronics, to name a few.
(With acknowledgement to Niels Drost, whose 2010 work Real-World Distributed Supercomputing, published by NOW, is quoted below): “Ever since the invention of the computer, users have desired higher and higher performance. For an average user the solution was simply a matter of patience: each newer model computer has been faster than the previous generation for as long as computers have existed. However, for some users this was not enough, as they required more compute power than any normal machine could offer. Examples of high performance computing users are meteorologists performing weather predictions using complex climate models, astronomers running simulations of galaxies, and medical researchers analyzing DNA sequences.
To explain some of the major challenges encountered by high performance computing users, we use an analogy: making coffee. What if I was responsible for making coffee for a group of people, for instance a group of scientists on break during a conference? If the group is small enough, I could use my own coffee maker, analogous to using my own computer to do a computation. However, this will not work if the group is too large, as it would take too long, leading to a large queue of thirsty scientists. I could also brew the coffee in advance, but that would lead to stale and probably cold coffee.
The obvious solution to my problem is to get a bigger, faster, coffee maker. I could go out and buy an industrial-size coffee maker, like the one in a cafeteria, or even a coffee vending machine. Unfortunately, these are very expensive. In computing, large, fast, expensive computers are called supercomputers. Fortunately, several alternatives exist that will save money. Instead of a single big coffee maker, I could use a number of smaller machines (a cluster in computing terms). I could also rent a coffee maker (cloud computing), or even borrow one (grid computing). In reality, I would probably use a combination of these alternatives, for instance by using my own coffee maker, borrowing a few, and renting a machine. Although combining machines from different sources is the cheapest solution, it may cause problems. For one, different coffee machines need different types of coffee, such as beans, ground coffee, pads, or capsules. Moreover, these different machines all need to be operated in different ways, produce coffee at different speeds, and may even produce a different result (for instance, espresso). In the end, I may be spending a considerable amount of time and effort orchestrating all these coffee makers.
Recently, cloud computing has emerged as a high-performance compute platform, offering applications a homogeneous environment by using virtualization mechanisms to hide most differences in the underlying hardware. Unfortunately, not all resources available to a user offer cloud services. Also, combining resources of multiple cloud systems is far from trivial. To use all resources available to a user, software is needed which easily combines as many resources as possible into one coherent computing platform.”
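As a purely illustrative sketch of what such unifying software might look like (my own toy example, not taken from the quoted work), the small Python program below hides whether a task runs on a local machine, a rented cloud node or a borrowed grid resource behind a single scheduler; the Resource and Scheduler classes and every name in them are hypothetical.

# Toy abstraction in the spirit of "one coherent computing platform":
# local, cloud and grid resources are treated uniformly by a scheduler.
# All class and method names are hypothetical, not from any real library.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Resource:
    name: str   # e.g. "laptop", "cloud-node-1", "grid-site-A"
    kind: str   # "local", "cloud" or "grid"
    cores: int  # number of tasks it can run at once

class Scheduler:
    """Farms independent tasks out to whichever resources are available."""
    def __init__(self, resources: List[Resource]):
        # One scheduling "slot" per core, across all resources.
        self.slots = [r for r in resources for _ in range(r.cores)]

    def run(self, tasks: List[Callable[[], float]]) -> List[float]:
        results = []
        for i, task in enumerate(tasks):
            slot = self.slots[i % len(self.slots)]  # simple round-robin
            # A real system would ship the task to the remote resource;
            # here we just execute it locally and report where it "ran".
            print(f"task {i} -> {slot.name} ({slot.kind})")
            results.append(task())
        return results

if __name__ == "__main__":
    platform = Scheduler([
        Resource("laptop", "local", 2),
        Resource("cloud-node-1", "cloud", 4),
        Resource("grid-site-A", "grid", 8),
    ])
    print(platform.run([lambda i=i: float(i * i) for i in range(6)]))

The point of the sketch is only the shape of the interface: the user submits tasks to one object and never has to care which of the underlying "coffee makers" actually brews them.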
Now my story begins: what does India wish to do about supercomputing in the coming few years? Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers and crystals) and physical simulations (such as simulations of airplanes in wind tunnels or of the detonation of nuclear weapons). Apart from these areas, other notable frontiers of research where supercomputing is being used are: computational fluid dynamics, for the optimization of turbines and wings, noise reduction and air conditioning in trains; fusion research, for the plasma in a future fusion reactor (ITER); astrophysics, for studying the origin and evolution of stars and galaxies; solid state physics, for superconductivity, surface properties and semiconductors; geophysics, for earthquake scenarios; chemistry, for catalytic reactions; medicine and medical engineering, to simulate and control blood flow, aneurysms and the air conditioning of operating theatres; biophysics, for research on the properties of viruses and genome analysis; and climate research, for modelling currents in the oceans, among many other applications.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time; often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.
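To make the distinction concrete, here is a minimal Python sketch (my own illustration, not from Graham et al.): the same pool of four workers is first used capacity-style, pushing through many small independent jobs, and then capability-style, with every worker cooperating on one large problem whose time-to-solution is what matters.

# Contrast between capacity- and capability-style use of the same worker pool
# (a toy stand-in for the nodes of a real machine), using a Monte Carlo pi estimate.
from multiprocessing import Pool
import random

def small_job(seed: int) -> float:
    """Capacity-style task: a modest, independent estimate of pi."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(10_000))
    return 4.0 * hits / 10_000

def capability_chunk(args) -> int:
    """One slice of a single large problem, solved cooperatively by all workers."""
    seed, samples = args
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(samples))

if __name__ == "__main__":
    with Pool(4) as pool:
        # Capacity computing: many unrelated small problems; throughput matters.
        estimates = pool.map(small_job, range(100))
        print("capacity: mean of 100 small estimates =", sum(estimates) / len(estimates))

        # Capability computing: the whole pool attacks one big problem;
        # time-to-solution for that single problem is what matters.
        total = 4_000_000
        hits = sum(pool.map(capability_chunk, [(s, total // 4) for s in range(4)]))
        print("capability: one large estimate =", 4.0 * hits / total)

On a real capability system the single large job would also be tightly coupled, with the workers communicating during the run, which is what distinguishes it from simply splitting up many independent tasks.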
In 2005 India held 4th position in the world in terms of supercomputing capacity; by 2010 the country had slid to 24th position. We always take pride in the fact that when supercomputing technology was denied to India in the 90s, our brilliant engineers at CDAC made us proud by creating PARAM. Even in 2007 we had the third largest supercomputer, but since then India’s performance in this strategic sector has declined to the point where it is time to ask: are we languishing far behind what we envisioned? We need to improve on the fronts of capacity (human and computer/networking resources), capability (manpower skills) and identifying the challenging areas where we wish to focus our attention for problem solving, whether in R&D or in other applications. So, as things stand today, among the top-performing supercomputers in the world, the USA has 274 on the list, China has 42 and, imagine, India has just four.
When it comes to building and using supercomputing facilities in India there are many serious stakeholders: DRDO, DAE, Dept. of Space, DST, DBT, the financial sector, e-governance initiatives, international collaborative projects like ITER, telecom and others. With the increasing realization that it is technologically and financially viable to use the network as a computer, rather than just building networks of computers, distributed computing has been envisioned for massive applications. Moreover, the Indian strategy is focused on supercomputers oriented more towards memory-centric operations than towards processing and analytical operations.
The transformational journey of this supercomputing strategy is based on the fourth paradigm in supercomputing: going from computer-centric to data-centric. Initiatives like the National Knowledge Network will only add to our experience of creating, hosting, sharing, transmitting, broadcasting and conserving huge amounts of data at the back end. Here, the integrated capacity of a large number of computers in a network can be harnessed to yield an enormous amount of output, which grows exponentially over a short period of time. Such exponentially increasing speeds and scales of high performance computing are very difficult to handle. Sharing has therefore emerged as the new champion for performing very critical and complex tasks; it is sharing that makes the seamless integration of data and high-precision experiments possible.
It is a desirable and positive step that the country is thinking seriously about investing in the frontiers of supercomputing and thus consolidating initiatives in emerging areas of research and development. The real challenge lies in developing the absorptive capacity for the large amount of funds being released from the centre, and in complementing them by attracting the right mix of talented and motivated people and by having good infrastructure in place in time. Until now, the strong belief of the scientific community was that the largest number of practical relevance is Avogadro's number, which is around 10 to the power of 23. But with the help of large supercomputing facilities it is now possible to carry out calculations involving numbers beyond the scale of Avogadro's number. In a way, both from the perspective of the growth of fundamental science and for addressing the issues of an inclusive society, we need supercomputing.