Whether analysing complex molecules, searching for new medically active substances, calculating the global climate or modelling astronomical events, computer simulations are becoming an indispensable tool in a growing number of scientific fields. New, more powerful supercomputers enable more realistic and more detailed simulations of complex global processes, but at the same time it is becoming increasingly difficult for researchers to monitor program execution and to identify sources of error or performance bottlenecks. Today's fastest supercomputers have tens or hundreds of thousands of processors working in parallel, which should, as far as possible, be utilized uniformly throughout a simulation. To help users optimize performance more easily, Russian and European experts have launched a new project that will, for the first time, consider all aspects of a performance analysis, ranging from the running application down to the hardware actually used.