Projects


Large-scale simulations are the third pillar of science today, alongside theory and experiment. They produce scientific insights and technological advances, and solve problems in many fields of society. Their tools are high-end computers and effective software. PRACE, the Partnership for Advanced Computing in Europe, has been created as a not-for-profit...

Several research communities in Europe exploit e-Infrastructures, sharing data and computing resources with Grid and Supercomputing technology. However, the inherent complexity of these technologies has limited their wider adoption and their long-term sustainability: designing, developing and operating a computing infrastructure for an e-Science community remains challenging...

With top systems reaching the PFlop barrier, the next challenge is to understand how applications must be implemented and prepared for the ExaFlop target. Multicore chips are already here and will grow to several hundreds of cores over the next decade. Hundreds of thousands of nodes built from them will constitute the future exascale systems.
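A quick back-of-the-envelope check shows how those figures reach the ExaFlop mark; the per-core throughput below is an assumed illustration, not a number from the text:

```python
# Rough scale check: cores per node and node count as sketched above,
# with an assumed sustained per-core throughput of 10 GFlop/s.
cores_per_node = 500      # "several hundreds of cores"
nodes = 200_000           # "hundreds of thousands of nodes"
gflops_per_core = 10      # assumed, for illustration only
total_flops = cores_per_node * nodes * gflops_per_core * 1e9
print(f"{total_flops / 1e18:.0f} EFlop/s")  # -> 1 EFlop/s
```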


Interoperability, which brings major benefits for enterprise and science, is key to the pervasive adoption of grids and clouds. Interoperability between existing grids and clouds was of primary importance for the European Union at the time of this project. Many of the policy issues involved in achieving interoperability with DCIs across the world are already being explored...

Taking the challenges of service and infrastructure providers as its point of departure, OPTIMIS focused on open, scalable and dependable service platforms and architectures that allowed flexible and dynamic provisioning of advanced services. The OPTIMIS innovations can be summarized as a combination of technologies to create a dependable ecosystem of providers and consumers that...

The use of High Performance Computing (HPC) is commonly recognized as a key strategic element in improving our understanding of complex phenomena, in both research and industry. The constant growth of generated data (Big Data) and the computing capabilities of extreme-scale systems are leading to a new generation of computers composed of millions of heterogeneous cores, which will...

Let’s imagine that we have developed a highly potent drug for a cancer therapy. Over time, however, many patients start developing resistance to the drug because of a particular point mutation that arises from the high metabolism of the cancer cells. Or let’s imagine that a specific treatment for a virus has lost its efficacy due to some viral...

Design complexity and power-density constraints halted the trend towards faster single-core processors. The current trend is to double the core count every 18 months, leading to chips with 100+ cores in 10-15 years (see the sketch below). Developing parallel applications that harness such multicores is the key challenge for scalable computing systems.

The ENCORE project...
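To make the core-count projection above concrete, here is a minimal sketch assuming a hypothetical dual-core baseline; doubling every 18 months does land in the 100+ core range within the stated 10-15 year horizon:

```python
# Core-count projection: doubling every 18 months, from an assumed
# 2-core baseline (the baseline is an illustration, not from the text).
BASE_CORES = 2
DOUBLING_MONTHS = 18

def projected_cores(years: float) -> int:
    """Cores per chip after `years`, doubling every DOUBLING_MONTHS."""
    doublings = years * 12 / DOUBLING_MONTHS
    return int(BASE_CORES * 2 ** doublings)

for horizon in (10, 15):
    print(f"{horizon} years: ~{projected_cores(horizon)} cores")
# 10 years: ~203 cores
# 15 years: ~2048 cores
```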

There is an ever-increasing demand both for new functionality and for reduced development and production costs in all kinds of Critical Real-Time Embedded (CRTE) systems (safety-, mission- or business-critical). Moreover, these demands for new functionality can only be met by more complex software and aggressive hardware acceleration features such as memory hierarchies and multi-core...

Dataflow parallelism is key to achieving power efficiency, reliability, efficient parallel programmability, scalability and high data bandwidth. In this project we proposed dataflow both at the task level and inside the threads: to offload and manage accelerated code, to localize computation, to manage fault information with appropriate protocols, and to easily migrate code to the...
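As an illustration of the task-level dataflow model described above, here is a minimal sketch (the task graph and scheduler are illustrative, not the project's actual runtime): each task declares its inputs and fires only once all of its input tokens have been produced.

```python
# Minimal task-level dataflow scheduler: a task becomes ready (fires)
# as soon as all of its input tokens exist, with no fixed program order.
from collections import deque

# task name -> (function, names of the tasks whose outputs it consumes)
graph = {
    "a":   (lambda: [1, 2, 3], []),
    "b":   (lambda: [4, 5, 6], []),
    "sum": (lambda a, b: [x + y for x, y in zip(a, b)], ["a", "b"]),
    "out": (lambda s: print(sum(s)), ["sum"]),
}

tokens = {}                                   # produced values, keyed by task
ready = deque(t for t, (_, deps) in graph.items() if not deps)
pending = {t: set(deps) for t, (_, deps) in graph.items() if deps}

while ready:
    name = ready.popleft()
    fn, deps = graph[name]
    tokens[name] = fn(*(tokens[d] for d in deps))  # fire: consume inputs
    for t, missing in pending.items():             # unblock dependents
        missing.discard(name)
        if not missing and t not in tokens and t not in ready:
            ready.append(t)
# prints 21
```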
