Projects


The project involved the development of mathematical models and their implementation as software code for high performance computing clusters. The physical problem studied involves two related topics: particle deposition and solute absorption in respiratory airways, and tumour metastasis in arterioles and capillaries. The aim was to couple micro-scale phenomena to large 3D...

The most common interpretation of Moore's Law is that the number of components on a chip, and accordingly computer performance, doubles every two years. This empirical law has held from its first statement in 1965 until today. At the end of the 20th century, when clock frequencies stagnated at ~3 GHz and instruction-level parallelism reached the phase of...
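The doubling rule above amounts to simple exponential growth. A minimal sketch, where the base year, base component count, and doubling period are illustrative assumptions rather than historical figures:

```python
def moore_components(year, base_year=1965, base_count=64, period=2.0):
    """Component count under a naive Moore's Law model: doubling every
    `period` years from an assumed `base_count` in `base_year`.
    The base figures are hypothetical, for illustration only."""
    return base_count * 2 ** ((year - base_year) / period)

# Ten doublings over twenty years give a 1024x increase.
print(moore_components(1985) / moore_components(1965))  # 1024.0
```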

IS-ENES2 is the second-phase project of the distributed e-infrastructure of models, model data and metadata of the European Network for Earth System Modelling (ENES). This network brings together the European modelling community working on understanding and predicting climate variability and change. ENES organizes and supports European contributions to international...

AXLE focused on automatic scaling of complex analytics while addressing the full requirements of real data sets. Real data sources have many difficult characteristics. Sources often start small and can grow extremely large as businesses or initiatives succeed, so the ability to grow seamlessly and automatically is at least as important as managing large data volumes once you know...

The increasing power and energy consumption of modern computing devices is perhaps the largest threat to technology miniaturization and its associated gains in performance and productivity. On the one hand, we expect technology scaling to face the problem of "dark silicon" (only segments of a chip can function concurrently due to power restrictions) in the near future...

The use of High Performance Computing (HPC) is commonly recognized as a key strategic element, both in research and in industry, for improving the understanding of complex phenomena. The constant growth of generated data (Big Data) and of the computing capabilities of extreme-scale systems is leading to a new generation of computers composed of millions of heterogeneous cores which will provide...

New safety standards, such as ISO 26262, present a challenge for companies producing safety-relevant embedded systems. Safety verification today is often ad hoc and manual, and is done differently for digital and analogue components, and for hardware and software.

The VeTeSS project developed standardized tools and methods for verification of the robustness of...

The project COPA-GT was structured to provide training of a multi-disciplinary and intersectoral nature for young Fellows in Europe in the field of propulsion and electric power generation systems.

The young researchers obtained expertise in gas turbine engine (GT) design, based on fluid and structural mechanics, combustion, acoustics and heat...

The grand challenge of Exascale computing, a critical pillar for global scientific progress, requires co-designed architectures, system software and applications. Massive worldwide collaboration of leading centres, already underway, is crucial to achieve pragmatic, effective solutions. Existing funding programs do not support this complex multidisciplinary effort. Severo...

DEEP developed a novel, Exascale-enabling supercomputing platform along with the optimisation of a set of grand-challenge codes simulating applications highly relevant for Europe's science, industry and society.

The DEEP System realised a Cluster-Booster Architecture that can cope with the limitations imposed by Amdahl's Law. It served as...
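Amdahl's Law, mentioned above, bounds the speedup of a partially parallel code by its serial fraction. A minimal sketch of the formula (the 0.95 parallel fraction is an arbitrary example, not a figure from the project):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Maximum speedup predicted by Amdahl's Law for a workload whose
    parallelisable fraction is `parallel_fraction` run on `n_cores`."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even with 95% parallel code, speedup saturates below 1/0.05 = 20x
# no matter how many cores are added.
for cores in (16, 256, 4096):
    print(cores, round(amdahl_speedup(0.95, cores), 2))
```

This saturation is precisely why architectures such as DEEP's Cluster-Booster split try to map the less scalable parts of a code onto fast general-purpose nodes and the highly parallel parts onto many-core boosters.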
