Projects


AXLE focused on automatic scaling of complex analytics while addressing the full requirements of real data sets. Real data sources have many difficult characteristics: they often start small and can grow extremely large as business initiatives succeed, so the ability to grow seamlessly and automatically is at least as important as managing large data volumes once you know...

The increasing power and energy consumption of modern computing devices is perhaps the largest threat to technology miniaturization and the associated gains in performance and productivity. On the one hand, we expect technology scaling to face the problem of “dark silicon” (only segments of a chip can function concurrently due to power restrictions) in the near future...

The use of High Performance Computing (HPC) is commonly recognized as a key strategic element, in both research and industry, for improving the understanding of complex phenomena. The constant growth of generated data - Big Data - and the computing capabilities of extreme systems are leading to a new generation of computers composed of millions of heterogeneous cores which will provide...

New safety standards, such as ISO 26262, present a challenge for companies producing safety-relevant embedded systems. Safety verification today is often ad hoc and manual, and it is done differently for digital and analogue components, and for hardware and software.

The VeTeSS project developed standardized tools and methods for verification of the robustness of...

The grand challenge of Exascale computing, a critical pillar for global scientific progress, requires co-designed architectures, system software and applications. Massive worldwide collaboration of leading centres, already underway, is crucial to achieve pragmatic, effective solutions. Existing funding programs do not support this complex multidisciplinary effort. Severo...

DEEP developed a novel, Exascale-enabling supercomputing platform along with the optimisation of a set of grand-challenge codes simulating applications highly relevant for Europe's science, industry and society.

The DEEP System realised a Cluster-Booster Architecture that can cope with the limitations imposed by Amdahl's Law. It served as...

There is a continued need for higher compute performance in scientific grand challenges, engineering, geophysics, bioinformatics, and other fields. However, energy is increasingly becoming one of the most expensive resources and the dominant cost item for running a large supercomputing facility. In fact, the total energy cost of a few years of operation can almost equal the cost of the...

The main goal of EUBrazilOpenBio was to deploy an e-Infrastructure of open-access resources (data, tools, services) to make significant strides towards supporting the needs and requirements of the biodiversity scientific community. This data e-Infrastructure resulted from the federation and integration of substantial existing individual data, cloud, and grid EU and Brazilian...

Life Science developed into one of the largest e-Infrastructure users in Europe, in part due to the ever-growing amount of biological data. At the time of the project, modern drug design typically included sequence bioinformatics, in silico virtual screening, and free energy calculations, e.g. of drug binding. This development has accelerated tremendously and has put...

The use of High Performance Computing (HPC) is commonly recognized as a key strategic element in improving understanding of complex phenomena, in both research and industry. The constant growth of generated data - Big Data - and the computing capabilities of extreme systems are leading to a new generation of computers composed of millions of heterogeneous cores which will...
