Projects


The consortium of the European Training Network (ETN) "BigStorage: Storage-based Convergence between HPC and Cloud to handle Big Data" will train future data scientists, enabling them to apply holistic and interdisciplinary approaches to take advantage of a data-overwhelmed world, which requires HPC and Cloud infrastructures together with a redefinition of storage...

The SECURED architecture created a trusted and virtualized execution environment that allowed different actors (e.g. single users, corporate ICT managers, network providers) to install on demand and execute multiple security applications on the network edge device to protect the traffic of a specific user. This approach reduced the load on mobile devices, guaranteeing...

Recent technological advancements and market trends are driving an interesting convergence of the High-Performance Computing (HPC) and Embedded Computing (EC) domains. On one side, new kinds of HPC applications are being required by markets that need huge amounts of information processed within a bounded amount of time. On the other side, EC systems...

The Mont-Blanc project aims to develop a European Exascale approach leveraging commodity power-efficient embedded technologies. The project has developed an HPC system software stack on ARM and will deploy the first integrated ARM-based HPC prototype by 2014; it is also porting and tuning a set of 11 scientific applications to the prototype system.

The project DEEP-ER (DEEP-Extended Reach) addresses two significant Exascale challenges: the growing gap between I/O bandwidth and compute speed, and the need to significantly improve system resiliency.

DEEP-ER will extend the Cluster-Booster architecture of the Dynamical Exascale Entry...

Data centres form the central brains and store of the Information Society and are a key resource for innovation and leadership. The key challenge has recently moved from simply delivering the required performance to also consuming less energy and lowering the cost of ownership. Together, these create an inflection point that presents a big opportunity for Europe, which...

The most common interpretation of Moore's Law is that the number of components on a chip, and accordingly computer performance, doubles every two years. This empirical law has held from its first statement in 1965 until today. At the end of the 20th century, when clock frequencies stagnated at ~3 GHz and instruction-level parallelism reached the phase of...
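As an illustration of the doubling rule (our formulation, not text from the project description), let N_0 be the component count in a reference year and t the number of years elapsed; then

    N(t) = N_0 \cdot 2^{t/2},

so a decade of scaling multiplies the component count by 2^{10/2} = 32.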

IS-ENES2 is the second-phase project of the distributed e-infrastructure of models, model data and metadata of the European Network for Earth System Modelling (ENES). This network brings together the European modelling community working on understanding and predicting climate variability and change. ENES organizes and supports European contributions to international...

AXLE focused on the automatic scaling of complex analytics while addressing the full requirements of real data sets. Real data sources have many difficult characteristics. Sources often start small and can grow extremely large as businesses and initiatives succeed, so the ability to grow seamlessly and automatically is at least as important as managing large data volumes once you know...

The increasing power and energy consumption of modern computing devices is perhaps the largest threat to technology miniaturization and the associated gains in performance and productivity. On the one hand, we expect technology scaling to face the problem of “dark silicon” (only segments of a chip can function concurrently due to power restrictions) in the near future...
