Projects
REBECCA, a heavily SME-driven project, will democratize the development of novel edge AI systems. Towards this aim, REBECCA will develop a complete, purely European Hardware (HW) and Software (SW) stack around a RISC-V CPU, which will provide significantly higher levels of a) performance (e.g., inferences per second), b) energy/power efficiency (e.g., inferences per...
Our team explores programming approaches for homogeneous and heterogeneous architectures, including the use of accelerators and distributed clusters. We propose the OmpSs programming model, a task-based approach that seeks to improve the programmability of applications in such environments without sacrificing performance. We actively participate in the OpenMP standard...
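To make the task-based approach concrete, here is a minimal sketch in C. OmpSs pioneered task dependences of this kind, and the example uses the OpenMP-style task and depend annotations that this line of work helped shape; the arrays and helper functions are hypothetical, chosen only to illustrate how declared inputs and outputs let the runtime extract parallelism.

    #include <stdio.h>

    #define N 1024

    static double a[N], b[N], c[N];

    static void init(double *x, double v) {
        for (int i = 0; i < N; i++) x[i] = v;
    }

    static void add(const double *x, const double *y, double *z) {
        for (int i = 0; i < N; i++) z[i] = x[i] + y[i];
    }

    int main(void) {
        #pragma omp parallel
        #pragma omp single
        {
            /* The two init tasks have no dependence on each other and may
               run concurrently; add starts only after both inputs exist. */
            #pragma omp task depend(out: a)
            init(a, 1.0);

            #pragma omp task depend(out: b)
            init(b, 2.0);

            #pragma omp task depend(in: a, b) depend(out: c)
            add(a, b, c);

            #pragma omp taskwait
        }
        printf("c[0] = %f\n", c[0]); /* expected: 3.000000 */
        return 0;
    }

The programmer only declares what each task reads and writes; the runtime builds the dependence graph and schedules the tasks, which is what keeps the annotated source close to its sequential form.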
Mixed-Criticality Cyber-Physical Systems (MCCPS) deployed in critical domains like automotive and railway are starting to use Over-The-Air Software Updates (OTASU) for functionality improvement, bug fixing, and solving security vulnerabilities (among others). However, OTASU entails several difficulties:
1) Safety, including non-functional properties like...
MASTECS will bring innovative and exploitable technology for multicore processor timing analysis (MTA) to the market. It will be used by critical embedded software industries (focusing on automotive and avionics) to support advanced software functions (such as autonomous driving), which are competitive factors in every new product.
MASTECS will enable...
Existing HW/SW platforms for safety-critical systems suffer from limited performance and/or a lack of flexibility because they build on specific proprietary components, which jeopardizes their wide deployment across domains. While some research attempts have been made to overcome some of these limitations, their degree of success has been low due to missing flexibility and...
In the next decade, EU industries developing Critical Real-Time Embedded Systems (CRTES) (safety-, mission- or business-critical) will face a once-in-a-lifetime disruptive challenge caused by the transition to multicore processors and the advent of manycores, which are tantamount to complex networked systems. This challenge brings the opportunity to integrate multiple applications onto...
The most common interpretation of Moore's Law is that the number of components on a chip, and accordingly computer performance, doubles every two years. This empirical law has held from its first statement in 1965 until today. At the end of the 20th century, when clock frequencies stagnated at ~3 GHz and instruction-level parallelism reached the phase of...
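Read as a formula (an illustrative gloss, not a claim from the project text): doubling every two years means the component count after t years grows as N(t) = N0 * 2^(t/2), so, for example, two decades of scaling multiply it by 2^(20/2) = 2^10, roughly a thousandfold.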
With top systems reaching the PFlop barrier, the next challenge is to understand how applications must be implemented to be ready for the ExaFlop target. Multicore chips are already here and will grow over the next decade to several hundred cores. Hundreds of thousands of nodes based on them will constitute the future exascale systems.
TEXT...
Design complexity and power-density constraints stopped the trend towards faster single-core processors. The current trend is to double the core count every 18 months, leading to chips with 100+ cores in 10-15 years. Developing parallel applications that harness such multicores is the key challenge for scalable computing systems.
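As a back-of-the-envelope check of that projection (the arithmetic is ours, not the project's): doubling every 18 months means cores(t) = cores0 * 2^(t/1.5), so ten years give a factor of 2^(10/1.5) ≈ 100 and fifteen years give 2^10 = 1024; starting from a handful of cores, that lands squarely in the 100+ core range.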
The ENCORE project...
Data storage technology today faces many challenges, including performance inefficiencies, inadequate dependability and integrity guarantees, limited scalability, loss of confidentiality, poor resource sharing, and increased ownership and management costs. Given the importance of both direct-attached and networked storage systems for modern applications, it becomes imperative...