Description
Data storage technology today faces many challenges, including performance inefficiencies, inadequate dependability and integrity guarantees, limited scalability, loss of confidentiality, poor resource sharing, and increased ownership and management costs. Given the importance of both direct-attached and networked storage systems for modern applications, it is imperative to address these issues. Multicore CPUs offer the promise of overcoming many of the underlying limitations of today’s I/O architectures. Realizing this promise, however, requires careful consideration of architectural and systems issues, as well as of the complex interactions in the I/O stack, all the way from the application to the disk.
In this proposal we target three major challenges:
- dealing with performance and scalability issues of the I/O stack on multicore architectures
- addressing I/O performance and dynamic resource management issues in virtualized, single-host environments
- examining onloading and offloading tradeoffs for advanced functions that are becoming essential in modern storage systems, e.g. compression, protection, encryption, and error correction; onloading uses free cores of multicore CPUs, whereas offloading uses specialized architectural features, such as heterogeneous units (a brief sketch of the onloading approach follows this list).
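To make the onloading side of this tradeoff concrete, the following is a minimal illustrative sketch, not the project's actual design: a worker thread is pinned to an otherwise idle core and applies a placeholder transform that stands in for compression or encryption, leaving the I/O-issuing thread free. It assumes Linux with GNU pthreads (pthread_attr_setaffinity_np); the structure names, the choice of core, and the placeholder transform are all hypothetical.

```c
/*
 * Minimal illustrative sketch of "onloading": a spare core of a multicore
 * CPU is dedicated to an advanced storage function (here a placeholder
 * transform standing in for compression or encryption), so that the thread
 * issuing I/O is not blocked by it. Linux/GNU-specific; all names are
 * hypothetical and the core number is an assumption.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096

struct io_block {
    unsigned char data[BLOCK_SIZE];
    size_t len;
};

/* Placeholder for a real compression/encryption/error-correction routine. */
static void *transform_block(void *arg)
{
    struct io_block *blk = arg;
    for (size_t i = 0; i < blk->len; i++)
        blk->data[i] ^= 0x5A;          /* stand-in for the real function */
    return NULL;
}

int main(void)
{
    struct io_block blk = { .len = BLOCK_SIZE };
    memset(blk.data, 'x', BLOCK_SIZE);

    /* Pin the worker to core 1, assumed here to be otherwise idle. */
    cpu_set_t spare;
    CPU_ZERO(&spare);
    CPU_SET(1, &spare);

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(spare), &spare);

    pthread_t worker;
    if (pthread_create(&worker, &attr, transform_block, &blk) != 0) {
        perror("pthread_create");
        return 1;
    }

    /* The issuing thread could meanwhile continue submitting I/O requests. */
    pthread_join(worker, NULL);
    pthread_attr_destroy(&attr);
    printf("block transformed on a dedicated core\n");
    return 0;
}
```

Offloading, by contrast, would hand the same transform to a specialized or heterogeneous unit rather than a general-purpose core; quantifying when each option wins is part of the tradeoff the proposal targets.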
This project aims to analyze and address these challenges throughout the I/O path. Our approach breaks the I/O stack down into four important layers:
- application and middleware
- virtual machine
- host operating system, and
- embedded storage controller
The proposed work analyzes and addresses the inefficiencies associated with these layers on multicore CPUs by designing an I/O stack that minimizes unnecessary overheads and scales with the number of cores. Since storage systems are perhaps the most critical component of modern computing infrastructures, the proposed work will benefit many I/O-intensive applications that support the activities of businesses, organizations, and individuals alike.