Homomorphically Encrypted Deep Learning

We're working on making privacy-preserving deep learning a reality through homomorphic encryption

Summary

Deep learning (DL) is widely used to solve classification problems that were previously intractable, such as face recognition, and presents clear use cases with privacy requirements. Homomorphic encryption (HE) enables computation directly on encrypted data, at the expense of a vast increase in data size. Current RAM capacities therefore limit the application of HE to DL to severely restricted use cases. Recently emerged heterogeneous memory systems offer larger-than-ever RAM spaces, but their performance is highly dependent on data access patterns.

This project aims to spark a new class of system architectures for encrypted DL workloads by eliminating or dramatically reducing data movement across the memory/storage hierarchy and the network, supported by heterogeneous memory technology while overcoming its current severe performance limitations. HomE intends to be a first-time enabler for the encrypted execution of large models that do not fit in the DRAM footprints local to accelerators, for hundreds of DL models running simultaneously, and for large datasets processed at high resolution and accuracy.

Targeting these ground-breaking goals, HomE enters an unexplored field arising from the innovative convergence of several disciplines, where wide-ranging research is required to assess current and future feasibility. Its main challenge is to develop a methodology capable of breaking through existing software and hardware limitations. HomE proposes a holistic approach yielding highly impactful outcomes, including a novel comprehensive performance characterisation, innovative optimisations of current technology, and pioneering hardware proposals. HomE can spawn a paradigm shift that will revolutionise the convergence of the machine learning and cryptography disciplines, filling a gap in knowledge and opening new horizons such as DL training under HE, which is currently too demanding even for DRAM. Based on solid evidence, HomE will answer the open question of whether large memory pools built from heterogeneous memory systems are a practical enabler for encrypted DL workloads.
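To make the two central claims above concrete (computation on ciphertexts, and the resulting growth in data size), the following is a minimal sketch of an HE "linear layer" using the CKKS scheme via the open-source TenSEAL library. The parameters, toy weights, and printed sizes are illustrative assumptions for this example; they are not figures or code from the HomE project.

```python
# Minimal sketch: encrypted dot product (one linear neuron) with CKKS/TenSEAL.
# Assumes `pip install tenseal`; values below are illustrative only.
import sys
import tenseal as ts

# CKKS context: the polynomial degree and modulus chain control precision,
# multiplicative depth, and (crucially) ciphertext size.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # rotation keys, needed for dot products

# Toy input and the weights/bias of a single linear neuron.
x = [0.25, -1.5, 3.0, 0.5]
weights = [0.1, 0.2, -0.3, 0.4]
bias = 0.05

# Encrypt the input; a server can now compute on it without seeing it.
enc_x = ts.ckks_vector(context, x)

# Encrypted dot product with plaintext weights, plus a plaintext bias:
# the core operation of an HE linear layer.
enc_y = enc_x.dot(weights) + bias

# Only the secret-key holder can decrypt the (approximate) result, ~ -0.925.
print("decrypted result:", enc_y.decrypt())

# Ciphertext expansion: the encrypted vector is orders of magnitude larger
# than the plaintext list it encodes.
print("plaintext bytes: ", sys.getsizeof(x))
print("ciphertext bytes:", len(enc_x.serialize()))
```

Scaling this single neuron to full DNN inference multiplies the memory footprint across every activation and weight tensor, which is why ciphertext expansion, rather than arithmetic cost alone, is what pushes encrypted DL workloads beyond typical DRAM capacities.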

Objectives

  • To enable, for the first time, practical homomorphically encrypted DNN inference in production environments