As machine learning applications such as real-time language-translation chatbots become increasingly data-heavy, there is a pressing need for specialized hardware that can handle the computational load swiftly. Designing these components, known as deep neural network accelerators, is complex, especially when cryptographic protections against data breaches must be built in.
To address this challenge, researchers at MIT have built a search tool, SecureLoop, that pinpoints accelerator designs offering both strong performance and robust data protection.
How SecureLoop Changes the Game
Deep neural network accelerators speed up computations by parallelizing operations across the network’s layers. But because on-chip memory is limited, most of a model’s data must be stored in off-chip memory, where it is susceptible to external threats. SecureLoop mitigates these threats by incorporating encryption and data-authentication techniques into the design search.
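To make the idea of data authentication concrete, here is a minimal sketch of protecting off-chip tiles with a message-authentication code. The function names (`write_tile`, `read_tile`) and the use of HMAC-SHA256 are illustrative assumptions, not the scheme the accelerator actually uses; real designs typically rely on dedicated hardware crypto engines.

```python
import hashlib
import hmac
import os

KEY = os.urandom(32)  # secret key that stays on-chip (illustrative)

def write_tile(tile: bytes) -> tuple[bytes, bytes]:
    """Store a data tile off-chip together with an authentication tag."""
    tag = hmac.new(KEY, tile, hashlib.sha256).digest()
    return tile, tag

def read_tile(tile: bytes, tag: bytes) -> bytes:
    """Verify the tag before trusting data fetched from off-chip memory."""
    expected = hmac.new(KEY, tile, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("off-chip data failed authentication")
    return tile

stored, tag = write_tile(b"weights-tile-0")
assert read_tile(stored, tag) == b"weights-tile-0"
```

The key point is that every fetch from off-chip memory now carries extra work: a tag must be recomputed and checked, which is exactly the overhead SecureLoop must account for.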
Kyungmi Lee, the study’s lead author, highlighted a common misconception about introducing cryptographic operations: contrary to expectation, they can profoundly reshape the design space of energy-efficient accelerators.
Making Encryption More Efficient
An accelerator moves data on- and off-chip in tiles, while authentication operates on fixed-size blocks, and the two typically don’t align. When tile and block boundaries mismatch, the accelerator must fetch entire authentication blocks even if it needs only part of them, increasing energy usage and computational lag. The cryptographic operations themselves compound these costs.
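The misalignment cost can be seen with a little arithmetic. The sketch below (my own illustration, not SecureLoop’s model) counts how many bytes must actually be fetched when a tile overlaps fixed-size authentication blocks.

```python
import math

def bytes_fetched(tile_start: int, tile_size: int, block_size: int) -> int:
    """Authentication works on whole blocks, so a fetch must cover
    every block the tile overlaps, even partially."""
    first = (tile_start // block_size) * block_size
    last = math.ceil((tile_start + tile_size) / block_size) * block_size
    return last - first

# A 1000-byte tile starting at offset 100, with 512-byte blocks,
# overlaps three blocks: 1536 bytes fetched, 536 of them redundant.
print(bytes_fetched(100, 1000, 512))  # → 1536
```

When tile boundaries happen to coincide with block boundaries, the redundant traffic disappears, which is why the choice of authentication block size matters so much.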
SecureLoop’s strength lies in how it searches. The team extended an existing design-space exploration tool, Timeloop, with a model that accounts for encryption and authentication requirements. They then mathematically reformulated the problem, allowing SecureLoop to determine the best authentication block size without tediously sifting through every possibility.
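The trade-off being optimized can be sketched with a toy cost model: larger blocks mean fewer per-block cryptographic operations but potentially more redundant fetch traffic. Everything here is a hypothetical illustration — the cost function, the per-block constant, and the brute-force enumeration — whereas SecureLoop’s reformulation finds the answer analytically rather than by trying every candidate.

```python
import math

def overhead(tile_size: int, block_size: int, per_block_cost: int = 64) -> int:
    """Toy cost: redundant bytes fetched plus a fixed crypto cost per block."""
    blocks = math.ceil(tile_size / block_size)
    fetched = blocks * block_size
    return (fetched - tile_size) + blocks * per_block_cost

def best_block_size(tile_size: int, candidates: list[int]) -> int:
    """Pick the candidate block size with the lowest modeled overhead."""
    return min(candidates, key=lambda b: overhead(tile_size, b))

# For a 4096-byte tile, larger aligned blocks amortize the per-block cost:
print(best_block_size(4096, [64, 128, 256, 512, 1024]))  # → 1024
```

Even this toy version shows why the choice is non-trivial: the best block size depends jointly on the tile schedule and the crypto cost, which is exactly the coupling SecureLoop’s search exploits.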
Lee stressed the importance of this refined approach: “By smartly assigning the cryptographic block, only a minimal amount of surplus data is fetched.”
Impressive Results & Future Outlook
In simulations, SecureLoop’s designs were 33.2% faster and improved the energy-delay product by 50.2% compared with approaches that overlooked security during the search. The team also found that reserving more chip area for the cryptographic engine, at the cost of on-chip memory, can further improve performance.
Going forward, the team plans to make accelerators resilient to side-channel attacks, in which attackers extract secrets by observing physical characteristics of the hardware, such as power draw or timing. They are also extending SecureLoop’s applicability to a wider range of computations.
This innovative work, supported by Samsung Electronics and the Korea Foundation for Advanced Studies, will be showcased at the upcoming IEEE/ACM International Symposium on Microarchitecture, promising a new era of secure and efficient AI hardware design.