Entropy Based Obfuscation for Defending Attention Cache in Shared LLMs

Kansal, Saurabh and Kejriwal, Deepak (2025) Entropy Based Obfuscation for Defending Attention Cache in Shared LLMs. International Journal of Innovative Science and Research Technology, 10 (9): 25sep1140. pp. 2241-2256. ISSN 2456-2165

Abstract

Large Language Models (LLMs) have, in a short period, become indispensable to research, business, and real-world applications, providing unequalled capabilities in natural language understanding and generation. Nonetheless, deploying such models in shared or multi-tenant environments poses serious security risks, especially the leakage of sensitive information through the attention key-value (KV) cache. Side-channel attacks on these caches can reveal prompts and embeddings, compromising both privacy and confidence in LLM services. To resolve this issue, this paper proposes an entropy-based obfuscation framework that injects controlled randomness into the cached states, rendering access patterns unpredictable without affecting accuracy. The framework dynamically modulates the level of perturbation using Shannon and Rényi entropy as guiding metrics in order to balance the trade-off between privacy and system performance. Experimental results from multi-tenant deployments show that entropy-based obfuscation effectively reduces prompt leakage at a relatively small computational cost. The study emphasizes the significance of entropy-based defenses, as this method offers a practical and scalable route to improving the resilience of LLMs, and it opens a new line of research aimed at protecting collaborative AI environments by incorporating information-theoretic metrics into model protection.
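To make the core idea concrete, the following is a minimal, hypothetical sketch of entropy-modulated cache perturbation, not the authors' implementation. It assumes (my assumption, not stated in the abstract) that the noise scale is driven by the Shannon entropy of the cached values' empirical distribution: low-entropy (more structured, hence more leak-prone) caches receive stronger Gaussian noise. All function names, the histogram binning, and the `sigma_max` parameter are illustrative choices.

```python
import numpy as np

def shannon_entropy(counts, eps=1e-12):
    """Shannon entropy (in bits) of an empirical distribution given by counts."""
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p + eps)))

def obfuscate_kv(kv, rng, sigma_max=0.05, n_bins=32):
    """Add entropy-modulated Gaussian noise to a cached KV tensor.

    Hypothetical rule: the closer the cache's empirical entropy is to the
    maximum (log2(n_bins)), the less noise is injected; structured,
    low-entropy caches are perturbed more strongly.
    """
    counts, _ = np.histogram(kv, bins=n_bins)
    h = shannon_entropy(counts.astype(float) + 1e-9)
    h_max = np.log2(n_bins)
    sigma = sigma_max * (1.0 - h / h_max)  # entropy gap drives the noise scale
    return kv + rng.normal(0.0, sigma, size=kv.shape)

rng = np.random.default_rng(0)
kv = rng.standard_normal((4, 8, 16))  # toy cache: (heads, tokens, head_dim)
kv_obf = obfuscate_kv(kv, rng)
```

A real deployment would apply such a perturbation per tenant and per layer, and would likely also use Rényi entropy (mentioned in the abstract) as an alternative guiding metric; this sketch only illustrates the Shannon-entropy case.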
