Abstract
Prompt injection has emerged as a critical security threat for Large Language Models (LLMs), exploiting their inability to reliably separate instructions from data within application contexts. This paper provides a structured review of current attack vectors, including direct and indirect prompt injection, and highlights the limitations of existing defenses, with particular attention to the fragility of Known-Answer Detection (KAD) against adaptive attacks such as DataFlip. To address these gaps, we propose a novel hybrid, multi-layered detection framework that operates in real time. The architecture integrates heuristic pre-filtering for the rapid elimination of obvious threats, semantic analysis using fine-tuned transformer embeddings to detect obfuscated prompts, and behavioral pattern recognition to capture subtle manipulations that evade the earlier layers. Our hybrid model achieved an accuracy of 0.974, precision of 1.000, recall of 0.950, and an F1 score of 0.974, indicating strong and balanced detection performance. Unlike prior siloed defenses, the framework provides coverage across the input, semantic, and behavioral dimensions. This layered approach offers a resilient and practical defense, advancing the state of security for LLM-integrated applications.
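The layered pipeline described above can be pictured as a short-circuiting cascade: each input passes through progressively more expensive checks until one fires. The following Python sketch is illustrative only; the class and parameter names (`LayeredDetector`, `semantic_model`, `behavior_model`, the regex patterns, and the thresholds) are assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch of a three-layer cascade: heuristic pre-filter,
# semantic scoring, then behavioral analysis. All names and thresholds
# are hypothetical, not the framework's actual implementation.
import re
from dataclasses import dataclass


@dataclass
class Verdict:
    malicious: bool
    layer: str    # which layer produced the verdict
    score: float  # risk score in [0, 1]


class LayeredDetector:
    """Cascade that short-circuits: cheap checks first, costly ones last."""

    # Layer 1: regex heuristics for overt injection phrasing (example patterns).
    PATTERNS = [
        re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
        re.compile(r"disregard .* and instead", re.I),
        re.compile(r"reveal (your )?system prompt", re.I),
    ]

    def __init__(self, semantic_model, behavior_model,
                 sem_thresh: float = 0.8, beh_thresh: float = 0.7):
        # semantic_model / behavior_model: callables mapping a prompt string
        # to a risk score in [0, 1], e.g. a fine-tuned transformer classifier
        # wrapped in a scoring function.
        self.semantic_model = semantic_model
        self.behavior_model = behavior_model
        self.sem_thresh = sem_thresh
        self.beh_thresh = beh_thresh

    def classify(self, prompt: str) -> Verdict:
        # Layer 1: heuristic pre-filter rapidly eliminates obvious threats.
        if any(p.search(prompt) for p in self.PATTERNS):
            return Verdict(True, "heuristic", 1.0)
        # Layer 2: semantic analysis catches obfuscated or paraphrased attacks.
        sem = self.semantic_model(prompt)
        if sem >= self.sem_thresh:
            return Verdict(True, "semantic", sem)
        # Layer 3: behavioral pattern recognition for subtle manipulations
        # that slip past the earlier layers.
        beh = self.behavior_model(prompt)
        if beh >= self.beh_thresh:
            return Verdict(True, "behavioral", beh)
        return Verdict(False, "none", max(sem, beh))
```

The short-circuiting order reflects the cost argument in the abstract: the heuristic layer is effectively free, so it absorbs the bulk of overt attacks before any model inference runs, while the transformer-based layers are reserved for inputs that survive it.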