A full lifecycle AI application deployment dramatically builds on the dynamic nature of modern applications, requiring large-scale data ingestion, model training, inference, and active operations. It demands adaptive scaling, orchestrated infrastructure ecosystems, and robust security to manage evolving models, large datasets, and unique threat landscapes.
LLM01 | Prompt Injection |
Prompt injection vulnerabilities occur when user inputs manipulate an LLM’s behavior, bypassing safety measures or altering outputs in unintended ways, including through imperceptible inputs, posing risks such as guideline violations, unauthorized access, and harmful outputs, with mitigation requiring robust safeguards, ongoing updates, and training improvements. | |
LLM02 | Sensitive Information Disclosure |
LLMs risk exposing sensitive data like personally identifiable information (PII), financial records, or proprietary algorithms through outputs, requiring robust data sanitization, user opt-out options, and system prompt restrictions to mitigate privacy violations, intellectual property breaches, and unauthorized access—though these measures may be bypassed through prompt injection or similar attacks. | |
LLM03 | Supply Chain |
LLM supply chains face vulnerabilities from third-party models, tampering, and poisoning attacks, risking biased outputs, security breaches, and system failures. Emerging fine-tuning methods and on-device LLMs expand attack surfaces, emphasizing the need to address these risks alongside traditional data and model poisoning concerns. | |
LLM04 | Data & Model Poisoning |
Data poisoning manipulates training, fine-tuning, or embedding data to introduce vulnerabilities, backdoors, or biases, compromising model performance, security, and ethics. Risks include biased content, harmful outputs, or sleeper-agent backdoors. Open-source models also face threats like malicious code, emphasizing the need for vigilance in data integrity and external source verification. | |
LLM05 | Improper Output Handling |
Improper Output Handling occurs when LLM outputs are insufficiently validated or sanitized before downstream use, leading to risks like cross-site scripting (XSS), cross-side request forgery (XSRF), or privilege escalation. Contributing factors include indirect prompt injection, excessive LLM privileges, unvalidated inputs, improper output encoding, and insufficient monitoring, logging, or rate limiting in applications. | |
LLM06 | Excessive Agency |
Excessive agency occurs when LLM-based systems, granted autonomy to invoke functions or extensions, perform damaging actions due to ambiguous, manipulated, or malicious outputs. Triggers include hallucinations, prompt injections, or compromised agents. Root causes include excessive functionality, permissions, or autonomy, potentially impacting confidentiality, integrity, and availability across connected systems. | |
LLM07 | System Prompt Leakage |
System prompt leakage occurs when sensitive information in LLM system prompts, like credentials or permissions, is unintentionally exposed, enabling attacks. The core risk lies in storing sensitive data improperly and relying on LLMs for security functions instead of robust session management and authorization checks. | |
LLM08 | Vector & Embedding Weaknesses |
Vectors and embeddings vulnerabilities in RAG-based LLM systems can be exploited to inject harmful content, manipulate outputs, or access sensitive information due to weaknesses in their generation, storage, or retrieval processes, compromising security and performance. | |
LLM09 | Misinformation |
Misinformation from LLMs arises from hallucinations, biases, and incomplete data, leading to false but credible-sounding outputs. This poses risks like security breaches and reputational damage, worsened by overreliance on unverified LLM content in critical decisions or processes. | |
LLM10 | Unbounded Consumption |
Unbounded consumption in LLMs involves users conducting excessive, uncontrolled inferences, leading to risks like denial of service, economic losses, model theft, and service degradation. The high computational demands of LLMs make them vulnerable to resource exploitation and unauthorized usage, potentially disrupting service and depleting financial resources. |
ADC01 | Weak DNS Practices |
Poor DNS configurations, including the absence of DNSSEC, improper time-to-live (TTL) settings, or insecure dynamic DNS updates, increase the risk of DNS compromise and degradation of application performance. Ensuring robust DNS practices is essential to maintaining application integrity, availability, and performance. | |
ADC02 | Lack of Fault Tolerance & Resiliance |
Failing to incorporate sufficient fault tolerance can lead to cascading failures during high-stress conditions, resulting in service outages, degraded performance, and inability to maintain operations under load. This is heightened in AI application infrastructure due to unpredictable workload requirements across requests, users, and applications. Proper resilience planning helps avoid these scenarios. | |
ADC03 | Incomplete Observability |
Inadequate logging, monitoring, and alerting mechanisms delay the detection of configuration issues, performance bottlenecks, or breaches. Without observability, identifying and resolving issues proactively becomes difficult, leading to prolonged disruptions. Within the context of AI, the ability to understand unique workload requirements through observability can provide insight into addressing other top 10 challenges, particularly in multicloud deployments. | |
ADC04 | Insufficient Traffic Controls |
Failure to implement effective rate-limiting, throttling, and caching mechanisms can result in overloading backend services or DoS attacks. Mismanaged traffic controls can also lead to the unintentional exposure of sensitive data, including user credentials, because of ineffective cache management. For AI, the ability to control traffic is even more critical given the dynamic nature of traffic patterns and resource needs. Explore More | |
ADC05 | Unoptimized Traffic Steering |
Unoptimized traffic steering and resource allocation prevent the dynamic scaling of infrastructure in response to real-time demand. This is particularly true for AI applications processing multi-modal inputs, as processing needs for media are different than that of text-based data. As a result, applications may experience inconsistent performance and inefficiency during peak usage, impacting the overall user experience. Explore More | |
ADC06 | Inability to Handle Latency |
Applications suffering from latency bottlenecks or limited throughput are prone to delays in data processing pipelines. With AI, latency in the data processing pipeline can significantly degrade training time as well as performance of inferencing. This inability to efficiently manage large volumes of data can hinder performance, slow down insights, and disrupt critical applications that require immediate responses. Explore More | |
ADC07 | Incompatible Delivery Policies |
Rigid or incompatible delivery policies across multicloud, especially geo-distributed systems, result in latency issues and inconsistent service availability. Additionally, failure to adapt to local regulations and data sovereignty requirements can lead to compliance violations, impacting service reliability. | |
ADC08 | Lack of Security & Regulatory Compliance |
Inadequate support for regulatory compliance, including the failure to use FIPS-compliant devices or the inability to process encrypted traffic properly, can expose applications to vulnerabilities, increase operational costs, and even delay application deployment. The failure to adhere to data sovereignty regulations or properly validate AI responses may result in legal ramifications and breaches of trust. Explore More | |
ADC09 | Bespoke Applications Requirements |
Limited programmability and customization within the application delivery infrastructure can prevent it from meeting unique, application-specific needs. Systems lacking in flexibility may struggle to support complex, tailored requirements, leading to inefficiencies in application delivery. The speed with which AI applications and inferencing capabilities are evolving can quickly outpace the ability to deliver and secure AI application traffic. Explore More | |
ADC10 | Poor Resource Utilitzation |
Inefficient resource utilization due to mismatched distribution algorithms or inadequate health check mechanisms can result in wasted compute power and operational overhead. Systems that fail to optimize resource usage may experience higher costs, reduced performance, and overburdened infrastructure. Resource optimization for GPUs and CPUs is critical to ensuring efficiency of AI factories and scaling inferencing. Explore More |