Know the Threats, save your LLM: The ultimate OWASP LLM Top 10 Breakdown

Ready to elevate LLM security but don’t know where to start? This article is the ultimate OWASP top 10 breakdown. A great way to build a foundation for secure LLM usage in 2026!
LLM applications have rapidly become a central part of our digital world: from user interactions to internal operations, LLMs fit the use case. Along with these LLM use cases come various cybersecurity issues specific to this type of AI technology. And this is where OWASP comes in with their top 10. Hot in 2025, still a threat in 2026!
The OWASP top 10 for LLM applications in 3 simple points:
- Collection of 10 threats specific to LLM applications
- Community driven
- For each threat: description, examples, prevention and mitigation, references
The 2025 version includes:
- Prompt injection
- Sensitive information disclosure
- Supply chain
- Data and model poisoning
- Improper output handling
- Excessive Agency
- System Prompt leakage
- Vector and embedding weaknesses
- Misinformation
- Unbounded consumption
Already experiencing cold sweat after reading this list. Not to worry: a lot of these threats are just “classic” security problems in a new context. By the end of this article, you will be no stranger to the main risks that might threaten your next LLM application, whether they’re analogous to “classic” security threats or stem from the unique and non-deterministic way LLMs operate.
Let’s start easy with number 9, Misinformation. Ever asked your favorite chatbot something, and thought by yourself: “Well, this can’t be right”? Great critical thinking on your part! When a user becomes too reliant on their chatbot and doesn’t verify its accuracy, misinformation becomes especially dangerous. It is thus essential to empower users by raising awareness through training, UI design and risk communication. Additionally, an LLM becomes less prone to hallucinations and biases by incorporating a Retrieval-Augmented Generation (RAG) system(1) and finetuning.

That doesn’t sound too hard, add RAG to your LLM system and all your problems are gone, right? Wrong! RAG comes with its own weaknesses, namely Vector and embedding weaknesses (#8). RAG systems retrieve data from external knowledge bases, which are stored in vector databases. So instead of being human-readable, the data is represented mathematically and thus easier to understand for LLM models. However, the way vectors and embeddings are generated, stored, or retrieved can be exploited by malicious actions to allow unauthorized access, data poisoning and embedding inversion. This is mitigated by enforcing least privilege for vector databases.
Threat #8 (Vector and embedding weaknesses) is quite challenging to understand, so let’s cleanse our palette with yet another easy-to-understand threat: Unbounded Computation (#10). This is an umbrella term for attacks that abuse the absence of constraints on LLM usage and output. Whether this means resource depletion (Denial of Service or DoS), budget overruns, degraded service or model replication, financial losses always follow as a consequence. Similar to “classic” security DoS, rate limiting does the trick as well as input validation.
Let’s jump back to the top to our #1 threat: Prompt Injection. Prompt injection applies when an LLM prompt includes any form of external input. For the security engineers reading this: yes it does sound a lot like SQL injection. So you won’t be surprised that mitigating prompt injection consists of constraining behavior, filtering input and output and validating outputs.

Don’t treat this as a silver bullet though: effective prevention requires ongoing updates to a model’s training and safety mechanisms. Prompt injection also serves as a basis for many of the other threats. For an LLM, accepting different kinds of inputs such as videos, images and documents (so called multi-model prompts) adds to the complexity.
Sensitive Information Disclosure (#2) applies when the LLM has access to sensitive information, such as personal identifiable information (PII), confidential business data and credentials, and causes the LLM to (inadvertently) output sensitive information. There are three sources an LLM can gain this sensitive information, from:
- Earlier user input which gets included in training
- A Retrieval-Augmented Generation (RAG) system
- API and tool calls when dealing with an LLM agent
As with “classic” security, access control is key, as is sanitization of user input and user education.
Supply chain (#3) applies when a third-party model or finetuning is used. Again, mitigating these risks is no different as done in software supply chain security: understanding and knowing what you are using via a Software Bill of Material (SBOM), verifying sources, monitoring for 0-days, updating & patch policy.
Data and Model poisoning (#4) happens when training data is modified or augmented. By introducing bias or questionable ethical behavior, the actor provides backdoors and includes harmful code or malware (malicious pickling) . Not something you want to happen! This can be mitigated by infrastructure controls to prevent tampering, tracking data origin and detecting signs of poisoning, and testing robustness with red-teaming.
Another LLM threat that is very similar to “classic” security problems is Improper Output handling (#5) where the output of an LLM is used by a downstream system or application. This may lead to “classic” security vulnerabilities such as Cross-site scripting (XSS), Cross-Side Request Forgery (CSRF), SSRF Server Side Request Forgery (SSRF), privilege escalation and Remote Code Execution (RCE). It is thus imperative to always treat model output as external user input by using techniques such as input validation, sanitization and escaping. Exploitation attempts can be detected by implementing robust logging and monitoring systems.

Now let’s take another look at a threat that arises from the unique agentic nature of LLMs, Excessive Agency (#6). Agentic LLMs take actions such as call functions. This becomes a concern when these calls enable damaging actions based on unexpected input or prior LLM output. Guard yourself against this threat by enforcing least privilege and authorization and by placing a human expert in the loop.
Finally, System prompt leakage (#7) applies when the system prompt contains sensitive information (credentials, rules/policy/filter instructions,…). This is especially dangerous if the LLM is used to make sensitive decisions! Mitigate this by separating sensitive information from system prompts, don’t rely on an LLM for rule enforcement and implement external guardrails.
Conclusions:
The combination of classic security problems in a new context and attributes unique to LLMs such as non-deterministic output/behavior, agentic nature, (over)reliance on results and severity of possible consequences makes LLM security hard but essential to address!
Of course, budget and resources are not infinite so threat modeling, risk assessment and risk management become essential in prioritizing risks. Also don’t let a false sense of security cause you to rest on your laurels, include testing in your AI lifecycle to stay ahead. Follow LLASER to gain more insight in how to practice efficient and effective LLM security. Better prevent than cure!
For this article, we would like to thank Prof. Koen Yskout (KU Leuven), expert in security and resilience by design (for systems with and without AI), for sharing his valuable insights during the LLASER kickoff event in Mechelen on 13 November 2025.
Other sources:
OWASP Top 10 for LLM Applications 2025, OWASP Top 10 for LLM Applications 2025, consulted on 20/11/25
Glossary:
(1)RAG system: accesses external knowledge sources such as company information from databases and company documents.



