At MakoLab, we are keenly aware that the development of artificial intelligence (AI) and large language models (LLMs) has made security a central concern for developers and organisations using chatbots.
This is why we’ve prepared a checklist setting out the most important security threats to be alert to when working on LLM-based AI chatbots. It is intended to serve as a developers’ guide to identifying security gaps and implementing effective mitigation strategies. As these technologies become ever more integrated into everyday operations, it is crucial to address threats that can lead to data breaches, unauthorised access and compromised systems.
If you’re interested in AI chatbot security and want to discover what our experts and special guests have to say in an interactive panel discussion, then sign up for our latest live webinar, “Mastering AI chatbot security”.
We recommend using this checklist as a tool for regularly assessing security and implementing appropriate practices. Each point describes a threat and suggests strategies for mitigating it, making for a more informed approach to application security.
Prompt injection
This is a vulnerability in which user inputs are deliberately crafted to alter an LLM’s behaviour in unintended ways. Attackers can manipulate the model’s responses, bypass safety mechanisms and potentially execute harmful commands.
Mitigation strategies
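The core of the OWASP guidance is to treat all user input as untrusted: constrain and validate it, keep system instructions strictly separated from user text, and enforce least privilege on any actions the model can trigger. The sketch below is a minimal illustration of the first two ideas, assuming a chat-style API with message roles; the pattern list and function names are our own hypothetical examples, and pattern matching alone is easy to evade, so it should only ever be one layer of a defence in depth.

```python
import re

# Hypothetical deny-list of phrases that often signal injection attempts.
# A real deployment would combine this with model-side guardrails and
# monitoring; a static list like this is trivially incomplete.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .* rules",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Keep system instructions and user text in separate message roles,
    never concatenated into a single prompt string."""
    if looks_like_injection(user_input):
        raise ValueError("Input rejected: possible prompt injection")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
```

Keeping the system prompt in its own role, rather than splicing user text into it, denies the attacker a direct path into the instructions the model is meant to follow.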
Insecure output handling
Insufficient checks on LLM outputs can lead to unintended consequences, such as the execution of dangerous commands, exposure of sensitive information or cross-site scripting (XSS) attacks.
Mitigation strategies
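The guiding principle is to treat model output like any other untrusted input: encode it before rendering it, validate it before passing it to downstream interpreters and never execute it directly. A minimal sketch, assuming the response is rendered in a web page, of escaping model output so that an injected script tag cannot trigger XSS:

```python
import html

def render_response(model_output: str) -> str:
    """Escape the model's output before embedding it in HTML, so any
    injected <script> tags are rendered as inert text rather than
    executed in the user's browser."""
    return html.escape(model_output)

# Example: an attacker coaxes the model into emitting a script tag.
malicious = 'Here you go: <script>stealCookies()</script>'
print(render_response(malicious))
# -> Here you go: &lt;script&gt;stealCookies()&lt;/script&gt;
```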
Training data poisoning
Malicious actors can introduce harmful data during the training phase, causing biased or malicious behaviour in LLMs. Such manipulation undermines the integrity of the model’s outputs.
Mitigation strategies
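Typical defences include verifying the provenance of training data, sanitising records before they enter the pipeline and monitoring for anomalous samples. The sketch below shows a simplified pre-ingestion filter; the record format, source identifiers and blocked terms are hypothetical examples, and a production pipeline would add statistical anomaly detection on top of rules like these.

```python
TRUSTED_SOURCES = {"internal-kb", "curated-dataset-v2"}  # hypothetical source IDs
BLOCKED_TERMS = ["<script>", "rm -rf"]  # naive known-bad markers

def is_clean(record: dict) -> bool:
    """Accept a training record only if it comes from a vetted source
    and contains no obviously malicious content."""
    if record.get("source") not in TRUSTED_SOURCES:
        return False
    text = record.get("text", "")
    return not any(term in text.lower() for term in BLOCKED_TERMS)

def filter_training_data(records: list[dict]) -> list[dict]:
    """Drop untrusted or suspicious records before training begins."""
    return [r for r in records if is_clean(r)]
```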
Model denial of service
Attackers can overwhelm an LLM by flooding it with excessive or complex requests, triggering service outages and resource depletion.
Mitigation strategies
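Standard countermeasures are per-user rate limiting, caps on input size and complexity, and resource quotas at the infrastructure level. A minimal sketch of a sliding-window rate limiter combined with an input-length cap; the limits shown are arbitrary examples to be tuned per deployment:

```python
import time
from collections import defaultdict, deque

MAX_REQUESTS_PER_MINUTE = 20   # example quota, tune per deployment
MAX_INPUT_CHARS = 4_000        # example cap on prompt size

_request_log: dict[str, deque] = defaultdict(deque)

def allow_request(user_id: str, user_input: str) -> bool:
    """Reject over-long prompts and enforce a per-user sliding-window
    rate limit before the LLM is ever invoked."""
    if len(user_input) > MAX_INPUT_CHARS:
        return False
    now = time.monotonic()
    window = _request_log[user_id]
    # Drop timestamps older than 60 seconds.
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    return True
```

Checking these limits before the model is called means a flood of requests is absorbed by cheap bookkeeping rather than expensive inference.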
Sensitive information disclosure
LLMs may inadvertently disclose sensitive information, including personally identifiable information (PII), proprietary data or internal system instructions.
Mitigation strategies
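Common countermeasures include scrubbing PII from training data, restricting what the model can see at inference time and filtering responses before they reach the user. The sketch below redacts email addresses and card-like numbers from model output; the regular expressions are deliberately simplified examples, and real PII detection should rely on a dedicated library or service.

```python
import re

# Simplified patterns for demonstration only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact_pii(model_output: str) -> str:
    """Mask email addresses and card-like digit sequences before the
    response is sent back to the user."""
    redacted = EMAIL_RE.sub("[REDACTED EMAIL]", model_output)
    redacted = CARD_RE.sub("[REDACTED NUMBER]", redacted)
    return redacted

print(redact_pii("Contact jan.kowalski@example.com, card 4111 1111 1111 1111"))
# -> Contact [REDACTED EMAIL], card [REDACTED NUMBER]
```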
Addressing the security risks associated with AI chatbots is critical to maintaining user trust, protecting sensitive data and ensuring the overall integrity of chatbot applications. Adhering to this checklist will enable organisations to be proactive in mitigating these risks and enhancing the security posture of their AI systems. Ongoing education and adaptation to evolving threats will be vital to safeguarding the future of AI-driven interactions.
This text is based on:
OWASP Foundation, OWASP Top 10 for Large Language Model Applications
Translated from the Polish by Caryl Swift