the LLM vulnerability scanner
🐢 Open-Source Evaluation & Testing library for LLM Agents
The Security Toolkit for LLM Interactions
A.I.G (AI-Infra-Guard) is a comprehensive, intelligent, and easy-to-use AI Red Teaming platform developed by Tencent Zhuque Lab.
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
An easy-to-use Python framework to generate adversarial jailbreak prompts.
A security scanner for your LLM agentic workflows
Papers and resources related to the security and privacy of LLMs 🤖
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
This repository provides a benchmark for prompt injection attacks and defenses
Experimental tools to backdoor large language models by rewriting their system prompts at a raw parameter level. This potentially enables offline remote code execution without running any actual code on the victim's machine, or can be used to thwart LLM-based fraud/moderation systems.
The fastest Trust Layer for AI Agents
Framework for testing vulnerabilities of large language models (LLMs).
Whistleblower is an offensive security tool for testing for system prompt leakage and capability discovery in AI applications exposed through an API. Built for AI engineers, security researchers, and anyone who wants to know what's going on inside the LLM-based apps they use daily.
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
Framework for LLM evaluation, guardrails and security
An Execution Isolation Architecture for LLM-Based Agentic Systems
A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.
The Open Source Firewall for LLMs. A self-hosted gateway to secure and control AI applications with powerful guardrails.
Code scanner to check for issues in prompts and LLM calls