Breaker AI - Security check for your LLM prompts
-
Updated
Jul 9, 2025 - TypeScript
Breaker AI - Security check for your LLM prompts
🚀 Unofficial Node.js SDK for Prompt Security's Protection API.
AI-powered ethical decision-making using multi-agent tools
A collection of dockerized hacking challenges that focus on breaking out of AI/LLM security mechanisms.
🎯 Generate AI security test conversations with this experimental TypeScript library for prompt injection attacks, designed for security professionals.
A Go-based gRPC service that evaluates AI model prompts and responses using Google Cloud's Model Armor service for content sanitization
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."