Guardrails for LLMs: detect and block hallucinated tool calls to improve safety and reliability.
Topics: middleware, machine-learning, ai, language-models, ai-safety, prompt-engineering, llms, toolformer, hallucination-detection, tool-calling, agent-safety

Language: Go · Updated Jul 18, 2025
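
Below is a minimal, illustrative sketch in Go of the general idea behind blocking hallucinated tool calls: a model-proposed call is allowed through only if its name matches a tool the application actually registered and its arguments validate. The type and function names here (`ToolCall`, `Guardrail`, `Register`, `Check`) are assumptions for illustration, not this project's actual API.

```go
// Illustrative sketch (not this project's API): reject tool calls whose name
// is not registered or whose arguments fail basic validation.
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// ToolCall is a hypothetical representation of a model-proposed tool invocation.
type ToolCall struct {
	Name string          `json:"name"`
	Args json.RawMessage `json:"args"`
}

// Guardrail holds the set of registered tools, each with an argument validator.
type Guardrail struct {
	validators map[string]func(json.RawMessage) error
}

func NewGuardrail() *Guardrail {
	return &Guardrail{validators: make(map[string]func(json.RawMessage) error)}
}

// Register adds a known tool and its argument validator.
func (g *Guardrail) Register(name string, validate func(json.RawMessage) error) {
	g.validators[name] = validate
}

// Check blocks calls to tools the model hallucinated (unregistered names)
// and calls whose arguments do not validate.
func (g *Guardrail) Check(call ToolCall) error {
	validate, ok := g.validators[call.Name]
	if !ok {
		return fmt.Errorf("hallucinated tool call: %q is not a registered tool", call.Name)
	}
	if err := validate(call.Args); err != nil {
		return fmt.Errorf("invalid arguments for %q: %w", call.Name, err)
	}
	return nil
}

func main() {
	g := NewGuardrail()
	g.Register("get_weather", func(raw json.RawMessage) error {
		var args struct {
			City string `json:"city"`
		}
		if err := json.Unmarshal(raw, &args); err != nil {
			return err
		}
		if args.City == "" {
			return errors.New("missing required field: city")
		}
		return nil
	})

	// A call the model invented: the tool was never registered, so it is blocked.
	bad := ToolCall{Name: "delete_database", Args: json.RawMessage(`{}`)}
	fmt.Println(g.Check(bad))

	// A registered tool with valid arguments passes the check.
	good := ToolCall{Name: "get_weather", Args: json.RawMessage(`{"city":"Oslo"}`)}
	fmt.Println(g.Check(good))
}
```

In practice this check would sit as middleware between the model's output and the tool executor, so a hallucinated call is surfaced as an error (or fed back to the model) rather than executed.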