LLM Guard Quickstart - Scanning and Sanitizing LLM Prompts
Dec 3, 2025
llm, security, python, guardrails
LLM Guard is a library for scanning and sanitizing LLM prompts and outputs. Scanners can be used individually or combined via the scan_prompt / scan_output functions.
Individual Scanner Usage
Import a specific scanner (e.g., BanTopics, Bias). Each scan returns three values:
- sanitized_text - the cleaned/modified text
- is_valid - boolean indicating if the text passed
- risk_score - numerical risk assessment
Input scanners evaluate prompts; output scanners evaluate model responses.
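A minimal sketch of running one input scanner directly; the topics and threshold below are illustrative values, not library defaults:

```python
from llm_guard.input_scanners import BanTopics

# Flag prompts about specific topics; topics/threshold here are example values.
scanner = BanTopics(topics=["violence"], threshold=0.5)

prompt = "Explain how rate limiting works in a web API."
sanitized_text, is_valid, risk_score = scanner.scan(prompt)

print(is_valid, risk_score)  # e.g. True and a low score for a benign prompt
```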
Multiple Scanners
Scanners execute in the order they’re passed to the function:
- For prompts: use scan_prompt() with input scanners like Anonymize, Toxicity, TokenLimit, PromptInjection
- For outputs: use scan_output() with output scanners like Deanonymize, NoRefusal, Relevance, Sensitive (a combined sketch follows this list)
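A sketch of the combined flow, assuming default constructor arguments for each scanner; call_your_llm is a hypothetical stand-in for your own model call, and the shared Vault is explained in the next section:

```python
from llm_guard import scan_prompt, scan_output
from llm_guard.input_scanners import Anonymize, PromptInjection, TokenLimit, Toxicity
from llm_guard.output_scanners import Deanonymize, NoRefusal, Relevance, Sensitive
from llm_guard.vault import Vault

vault = Vault()

# Scanners run in list order.
input_scanners = [Anonymize(vault), Toxicity(), TokenLimit(), PromptInjection()]
output_scanners = [Deanonymize(vault), NoRefusal(), Relevance(), Sensitive()]

prompt = "Draft a reply to john.smith@example.com about the overdue invoice."
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)

model_output = call_your_llm(sanitized_prompt)  # hypothetical helper for your LLM call
sanitized_output, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, model_output
)
```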
Vault Feature
A Vault object is used for anonymization/deanonymization workflows - it stores mappings of sensitive data so you can anonymize on input and deanonymize on output.
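A sketch of that round trip with a shared Vault; the redaction placeholder shown in the model output is illustrative, not the exact format the library emits:

```python
from llm_guard.vault import Vault
from llm_guard.input_scanners import Anonymize
from llm_guard.output_scanners import Deanonymize

vault = Vault()

# Anonymize swaps detected entities for placeholders and records the originals in the vault.
sanitized_prompt, is_valid, risk_score = Anonymize(vault).scan(
    "Email the contract to john.smith@example.com by Friday."
)

# Deanonymize restores the original values from the same vault in the model's response.
model_output = "Done. The contract went to [REDACTED_EMAIL_ADDRESS_1]."  # illustrative placeholder
restored_output, is_valid, risk_score = Deanonymize(vault).scan(sanitized_prompt, model_output)
```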
Performance Tip
Set fail_fast=True to stop scanning after the first invalid result, reducing latency.
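For example, with the same scanner list as above:

```python
# Stop at the first scanner that marks the prompt invalid.
sanitized_prompt, results_valid, results_score = scan_prompt(
    input_scanners, prompt, fail_fast=True
)
```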
Validation Pattern
Check results with:
any(not result for result in results_valid.values())
This detects if any scanner flagged the prompt/output as invalid.
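Put together, a typical gate looks roughly like this:

```python
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)

if any(not result for result in results_valid.values()):
    # At least one scanner rejected the prompt; results_score shows per-scanner risk.
    raise ValueError(f"Prompt failed validation: {results_score}")

# Otherwise, forward sanitized_prompt to the model.
```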