← back

LLM Guard Quickstart - Scanning and Sanitizing LLM Prompts

Dec 3, 2025

llmsecuritypythonguardrails

LLM Guard is a library for scanning and sanitizing LLM prompts and outputs. Scanners can be used individually or combined using scan_prompt / scan_output functions.

Individual Scanner Usage

Import a specific scanner (e.g., BanTopics, Bias). Each scan returns three values:

Input scanners evaluate prompts; output scanners evaluate model responses.

Multiple Scanners

Scanners execute in the order they’re passed to the function:

Vault Feature

A Vault object is used for anonymization/deanonymization workflows - it stores mappings of sensitive data so you can anonymize on input and deanonymize on output.

Performance Tip

Set fail_fast=True to stop scanning after the first invalid result, reducing latency.

Validation Pattern

Check results with:

any(not result for result in results_valid.values())

This detects if any scanner flagged the prompt/output as invalid.


LLM Guard Quickstart Documentation