← back

LLM Guard Playground - Test LLM Security Scanners in Your Browser

Dec 7, 2025

llmsecuritydemoguardrailshuggingface

LLM Guard Playground is an interactive demo hosted on Hugging Face Spaces that lets you test LLM security scanners directly in your browser. No installation, no API keys—just paste a prompt and see how each scanner evaluates it.

What It Does

The playground provides a web interface to test LLM Guard’s input and output scanners. Configure controls in the sidebar, submit a prompt, and get:

If a scanner detects a risk, the prompt may be redacted (for PII) or blocked entirely (for prompt injections).

Available Scanners

Input Scanners (15): Anonymize, BanCode, BanCompetitors, BanSubstrings, BanTopics, Code, Gibberish, InvisibleText, Language, PromptInjection, Regex, Secrets, Sentiment, TokenLimit, Toxicity

Output Scanners (21): BanCode, BanCompetitors, BanSubstrings, BanTopics, Bias, Code, Deanonymize, JSON, Language, LanguageSame, MaliciousURLs, NoRefusal, ReadingTime, FactualConsistency, Gibberish, Regex, Relevance, Sensitive, Sentiment, Toxicity, URLReachability

Testing Ideas

Try these prompt types to see scanners in action:

Built With Streamlit

The playground is a simple Python app using Streamlit. You can run it locally by cloning the repo and installing dependencies:

git clone https://huggingface.co/spaces/protectai/llm-guard-playground
cd llm-guard-playground
pip install -r requirements.txt
streamlit run app.py

Why Use the Playground

See also: LLM Guard Quickstart for code-level integration.


Sources: