Langfuse in the Context of LLM Guard
Dec 3, 2025
llmsecurityobservabilitypython
Langfuse is an open-source LLM observability and analytics platform, while LLM Guard is a security toolkit for protecting LLM applications. They complement each other well in production LLM systems.
What Each Does
LLM Guard provides security scanners for:
- Prompt injection detection
- PII/sensitive data detection and anonymization
- Toxic/harmful content filtering
- Jailbreak attempt detection
- Output validation
Langfuse provides:
- Tracing and logging of LLM calls
- Cost and latency monitoring
- Prompt versioning and management
- Evaluation and scoring
- User session tracking
How They Work Together
When you integrate both, Langfuse can observe and log what LLM Guard is doing:
- Trace security events – Log when LLM Guard blocks or modifies a request, giving you visibility into attack patterns or false positives.
- Monitor performance impact – Track the latency overhead LLM Guard adds to your pipeline.
- Analyze blocked content – Use Langfuse’s analytics to understand what types of inputs are being flagged and why.
- Debug false positives – When LLM Guard incorrectly blocks legitimate requests, Langfuse traces help you identify and fix the issue.
Example Integration Pattern
from langfuse import Langfuse
from llm_guard import scan_prompt, scan_output
langfuse = Langfuse()
# Create a trace for the request
trace = langfuse.trace(name="llm-request")
# Scan input with LLM Guard
sanitized_prompt, results, is_valid = scan_prompt(scanners, prompt)
trace.span(name="input-scan", metadata={"valid": is_valid, "results": results})
# If valid, call LLM and scan output
# Log everything to Langfuse for observability
This combination gives you both security (LLM Guard) and visibility (Langfuse) into your LLM application.