← back

Langfuse in the Context of LLM Guard

Dec 3, 2025

llmsecurityobservabilitypython

Langfuse is an open-source LLM observability and analytics platform, while LLM Guard is a security toolkit for protecting LLM applications. They complement each other well in production LLM systems.

What Each Does

LLM Guard provides security scanners for:

Langfuse provides:

How They Work Together

When you integrate both, Langfuse can observe and log what LLM Guard is doing:

  1. Trace security events – Log when LLM Guard blocks or modifies a request, giving you visibility into attack patterns or false positives.
  2. Monitor performance impact – Track the latency overhead LLM Guard adds to your pipeline.
  3. Analyze blocked content – Use Langfuse’s analytics to understand what types of inputs are being flagged and why.
  4. Debug false positives – When LLM Guard incorrectly blocks legitimate requests, Langfuse traces help you identify and fix the issue.

Example Integration Pattern

from langfuse import Langfuse
from llm_guard import scan_prompt, scan_output

langfuse = Langfuse()

# Create a trace for the request
trace = langfuse.trace(name="llm-request")

# Scan input with LLM Guard
sanitized_prompt, results, is_valid = scan_prompt(scanners, prompt)
trace.span(name="input-scan", metadata={"valid": is_valid, "results": results})

# If valid, call LLM and scan output
# Log everything to Langfuse for observability

This combination gives you both security (LLM Guard) and visibility (Langfuse) into your LLM application.