← back

OWASP AI Testing Guide: Security Testing for AI Systems

Dec 9, 2025

owaspai-securitytestingadversarial-mlllm

The OWASP AI Testing Guide is an open-source initiative providing structured methodologies for testing AI systems. Because AI models learn, adapt, and fail in non-deterministic ways, they introduce risks that conventional security testing can’t address.

Why AI-Specific Testing?

Traditional software testing assumes deterministic behavior. AI systems don’t work that way:

Without specialized testing, these vulnerabilities remain invisible.

The Four Pillars

The guide uses a threat-driven methodology aligned with Google’s SAIF framework, decomposing AI systems into four layers:

1. Model Testing

Testing the “brain” of the system:

2. Infrastructure Testing

Securing the compute and storage pipeline:

3. Data Testing

Assuring integrity and privacy of training data:

4. Application Testing

Application-layer vulnerabilities:

Tools

For adversarial testing, the guide recommends:

The AI Testing Guide complements:

Current Status

The project is in active development (Phase 1 as of June 2025) with a public draft on GitHub. Led by Matteo Meucci and Marco Morana, with 23 contributors and growing.

Sources