Huntr: AI/ML Bug Bounty Platform for Open Source Security
Dec 9, 2025
security, ai/ml, bug-bounty, open-source, vulnerability-disclosure
Huntr is the world’s first bug bounty platform dedicated exclusively to AI/ML security. Backed by a community of 17,000+ security researchers, it focuses on finding vulnerabilities in open-source AI/ML tools, frameworks, model file formats, and foundation models.
History
Huntr was founded in 2020 by 418Sec’s Adam Nygate as huntr.dev, and by 2022 it had become the world’s 5th largest CVE Numbering Authority (CNA). The platform was acquired by Protect AI in August 2023.
How It Works
- Submit: Researchers find vulnerabilities and submit via huntr’s secure form
- Validate: Maintainers have 31 days to respond. Huntr reaches out every 7 days
- Reward: Valid reports earn bounties (up to $50,000 for critical findings)
- Publish: Reports go public after 90 days (extensions available)
- CVE: Valid open-source vulnerabilities are assigned CVE numbers
Bounties are paid monthly via Stripe Connect.
Two Bug Bounty Programs
Open Source Vulnerabilities (OSV)
- Security flaws in open-source AI/ML apps and libraries
- 125+ ML supply chain repositories in scope
- Tools such as MLflow, Ray, Triton Inference Server, and LocalAI
Model File Vulnerabilities (MFV)
- Vulnerabilities in ML model file formats and loading processes (see the sketch after this list)
- Code execution at model load time via manipulated headers/metadata
- Embedded backdoors that alter inference under specific conditions
- Run in partnership with Hugging Face
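To make the load-time risk concrete, here is a minimal, hypothetical Python sketch (not taken from any huntr report) of why pickle-based model files are dangerous: pickle lets an object define how it is reconstructed, so simply loading the file runs attacker-chosen code. The payload below only echoes a string, but it could be any command.

```python
import os
import pickle

# Hypothetical malicious "model": __reduce__ tells pickle how to rebuild
# the object, and pickle calls the returned function during deserialization.
class MaliciousModel:
    def __reduce__(self):
        return (os.system, ("echo code ran at model load time",))

# Attacker ships this file as if it were ordinary model weights.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# Victim merely "loads the model" -- the shell command runs immediately,
# before any inference ever happens.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```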
The AI Security Problem
The number of AI-related zero-days has tripled since November 2023. When huntr launched its AI/ML program, it received about 3 vulnerability reports per week; that has since grown to 15+ per day.
From Protect AI’s monthly vulnerability reports:
- February 2024: 8 vulnerabilities (1 critical, 7 high severity)
- April 2024: 48 vulnerabilities (220% increase from November 2023)
- May 2024: 32 vulnerabilities
Notable Findings
- CVE-2024-22476 (CVSS 10.0): Improper input validation in Intel Neural Compressor allowing remote privilege escalation
- CVE-2024-7474/7475 (CVSS 9.1): IDOR and improper access control in Lunary
- CVE-2024-5982 (CVSS 9.1): Path traversal in ChuanhuChatGPT leading to RCE
- ShadowRay campaign: active exploitation of Ray (AI infrastructure) vulnerabilities first disclosed through huntr
Remote Code Execution (RCE) is the most prevalent threat—these vulnerable tools are downloaded thousands of times monthly to build enterprise AI systems.
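As an illustration of the path traversal class behind findings like CVE-2024-5982, here is a hypothetical Flask-style file endpoint (not the actual ChuanhuChatGPT code): the commented-out line shows the vulnerable pattern, and the lines after it show one common fix.

```python
import os
from flask import Flask, abort, request, send_file

app = Flask(__name__)
UPLOAD_DIR = "/srv/app/uploads"

@app.route("/files")
def get_file():
    name = request.args.get("name", "")

    # Vulnerable: a request for "../../etc/passwd" resolves outside UPLOAD_DIR.
    # path = os.path.join(UPLOAD_DIR, name)

    # Hardened: resolve ".." and symlinks, then confirm the result is still
    # inside the upload directory before serving it.
    path = os.path.realpath(os.path.join(UPLOAD_DIR, name))
    if not path.startswith(os.path.realpath(UPLOAD_DIR) + os.sep):
        abort(403)
    return send_file(path)
```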
Why It Matters
Open-source AI/ML tools often ship with vulnerabilities that can lead to complete system takeover. Unlike traditional software, AI systems introduce new attack surfaces:
- Model file deserialization (pickle, safetensors, ONNX; see the loading sketch below)
- Inference manipulation and backdoors
- Training data poisoning
- Plugin/extension sandboxing failures
Huntr fills a critical gap by providing structured disclosure and incentives for the AI security research community.
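On the deserialization point above, here is a minimal sketch (assuming PyTorch and the safetensors package are installed) of the difference between the two load paths: safetensors files are a JSON header plus raw tensor bytes with no code-execution path, while pickle-backed checkpoints need to be restricted at load time.

```python
import torch
from safetensors.torch import load_file, save_file

weights = {"linear.weight": torch.randn(4, 4)}

# safetensors: raw tensor bytes behind a JSON header, so loading never
# executes embedded code.
save_file(weights, "model.safetensors")
restored = load_file("model.safetensors")

# torch.save uses pickle under the hood; loading an untrusted .pt file can
# run attacker-controlled code. weights_only=True limits the unpickler to
# plain tensors and primitive containers.
torch.save(weights, "model.pt")
safe = torch.load("model.pt", weights_only=True)
```

Note that a safe file format does not rule out the second attack class above: a backdoor baked into the weights themselves survives any loader.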
Getting Started
- Create an account at huntr.com
- Review the participation guidelines
- Check the bounties page for in-scope targets
- Submit findings through their secure form (no email submissions)