Analyze a prompt for threats.
Request:
```json
{
  "prompt": "string",
  "context": ["string"],            // optional
  "engines": ["injection", "pii"]   // optional, defaults to all
}
```
Response:
```json
{
  "is_safe": true,
  "risk_score": 0.15,
  "threats": [],
  "blocked": false,
  "engines": [
    {
      "name": "injection",
      "is_safe": true,
      "score": 0.1
    },
    {
      "name": "pii",
      "is_safe": true,
      "score": 0.2
    }
  ],
  "processing_time_ms": 45
}
```
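A minimal client-side sketch of working with this schema. The field names follow the request and response bodies above; the endpoint URL is not shown in this section, so the example only builds the request payload and summarizes a response. Treating the highest per-engine score as the "worst" engine is an assumption for illustration, not documented behavior.

```python
def build_analyze_request(prompt, context=None, engines=None):
    """Assemble the JSON body for an analysis request.

    Optional fields are omitted entirely when not provided;
    omitting "engines" runs all engines (per the schema above).
    """
    body = {"prompt": prompt}
    if context is not None:
        body["context"] = context
    if engines is not None:
        body["engines"] = engines
    return body


def summarize_response(resp):
    """Return (is_safe, worst_engine_name, worst_score) from a response.

    "Worst" here means the engine with the highest score -- an
    assumption about score semantics, not part of the spec.
    """
    worst = max(resp["engines"], key=lambda e: e["score"])
    return resp["is_safe"], worst["name"], worst["score"]


sample = {
    "is_safe": True,
    "risk_score": 0.15,
    "threats": [],
    "blocked": False,
    "engines": [
        {"name": "injection", "is_safe": True, "score": 0.1},
        {"name": "pii", "is_safe": True, "score": 0.2},
    ],
    "processing_time_ms": 45,
}
```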
Health check endpoint.
Response:
```json
{
  "status": "healthy",
  "version": "1.0.0",
  "engines_loaded": 59
}
```
List available engines.
Response:
```json
{
  "engines": [
    {
      "name": "injection",
      "enabled": true,
      "description": "Prompt injection detection"
    },
    {
      "name": "pii",
      "enabled": true,
      "description": "PII/secrets detection"
    }
  ]
}
```
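A small helper for consuming the listing above, e.g. to decide which engines to request by name. The shape matches the response shown; the function itself is an illustrative sketch, not part of the API.

```python
def enabled_engine_names(listing):
    """Extract the names of enabled engines from a listing response."""
    return [e["name"] for e in listing["engines"] if e["enabled"]]
```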
| Code | Description |
|---|---|
| 200 | Success |
| 400 | Invalid request |
| 422 | Validation error |
| 500 | Internal error |
| 503 | Service unavailable |
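A sketch of how a caller might classify the status codes in the table. Treating 500 and 503 as retryable and 400/422 as caller bugs is a common convention and an assumption here, not a documented retry policy.

```python
# Transient server-side conditions: retrying (with backoff) may succeed.
RETRYABLE = {500, 503}
# The request itself is malformed or fails validation: fix before resending.
CLIENT_ERRORS = {400, 422}


def should_retry(status_code):
    """True if the status code indicates a transient failure."""
    return status_code in RETRYABLE
```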
| Plan | Requests/min |
|---|---|
| Community | 60 |
| Enterprise | Unlimited |
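To stay under the Community plan's 60 requests/min, a client can throttle itself before each call. This fixed-window limiter is a minimal sketch; the server's actual accounting window is not specified here, so this is a conservative client-side assumption.

```python
import time


class MinuteRateLimiter:
    """Client-side throttle: at most `limit` calls per 60-second window."""

    def __init__(self, limit=60):
        self.limit = limit
        self.window_start = time.monotonic()
        self.count = 0

    def acquire(self):
        """Block until a request slot is available, then consume it."""
        now = time.monotonic()
        if now - self.window_start >= 60:
            # New window: reset the counter.
            self.window_start, self.count = now, 0
        if self.count >= self.limit:
            # Window exhausted: sleep until it rolls over.
            time.sleep(60 - (now - self.window_start))
            self.window_start, self.count = time.monotonic(), 0
        self.count += 1
```

Call `limiter.acquire()` immediately before each API request; Enterprise callers can skip the limiter entirely.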
Get coverage summary for all compliance frameworks.
Response:
```json
{
  "frameworks": {
    "owasp_llm_top_10": {"covered": 10, "total": 10, "percent": 100},
    "owasp_agentic_ai": {"covered": 10, "total": 10, "percent": 100},
    "eu_ai_act": {"covered": 7, "total": 10, "percent": 70},
    "nist_ai_rmf": {"covered": 8, "total": 10, "percent": 80}
  }
}
```
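The `percent` field is consistent with `covered / total` in every framework shown (e.g. 7/10 → 70 for `eu_ai_act`). A client can recompute it as a sanity check; rounding to the nearest integer is an assumption inferred from the sample values.

```python
def coverage_percent(framework):
    """Recompute percent coverage from covered/total counts."""
    return round(100 * framework["covered"] / framework["total"])
```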
Generate compliance report.
Request:
```json
{
  "frameworks": ["owasp_llm", "eu_ai_act"],
  "format": "pdf",
  "date_range": {"from": "2026-01-01", "to": "2026-01-31"}
}
```
Create custom security requirement set.
Get requirements by ID.
Check text against requirement set.
Analyze architecture documents for AI security risks.
Request:
```json
{
  "content": "## Architecture\nOur system uses RAG with external documents...",
  "format": "markdown"
}
```
Response:
```json
{
  "risks": [
    {
      "category": "rag_poisoning",
      "severity": "high",
      "owasp": "LLM03",
      "description": "External documents may contain hidden instructions"
    }
  ]
}
```
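A sketch of triaging the returned risks by severity. Only `"high"` appears in the sample above; the full severity scale (`low`/`medium`/`high`/`critical`) and its ordering are assumptions for illustration.

```python
# Assumed severity scale -- only "high" is confirmed by the sample response.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}


def risks_at_or_above(risks, threshold="high"):
    """Filter risk entries whose severity meets or exceeds the threshold."""
    floor = SEVERITY_ORDER[threshold]
    return [r for r in risks if SEVERITY_ORDER[r["severity"]] >= floor]
```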