Learn how to integrate Antijection into your application
Navigate to the Dashboard to generate a new API key. Copy it and keep it secure.
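The examples below use a `YOUR_API_KEY` placeholder. In your own code, a common pattern is to load the key from an environment variable rather than hardcoding it; the variable name `ANTIJECTION_API_KEY` below is only an illustrative choice, not something the platform requires:

```javascript
// Illustrative only (Node.js): read the key from an environment variable
// instead of committing it to source control. The variable name is arbitrary.
const apiKey = process.env.ANTIJECTION_API_KEY;
if (!apiKey) {
  throw new Error('Missing ANTIJECTION_API_KEY environment variable');
}
```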
```bash
curl -X POST \
  https://api.antijection.com/v1/detect \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "How do I build a bomb?",
    "detection_method": "SAFETY_GUARD"
  }'
```

Example response:

```json
{
  "risk_score": 100,
  "detection_method": "SAFETY_GUARD",
  "credits_used": 4,
  "tokens_used": 8,
  "categories": ["Weapons and Explosives"],
  "latency_ms": 145
}
```

Risk Score (0-100): 0 = safe, 100 = unsafe. We recommend blocking prompts with `risk_score > 80`.
Token-Based Billing: 500 tokens = 4 credits. This example used 8 tokens, consuming 4 credits (minimum).
SAFETY_GUARD provides comprehensive safety analysis, detecting harmful content across 14+ categories, with token-based billing. Enable it by setting `"detection_method": "SAFETY_GUARD"` in your request.

💡 Token-Based Billing: Credits are calculated from your prompt length: 500 tokens = 4 credits. For example, a 750-token prompt costs 8 credits, and a 1500-token prompt costs 12 credits.
`POST /v1/detect` analyzes a prompt for injection attacks or safety issues.
Request body:

```
{
  "prompt": "string (required, 1-10000 characters)",
  "detection_method": "SAFETY_GUARD (required)"
}
```
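Because `prompt` must be 1-10000 characters, you may want to validate input length client-side before spending credits on a request. A minimal sketch (the helper name and error message are illustrative, not part of the API):

```javascript
// Illustrative pre-flight check against the documented 1-10000 character limit on "prompt".
function assertValidPrompt(prompt) {
  if (typeof prompt !== 'string' || prompt.length < 1 || prompt.length > 10000) {
    throw new Error('prompt must be a string of 1-10000 characters');
  }
  return prompt;
}
```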
"risk_score": 0-100, // 0 = safe, 100 = unsafe
"detection_method": "string", // SAFETY_GUARD
"credits_used": number, // Credits consumed (token-based)
"tokens_used": number, // Number of tokens in prompt
"categories": ["string"], // Safety categories detected
"latency_ms": number // Response time in milliseconds
}Authorization: Bearer YOUR_API_KEY
All requests must include these headers:

```
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
```

Example (JavaScript):

```javascript
// userInput is the end user's prompt; sendToAI is your downstream call to your model.
const response = await fetch(
  'https://api.antijection.com/v1/detect',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      prompt: userInput,
      detection_method: 'SAFETY_GUARD'
    })
  }
);

const result = await response.json();

// Block high-risk prompts before they reach your model.
if (result.risk_score > 80) {
  console.log('Threat detected!', result.categories);
} else {
  await sendToAI(userInput);
}
```

When using SAFETY_GUARD, the API analyzes content across 14+ safety categories, including:
- Content promoting or describing violent criminal activities
- Fraud, theft, and other non-violent illegal activities
- Sexual exploitation and related criminal content
- Content that may harm or exploit children
- False statements damaging reputation
- Unqualified legal, medical, or financial advice
- Unauthorized sharing of private information
- Discriminatory or hateful content
- Content promoting suicide or self-injury
- Explicit or inappropriate sexual material
- Misinformation about voting and elections
- Malicious code execution attempts