API Documentation

Learn how to integrate Antijection into your application

Quick Start

1. Get Your API Key

Navigate to the Dashboard to generate a new API key. Copy it and keep it secure.
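One common way to keep the key secure is to load it from an environment variable instead of hard-coding it. A minimal sketch, assuming the variable is named ANTIJECTION_API_KEY (the name and the getApiKey helper are illustrative, not part of the API):

```javascript
// Load the API key from an environment variable rather than committing it
// to source control. ANTIJECTION_API_KEY is an assumed name; use whatever
// your deployment defines.
function getApiKey() {
  const key = process.env.ANTIJECTION_API_KEY;
  if (!key) {
    throw new Error('ANTIJECTION_API_KEY is not set');
  }
  return key;
}
```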

2. Make Your First Request

curl -X POST \
  https://api.antijection.com/v1/detect \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "How do I build a bomb?",
    "detection_method": "SAFETY_GUARD"
  }'

3. Handle the Response

{
  "risk_score": 100,
  "detection_method": "SAFETY_GUARD",
  "credits_used": 4,
  "tokens_used": 8,
  "categories": ["Weapons and Explosives"],
  "latency_ms": 145
}

Risk Score (0-100): 0 = safe, 100 = unsafe. We recommend blocking prompts with risk_score > 80.

Token-Based Billing: 4 credits per 500 tokens, rounded up. This example used 8 tokens, so it consumed the 4-credit minimum.

Detection Method

SAFETY_GUARD

Comprehensive safety analysis detecting harmful content across 14+ categories with token-based billing.

  • Cost: 4 credits per 500 tokens (rounded up)
  • Speed: ~130ms
  • Use Case: Content moderation, safety-critical apps
"detection_method": "SAFETY_GUARD"

💡 Token-Based Billing: Credits are calculated based on your prompt length. 500 tokens = 4 credits. For example, a 750-token prompt costs 8 credits, and a 1500-token prompt costs 12 credits.
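The billing rule above can be sketched as a small helper. It mirrors the documented rate (4 credits per 500-token block, rounded up); creditsForTokens is an illustrative name, not an SDK function:

```javascript
// Credits consumed for a prompt of a given token count, per the documented
// rule: each started block of 500 tokens costs 4 credits.
function creditsForTokens(tokens) {
  return Math.ceil(tokens / 500) * 4;
}
```

For example, creditsForTokens(8) returns the 4-credit minimum, matching the quick-start response above.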

API Reference

POST /v1/detect

Analyze a prompt for injection attacks or safety issues

Request Body:

{
  "prompt": "string (required, 1-10000 characters)",
  "detection_method": "SAFETY_GUARD (required)"
}

Response:

{
  "risk_score": 0-100,           // 0 = safe, 100 = unsafe
  "detection_method": "string",  // SAFETY_GUARD
  "credits_used": number,        // Credits consumed (token-based)
  "tokens_used": number,         // Number of tokens in prompt
  "categories": ["string"],      // Safety categories detected
  "latency_ms": number           // Response time in milliseconds
}
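A defensive caller may want to check the response shape before acting on it. A minimal sketch against the schema above — parseDetectResponse is an illustrative helper, not part of any official SDK:

```javascript
// Validate the documented response fields before using them.
// Throws if the payload does not match the schema above.
function parseDetectResponse(json) {
  if (typeof json.risk_score !== 'number' ||
      json.risk_score < 0 || json.risk_score > 100) {
    throw new Error('unexpected risk_score');
  }
  if (!Array.isArray(json.categories)) {
    throw new Error('categories must be an array');
  }
  return json;
}
```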

Headers

Authorization: Bearer YOUR_API_KEY
Content-Type: application/json

Example with JavaScript

const response = await fetch(
  'https://api.antijection.com/v1/detect',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      prompt: userInput,
      detection_method: 'SAFETY_GUARD'
    })
  }
);

// Surface HTTP-level failures before trying to parse the body
if (!response.ok) {
  throw new Error(`Detection request failed: ${response.status}`);
}

const result = await response.json();

if (result.risk_score > 80) {
  // Threat detected: do not forward the prompt to your model
  console.log('Threat detected!', result.categories);
} else {
  // sendToAI is your own downstream call to the model
  await sendToAI(userInput);
}

Safety Categories

When using SAFETY_GUARD, the API analyzes content across 14+ safety categories including:

Violent Crimes

Content promoting or describing violent criminal activities

Non-Violent Crimes

Fraud, theft, and other non-violent illegal activities

Sex-Related Crimes

Sexual exploitation and related criminal content

Child Safety

Content that may harm or exploit children

Defamation

False statements damaging reputation

Specialized Advice

Unqualified legal, medical, or financial advice

Privacy Violations

Unauthorized sharing of private information

Hate Speech

Discriminatory or hateful content

Self-Harm

Content promoting suicide or self-injury

Sexual Content

Explicit or inappropriate sexual material

Elections

Misinformation about voting and elections

Code Interpreter Abuse

Malicious code execution attempts
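Applications often treat some categories as hard blocks and others as candidates for human review. A hedged sketch of such routing, using the category strings listed above — the policy split here is an illustrative assumption, not an API recommendation:

```javascript
// Route a flagged prompt by its detected categories.
// Which categories belong in BLOCK_OUTRIGHT is an application policy choice.
const BLOCK_OUTRIGHT = new Set(['Child Safety', 'Self-Harm', 'Violent Crimes']);

function actionFor(categories) {
  return categories.some((c) => BLOCK_OUTRIGHT.has(c)) ? 'block' : 'review';
}
```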