API v1.0 — 40+ endpoints

Plug-and-Play AI
Content Moderation

Add content moderation to any AI application in minutes. Real-time scoring, policy-based decisions, human review queues, and full compliance audit trails.

curl -X POST https://api.civitas-ai.com/api/v1/moderate \
  -H "X-API-Key: $CIVITAS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello world", "source": "comment"}'

Real-Time Scoring

9 content categories scored 0–1 with sub-100ms latency. Toxicity, hate, harassment, PII, spam, and more.

Policy Engine

Define custom policies with per-category thresholds and actions. Allow, warn, block, or escalate to human review.
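As an illustration of how per-category thresholds can map to actions, here is a minimal Python sketch. The threshold values and policy shape are assumptions for the example, not the actual policy schema, and escalation/review routing is omitted:

```python
# Hypothetical policy: per-category thresholds mapped to actions.
# The real policy schema may differ; this only illustrates the idea.
POLICY = {
    "toxicity": {"warn": 0.5, "block": 0.8},
    "hate": {"warn": 0.3, "block": 0.6},
}

def evaluate(scores: dict) -> str:
    """Return the most severe action triggered by any category threshold."""
    severity = {"allow": 0, "warn": 1, "block": 2}
    action = "allow"
    for category, thresholds in POLICY.items():
        score = scores.get(category, 0.0)
        for candidate in ("block", "warn"):
            if score >= thresholds[candidate] and severity[candidate] > severity[action]:
                action = candidate
    return action
```

The key design point is that the most restrictive triggered action wins, so a single high-scoring category is enough to block.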

Compliance Ready

Full audit trail, GDPR data erasure, evidence export, and compliance reporting built in. Generate reports on demand.

Quick Start

Get moderation results in three steps.

Step 1: Get your API key

Request a key from your Civitas AI dashboard or contact support@civitas-ai.com. Store it securely as an environment variable:

export CIVITAS_API_KEY="civ_live_abc123..."

Step 2: Send your first request

curl -X POST https://api.civitas-ai.com/api/v1/moderate \
  -H "X-API-Key: $CIVITAS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Text to moderate",
    "source": "comment",
    "context_metadata": {
      "user_id": "usr_123",
      "channel": "public-chat"
    }
  }'
Python

import os

import requests

response = requests.post(
    "https://api.civitas-ai.com/api/v1/moderate",
    headers={
        "X-API-Key": os.environ["CIVITAS_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "content": "Text to moderate",
        "source": "comment",
        "context_metadata": {
            "user_id": "usr_123",
            "channel": "public-chat",
        },
    },
)

result = response.json()
print(f"Action: {result['action']}")
print(f"Confidence: {result['confidence']}")
print(f"Scores: {result['category_scores']}")
JavaScript

const response = await fetch(
  "https://api.civitas-ai.com/api/v1/moderate",
  {
    method: "POST",
    headers: {
      "X-API-Key": process.env.CIVITAS_API_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      content: "Text to moderate",
      source: "comment",
      context_metadata: {
        user_id: "usr_123",
        channel: "public-chat",
      },
    }),
  }
);

const result = await response.json();
console.log(`Action: ${result.action}`);
console.log(`Confidence: ${result.confidence}`);
Go

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

func main() {
    payload, _ := json.Marshal(map[string]interface{}{
        "content": "Text to moderate",
        "source":  "comment",
        "context_metadata": map[string]string{
            "user_id": "usr_123",
            "channel": "public-chat",
        },
    })

    req, _ := http.NewRequest("POST",
        "https://api.civitas-ai.com/api/v1/moderate",
        bytes.NewBuffer(payload))
    req.Header.Set("X-API-Key", os.Getenv("CIVITAS_API_KEY"))
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var result map[string]interface{}
    json.NewDecoder(resp.Body).Decode(&result)
    fmt.Printf("Action: %s\n", result["action"])
}

Step 3: Handle the response

The API returns a decision with category scores, an action, and an explanation:

{
  "decision_id": "dec_a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "submission_id": "sub_9f8e7d6c",
  "action": "allow",
  "confidence": 0.97,
  "category_scores": {
    "toxicity": 0.02,
    "hate": 0.01,
    "harassment": 0.01,
    "sexual_content": 0.00,
    "violence": 0.00,
    "profanity": 0.03,
    "self_harm": 0.00,
    "spam": 0.05,
    "pii": 0.00
  },
  "explanation": "Content is benign with no policy violations detected.",
  "policy_applied": "default-v1",
  "policy_version": "1.0.0",
  "requires_review": false,
  "detected_language": "en",
  "timestamp": "2025-01-15T10:30:00Z"
}

Key fields:

  • action — allow, warn, block, or escalate
  • category_scores — 0–1 probability per category
  • confidence — overall decision confidence
  • requires_review — whether human review is needed
  • decision_id — unique ID for audit trail lookups
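A minimal sketch of how an application might branch on these fields; the outcome names here are illustrative, not part of the API:

```python
def handle_decision(result: dict) -> str:
    """Map a moderation decision to an app-side outcome (illustrative sketch)."""
    action = result["action"]
    if action == "allow":
        return "published"
    if action == "warn":
        # Publish, but attach a warning label for moderators
        return "published_with_warning"
    if action in ("block", "escalate"):
        # Blocked content is hidden; escalations also land in the review queue
        return "hidden"
    raise ValueError(f"unknown action: {action}")
```

Treating unknown actions as an error (rather than silently allowing) is a safe default if new action types are ever added.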

API Reference

Full OpenAPI 3.0.3 specification rendered with Redoc. Search endpoints, view schemas, and try examples.


Authentication

All API requests (except /health and /metrics) require an API key.

Using Your API Key

Pass your key in the X-API-Key header on every request:

curl -H "X-API-Key: civ_live_abc123..." \
  https://api.civitas-ai.com/api/v1/policies

Key Prefixes

Prefix      Environment
civ_live_   Production
civ_test_   Staging / Test
civ_dev_    Development
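A small guard based on these prefixes can catch a mismatched key (for example, a test key deployed to production) before any request is sent. A sketch, with environment labels chosen for the example:

```python
# Map each documented key prefix to an environment label (labels are illustrative).
KEY_PREFIXES = {
    "civ_live_": "production",
    "civ_test_": "staging",
    "civ_dev_": "development",
}

def key_environment(api_key: str) -> str:
    """Return the environment a key belongs to, or raise if malformed."""
    for prefix, env in KEY_PREFIXES.items():
        if api_key.startswith(prefix):
            return env
    raise ValueError("unrecognized key prefix")
```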

Key Rotation

Rotate keys without downtime using the rotate endpoint. The old key is revoked after the new one is issued.

curl -X POST \
  https://api.civitas-ai.com/api/v1/api-keys/usr_123/rotate \
  -H "X-API-Key: $CIVITAS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "production-key-v2"}'

Security Best Practices

  • Store keys in environment variables or a secrets manager, never in code
  • Use separate keys for production, staging, and development
  • Rotate keys every 90 days or immediately if compromised
  • Monitor last_used_at in the key listing to detect unused or leaked keys
  • Never send keys over unencrypted channels — HTTPS is enforced on all endpoints
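For the last_used_at check, a sketch like the following could flag idle keys given a key-listing payload. The payload shape (a list of objects with `name` and `last_used_at` fields) is an assumption here; consult the key-listing endpoint's schema:

```python
from datetime import datetime, timedelta, timezone

def stale_keys(keys: list, max_idle_days: int = 30) -> list:
    """Return names of keys unused for longer than max_idle_days.

    Each item is assumed to carry a `last_used_at` ISO-8601 timestamp,
    or None if the key has never been used.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    flagged = []
    for key in keys:
        used = key.get("last_used_at")
        if used is None or datetime.fromisoformat(used.replace("Z", "+00:00")) < cutoff:
            flagged.append(key["name"])
    return flagged
```

Running this periodically and revoking whatever it flags keeps the set of live keys small.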

Webhooks

Receive real-time notifications when moderation events occur.

Register a Webhook

curl -X POST https://api.civitas-ai.com/api/v1/webhooks \
  -H "X-API-Key: $CIVITAS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://your-app.com/webhooks/civitas",
    "event_types": [
      "moderation.completed",
      "review.required",
      "review.completed"
    ],
    "description": "Production moderation events"
  }'

Event Types

Event                  Triggered When
moderation.completed   A moderation decision is finalized (sync or async)
review.required        Content is escalated to the human review queue
review.completed       A human reviewer submits a decision
policy.updated         A moderation policy is created or modified
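A webhook endpoint typically dispatches on the event type. A sketch (handler names are illustrative; your app supplies the real functions):

```python
def route_event(event: dict) -> str:
    """Return which app-side handler should process a webhook payload."""
    routes = {
        "moderation.completed": "record_decision",
        "review.required": "notify_moderators",
        "review.completed": "apply_review_outcome",
        "policy.updated": "refresh_policy_cache",
    }
    # Unknown event types should still be acknowledged with a 200 so the
    # delivery is not retried indefinitely when new types are added.
    return routes.get(event.get("event"), "acknowledge_and_ignore")
```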

Webhook Payload

{
  "event": "moderation.completed",
  "timestamp": "2025-01-15T10:30:00Z",
  "data": {
    "decision_id": "dec_a1b2c3d4",
    "action": "block",
    "confidence": 0.95,
    "category_scores": { "toxicity": 0.92, "hate": 0.85 },
    "policy_applied": "strict-v2"
  }
}

Signature Verification (HMAC-SHA256)

Every webhook request includes an X-Civitas-Signature header. Verify it with your webhook secret to ensure authenticity.

Python

import hmac
import hashlib

def verify_webhook(payload: bytes, signature: str, secret: str) -> bool:
    """Verify Civitas webhook signature using constant-time comparison."""
    expected = hmac.new(
        secret.encode("utf-8"),
        payload,
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)

# In your webhook handler:
# signature = request.headers["X-Civitas-Signature"]
# is_valid = verify_webhook(request.body, signature, WEBHOOK_SECRET)
JavaScript

import { createHmac, timingSafeEqual } from "crypto";

function verifyWebhook(payload, signature, secret) {
  const expected = `sha256=${createHmac("sha256", secret)
    .update(payload)
    .digest("hex")}`;
  const sigBuf = Buffer.from(signature);
  const expBuf = Buffer.from(expected);
  // timingSafeEqual throws if buffer lengths differ, so check length first
  return sigBuf.length === expBuf.length && timingSafeEqual(sigBuf, expBuf);
}

// In your Express handler:
// const sig = req.headers["x-civitas-signature"];
// const valid = verifyWebhook(req.rawBody, sig, WEBHOOK_SECRET);
Go

package webhook

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

func VerifySignature(payload []byte, signature, secret string) bool {
    mac := hmac.New(sha256.New, []byte(secret))
    mac.Write(payload)
    expected := fmt.Sprintf("sha256=%s", hex.EncodeToString(mac.Sum(nil)))
    return hmac.Equal([]byte(expected), []byte(signature))
}

// In your HTTP handler:
// sig := r.Header.Get("X-Civitas-Signature")
// valid := VerifySignature(body, sig, webhookSecret)

Rate Limiting

Rate limits protect API stability. Every response includes headers with your current usage.

Response Headers

Header                  Description                        Example
X-RateLimit-Limit       Max requests per window            1000
X-RateLimit-Remaining   Requests remaining                 847
X-RateLimit-Reset       Unix timestamp when window resets  1705312200

When You Hit the Limit

You'll receive a 429 Too Many Requests response. Use the X-RateLimit-Reset header to determine when to retry.

{
  "error": "rate_limit_exceeded",
  "message": "Rate limit exceeded. Try again in 32 seconds.",
  "details": {
    "limit": 1000,
    "remaining": 0,
    "reset_at": 1705312200
  },
  "request_id": "req_f8d9e0a1"
}

Exponential Backoff Strategy

Python

import os
import random
import time

import requests

def moderate_with_retry(content, max_retries=3):
    url = "https://api.civitas-ai.com/api/v1/moderate"
    headers = {"X-API-Key": os.environ["CIVITAS_API_KEY"]}

    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json={"content": content})

        if resp.status_code == 429:
            reset = int(resp.headers.get("X-RateLimit-Reset", 0))
            wait = max(reset - int(time.time()), 1)
            # Add jitter to avoid thundering herd
            wait = min(wait, 2 ** attempt + random.random())
            time.sleep(wait)
            continue

        resp.raise_for_status()
        return resp.json()

    raise Exception("Max retries exceeded")
JavaScript

async function moderateWithRetry(content, maxRetries = 3) {
  const url = "https://api.civitas-ai.com/api/v1/moderate";

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const resp = await fetch(url, {
      method: "POST",
      headers: {
        "X-API-Key": process.env.CIVITAS_API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ content }),
    });

    if (resp.status === 429) {
      const reset = parseInt(resp.headers.get("X-RateLimit-Reset") || "0");
      const wait = Math.max(reset - Math.floor(Date.now() / 1000), 1);
      const jitter = Math.min(wait, 2 ** attempt + Math.random());
      await new Promise((r) => setTimeout(r, jitter * 1000));
      continue;
    }

    if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
    return await resp.json();
  }
  throw new Error("Max retries exceeded");
}
Go

func ModerateWithRetry(content string, maxRetries int) (map[string]interface{}, error) {
    url := "https://api.civitas-ai.com/api/v1/moderate"

    for attempt := 0; attempt < maxRetries; attempt++ {
        payload, _ := json.Marshal(map[string]string{"content": content})
        req, _ := http.NewRequest("POST", url, bytes.NewBuffer(payload))
        req.Header.Set("X-API-Key", os.Getenv("CIVITAS_API_KEY"))
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return nil, err
        }

        if resp.StatusCode == 429 {
            resetStr := resp.Header.Get("X-RateLimit-Reset")
            reset, _ := strconv.ParseInt(resetStr, 10, 64)
            // Seconds until the window resets, floored at 1
            waitSec := math.Max(float64(reset-time.Now().Unix()), 1)
            // Cap with exponential backoff plus jitter to avoid thundering herd
            sleepSec := math.Min(waitSec, math.Pow(2, float64(attempt))+rand.Float64())
            resp.Body.Close()
            time.Sleep(time.Duration(sleepSec * float64(time.Second)))
            continue
        }

        defer resp.Body.Close()
        var result map[string]interface{}
        json.NewDecoder(resp.Body).Decode(&result)
        return result, nil
    }
    return nil, fmt.Errorf("max retries exceeded")
}

Integration Patterns

Common architectures for adding Civitas moderation to your stack.

1. AI Guardrails (Pre/Post LLM Moderation)

Moderate both user input before sending to your LLM and the LLM output before returning to the user.

User Input → Civitas Pre-check → Your LLM → Civitas Post-check → User
# MODERATE_URL and HEADERS are assumed to be defined elsewhere in your app
async def ai_guardrail(user_message: str) -> str:
    # Pre-check: moderate user input
    pre = requests.post(MODERATE_URL, headers=HEADERS, json={
        "content": user_message, "source": "user-input"
    }).json()
    if pre["action"] == "block":
        return "Your message was flagged. Please rephrase."

    # Send to LLM
    llm_response = await call_your_llm(user_message)

    # Post-check: moderate LLM output
    post = requests.post(MODERATE_URL, headers=HEADERS, json={
        "content": llm_response, "source": "llm-output"
    }).json()
    if post["action"] == "block":
        return "I can't provide that response. Let me try again."

    return llm_response

2. Real-Time Chat Moderation

For latency-sensitive chat, use async moderation with webhook callbacks. Display messages optimistically and take action if flagged.

Chat Message → Your Server (display immediately) → Civitas Async (POST /moderate/async) → Webhook Callback
// Submit for async moderation
const { request_id } = await fetch(
  "https://api.civitas-ai.com/api/v1/moderate/async",
  {
    method: "POST",
    headers: { "X-API-Key": API_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({
      content: message.text,
      source: "chat",
      callback_url: "https://your-app.com/webhooks/civitas",
      context_metadata: { message_id: message.id, room_id: room.id },
    }),
  }
).then((r) => r.json());

// In your webhook handler, take action on flagged messages:
app.post("/webhooks/civitas", (req, res) => {
  if (!verifyWebhook(req.rawBody, req.headers["x-civitas-signature"], SECRET)) {
    return res.sendStatus(401); // reject requests with an invalid signature
  }
  const { data } = req.body;
  if (data.action === "block") {
    removeMessage(data.context_metadata.message_id);
  }
  res.sendStatus(200);
});

3. Batch Processing

Moderate up to 100 items per request. Ideal for backfill, imports, or periodic scanning of existing content.

import os

import requests

items = [
    {"content": text, "item_id": f"item_{i}", "source": "backfill"}
    for i, text in enumerate(content_to_scan)
]

# Process in batches of 100
for batch_start in range(0, len(items), 100):
    batch = items[batch_start:batch_start + 100]
    resp = requests.post(
        "https://api.civitas-ai.com/api/v1/moderate/batch",
        headers={"X-API-Key": os.environ["CIVITAS_API_KEY"]},
        json={"items": batch},
    ).json()

    print(f"Batch summary: {resp['summary']}")
    for result in resp["results"]:
        if result["action"] in ("block", "escalate"):
            flag_content(result["item_id"], result["action"])

4. Compliance Reporting

Generate audit-ready reports and export evidence records for regulators or internal compliance teams.

# Generate a compliance report
curl -X POST https://api.civitas-ai.com/api/v1/reports/generate \
  -H "X-API-Key: $CIVITAS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "report_type": "compliance",
    "start_date": "2025-01-01T00:00:00Z",
    "end_date": "2025-01-31T23:59:59Z",
    "include_evidence": true,
    "format": "pdf"
  }'

# Export evidence records as CSV
curl "https://api.civitas-ai.com/api/v1/evidence/export?\
format=csv&from_date=2025-01-01T00:00:00Z&to_date=2025-01-31T23:59:59Z" \
  -H "X-API-Key: $CIVITAS_API_KEY" \
  -o evidence_jan_2025.csv

Error Reference

All errors follow a consistent envelope format with a machine-readable code and a human-readable message.

Error Envelope

{
  "error": "validation_error",
  "message": "The 'content' field is required and must be a non-empty string.",
  "details": {
    "field": "content",
    "constraint": "required"
  },
  "request_id": "req_f8d9e0a1"
}
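Because every error carries the same envelope, a client can turn any non-2xx response into one typed exception. A minimal sketch (the exception class is illustrative, not part of an official SDK):

```python
class CivitasAPIError(Exception):
    """Raised for any non-2xx response carrying the standard error envelope."""

    def __init__(self, status: int, envelope: dict):
        self.status = status
        self.code = envelope.get("error", "unknown_error")
        # Keep request_id handy: support needs it to trace the failure
        self.request_id = envelope.get("request_id")
        super().__init__(f"{status} {self.code}: {envelope.get('message', '')}")

def raise_for_envelope(status: int, body: dict) -> None:
    """Turn an error envelope into a typed exception; no-op on success."""
    if status < 400:
        return
    raise CivitasAPIError(status, body)
```

Callers can then catch one exception type and branch on `code` instead of re-parsing JSON at every call site.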

HTTP Status Codes

Code Error Description
400 validation_error Invalid request parameters
401 unauthorized Missing or invalid API key
403 forbidden Insufficient permissions
404 not_found Resource not found
429 rate_limit_exceeded Too many requests
500 internal_error Server error
502 bad_gateway Upstream service unavailable

Common Errors & Solutions

400 "content" field is required

The request body is missing the content field or it's empty. Ensure your JSON body includes {"content": "your text"}.

400 content exceeds maximum length

Content exceeds 100,000 characters. Split long documents and use the batch endpoint, or truncate to the relevant section.
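One way to split a long document before batching is to break on paragraph boundaries and hard-split only oversized paragraphs. A sketch, assuming the documented 100,000-character limit:

```python
def chunk_content(text: str, max_len: int = 100_000) -> list:
    """Split text into chunks under max_len characters,
    preferring paragraph boundaries over mid-text cuts."""
    if len(text) <= max_len:
        return [text]
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        candidate = f"{current}\n\n{paragraph}" if current else paragraph
        if len(candidate) > max_len:
            if current:
                chunks.append(current)
            # A single oversized paragraph is hard-split as a last resort
            while len(paragraph) > max_len:
                chunks.append(paragraph[:max_len])
                paragraph = paragraph[max_len:]
            current = paragraph
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be sent as one item to the batch endpoint.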

401 invalid API key format

The key doesn't match expected format. Keys start with civ_live_, civ_test_, or civ_dev_. Check for extra whitespace or truncation.

429 rate limit exceeded

You've exceeded your plan's request quota. Check X-RateLimit-Remaining before sending requests. Use batch endpoints to reduce request count.

500 internal processing error

An unexpected error occurred. Save the request_id from the response and contact support@civitas-ai.com. Retry after 1–5 seconds with backoff.