Monitor Your Agent

Observability Ingest API

Overview

The Observability Ingest API allows you to send production conversation data from your existing Evalion Agent for automated quality evaluation and monitoring. This endpoint accepts either audio files or pre-transcribed conversations and automatically evaluates them against your configured metrics.

Use Cases

  • Production Monitoring: Automatically evaluate all your production calls in real time
  • Quality Assurance: Track conversation quality metrics across your agent fleet
  • Compliance: Monitor adherence to scripts, policies, and regulations
  • Performance Analytics: Gather data for performance reporting and improvement

How to Use the Observability API

To effectively use the Observability API, follow these steps:

  1. Configure Metrics: Set up which metrics should be evaluated for your agent
  2. Ingest Data: Send conversation data (audio or transcript) via the API
  3. View Results: Access evaluation results in the Monitoring section of the platform

Configuring Metrics for Agents

Before ingesting conversations, you should configure which metrics to evaluate for each agent:

In the Platform

  1. Go to your Project settings
  2. Select the Agent you want to configure
  3. Navigate to the Observability Metrics section
  4. Enable the metrics you want to automatically evaluate for this agent's conversations
  5. Save your configuration

Once configured, all conversations ingested for that agent will automatically be evaluated against the enabled metrics.

[Screenshot: monitoring metrics configuration]

Ingesting Data

To send conversation data to Evalion for evaluation, you need to call the Observability Ingest API endpoint with either:

  • Audio files (MP3, OGG, or WAV format) - Evalion will transcribe and evaluate the conversation
  • Transcript files (OpenAI JSON format) - For faster processing when you already have transcripts

Each API call must include metadata such as project ID, agent ID, timestamp, and customer phone number. The system will process your data asynchronously and make results available in the platform within seconds to minutes.

Viewing Ingested Data

After ingesting conversations through this API, you can view and analyze them in the Evalion platform:

  1. Navigate to the Monitoring section in the left sidebar
  2. Click on the Calls menu item
  3. You'll see all ingested conversations with their evaluation results
  4. Click on individual calls to see detailed metrics, transcripts, and scores

[Screenshot: calls table]

Ingest API

Endpoint

POST /api/v1/observability/ingest

Required permissions: Project MEMBER or higher

Request Format

The endpoint accepts multipart/form-data with the following parameters:

Query Parameters

| Parameter  | Type | Required | Description                                     |
|------------|------|----------|-------------------------------------------------|
| project_id | UUID | Yes      | UUID of the project                             |
| agent_id   | UUID | Yes      | UUID of the agent that handled the conversation |

Form Fields

| Field                    | Type    | Required | Description                                                            |
|--------------------------|---------|----------|------------------------------------------------------------------------|
| timestamp                | Integer | Yes      | Unix timestamp (seconds since epoch) of when the conversation occurred |
| customer_phone           | String  | Yes      | Customer phone number (max 50 characters)                              |
| audio_file               | File    | No*      | Audio file of the conversation (MP3, OGG, or WAV format, max 50 MB)    |
| transcript_file          | File    | No*      | JSON transcript file in OpenAI format (max 5 MB)                       |
| external_conversation_id | String  | No       | Your system's conversation ID for reference                            |
| extra_metadata           | String  | No       | Additional metadata as a JSON string                                   |

*Note: You must provide either audio_file OR transcript_file, but not both.

File Size Restrictions

  • Audio files: Maximum 50 MB
  • Transcript files: Maximum 5 MB

Files exceeding these limits will be rejected with a 413 Request Entity Too Large error.
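Failing fast on the client side avoids uploading a file the API will reject. Below is a minimal pre-flight check mirroring the limits and formats above; `validate_upload` is an illustrative helper, not part of the API or any SDK:

```python
import os

# Limits and formats from the documentation above
MAX_AUDIO_BYTES = 50 * 1024 * 1024       # 50 MB
MAX_TRANSCRIPT_BYTES = 5 * 1024 * 1024   # 5 MB
AUDIO_EXTENSIONS = {".mp3", ".ogg", ".wav"}

def validate_upload(audio_path=None, transcript_path=None):
    """Raise ValueError if the request would be rejected by the API."""
    # Exactly one of audio_path / transcript_path must be given
    if bool(audio_path) == bool(transcript_path):
        raise ValueError("Provide either audio_path or transcript_path, not both.")
    if audio_path:
        ext = os.path.splitext(audio_path)[1].lower()
        if ext not in AUDIO_EXTENSIONS:
            raise ValueError(f"Unsupported audio format: {ext}")
        if os.path.getsize(audio_path) > MAX_AUDIO_BYTES:
            raise ValueError("Audio file exceeds the 50 MB limit.")
    elif os.path.getsize(transcript_path) > MAX_TRANSCRIPT_BYTES:
        raise ValueError("Transcript file exceeds the 5 MB limit.")
```

Run this before building the request so size or format problems surface locally instead of as a 413 from the server.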

Response

Success Response (201 Created)

{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "project_id": "123e4567-e89b-12d3-a456-426614174000",
  "agent_id": "987e6543-e21b-12d3-a456-426614174000",
  "external_conversation_id": "call-12345",
  "timestamp": "2023-10-30T10:30:00Z",
  "customer_phone": "+1234567890",
  "interaction_mode": "voice",
  "call_duration": 180.5,
  "audio_url": "gs://bucket/path/to/audio.mp3",
  "transcript": "... formatted transcript ...",
  "extra_metadata": {
    "call_type": "inbound",
    "duration": 180,
    "metric_ids": ["metric-uuid-1", "metric-uuid-2"]
  },
  "source": "api",
  "agent": {
    "id": "987e6543-e21b-12d3-a456-426614174000",
    "name": "Customer Support Agent"
  },
  "evaluations": [],
  "builtin_evaluations": [],
  "created_at": "2023-10-30T10:32:00Z",
  "updated_at": "2023-10-30T10:32:00Z"
}

Response Fields

| Field                    | Type         | Description                                               |
|--------------------------|--------------|-----------------------------------------------------------|
| id                       | UUID         | Unique identifier for the conversation                    |
| project_id               | UUID         | Project ID                                                |
| agent_id                 | UUID         | Agent ID                                                  |
| external_conversation_id | String       | Your system's conversation ID (if provided)               |
| timestamp                | ISO DateTime | When the conversation occurred                            |
| customer_phone           | String       | Customer phone number                                     |
| interaction_mode         | String       | "voice" or "text"                                         |
| call_duration            | Float        | Duration in seconds (null if not yet calculated)          |
| audio_url                | String       | GCS URL to the audio file (if audio was uploaded)         |
| transcript               | String       | The conversation transcript                               |
| extra_metadata           | Object       | Additional metadata including metric IDs                  |
| source                   | String       | Always "api" for ingested conversations                   |
| agent                    | Object       | Agent information                                         |
| evaluations              | Array        | List of evaluation results (populated asynchronously)     |
| builtin_evaluations      | Array        | List of builtin metric evaluations (populated asynchronously) |
| created_at               | ISO DateTime | When the record was created                               |
| updated_at               | ISO DateTime | When the record was last updated                          |

Evaluation Processing

Asynchronous Processing

When you ingest a conversation, the evaluation process happens asynchronously:

  1. Immediate Response: You receive a 201 response with the conversation record
  2. Background Processing: Evaluations are queued and processed in the background
  3. Results Available: Evaluation results appear in the platform's Monitoring > Calls section within seconds to minutes

Metrics Evaluated

The endpoint evaluates the conversation against:

  1. Agent Metrics: Uses the agent's configured observability metrics
  2. Builtin Metrics: Standard metrics like call duration, conversation flow, etc.

Error Responses

400 Bad Request

{
  "detail": "Must provide either audio_file or transcript_file."
}

Common causes:

  • Missing both audio and transcript files
  • Providing both audio and transcript files
  • Invalid JSON in transcript_file or extra_metadata
  • Invalid transcript format
  • No metrics available for evaluation

401 Unauthorized

{
  "detail": "Not authenticated"
}

Cause: Missing or invalid API token

403 Forbidden

{
  "detail": "Insufficient permissions"
}

Cause: User doesn't have MEMBER or higher permissions on the project

404 Not Found

{
  "detail": "Project {project_id} not found"
}

Common causes:

  • Invalid project_id
  • Invalid agent_id
  • Agent doesn't belong to the specified project

500 Internal Server Error

{
  "detail": "Failed to ingest conversation: {error message}"
}

Common causes:

  • File upload failure
  • Database error
  • Internal processing error

Audio File Ingestion Example

Supported Formats

  • MP3 (.mp3)
  • OGG (.ogg)
  • WAV (.wav)

Example Request (cURL)

curl -X POST "https://api.evalion.ai/api/v1/observability/ingest?project_id=123e4567-e89b-12d3-a456-426614174000&agent_id=987e6543-e21b-12d3-a456-426614174000" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -F "timestamp=1698624000" \
  -F "customer_phone=+1234567890" \
  -F "audio_file=@/path/to/call-recording.mp3" \
  -F "external_conversation_id=call-12345" \
  -F 'extra_metadata={"call_type":"inbound","duration":180}'

Example Request (Python)

import requests
from datetime import datetime

url = "https://api.evalion.ai/api/v1/observability/ingest"

# Your API token
headers = {
    "Authorization": "Bearer YOUR_API_TOKEN"
}

# Query parameters
params = {
    "project_id": "123e4567-e89b-12d3-a456-426614174000",
    "agent_id": "987e6543-e21b-12d3-a456-426614174000"
}

# Form data
data = {
    "timestamp": int(datetime.now().timestamp()),
    "customer_phone": "+1234567890",
    "external_conversation_id": "call-12345",
    "extra_metadata": '{"call_type": "inbound", "duration": 180}'
}

# Open the audio file inside a context manager so the handle is
# closed after the request is sent
with open("call-recording.mp3", "rb") as audio:
    files = {"audio_file": audio}
    response = requests.post(url, headers=headers, params=params, files=files, data=data)

if response.status_code == 201:
    conversation = response.json()
    print(f"Conversation created: {conversation['id']}")
else:
    print(f"Error: {response.status_code} - {response.text}")

Transcript File Ingestion Example

OpenAI Transcript Format

The transcript file should be a JSON array of message objects:

[
  {
    "role": "user",
    "content": "Hello, I would like assistance with my account.",
    "start_time": 0.0,
    "end_time": 3.2
  },
  {
    "role": "assistant",
    "content": "Hello! I'd be happy to help you with your account. Can you please provide your account number?",
    "start_time": 3.5,
    "end_time": 8.1
  },
  {
    "role": "user",
    "content": "Sure, it's 123456789.",
    "start_time": 8.5,
    "end_time": 10.2
  },
  {
    "role": "assistant",
    "content": "Thank you. I've found your account. How can I assist you today?",
    "start_time": 10.5,
    "end_time": 14.0
  }
]

Transcript Field Descriptions

| Field      | Type   | Required | Description                                                 |
|------------|--------|----------|-------------------------------------------------------------|
| role       | String | Yes      | Either "user" (customer) or "assistant" (agent)             |
| content    | String | Yes      | The text content of the message                             |
| start_time | Float  | Yes      | Start time in seconds from the beginning of the conversation |
| end_time   | Float  | Yes      | End time in seconds from the beginning of the conversation   |
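Validating transcripts locally before upload catches format errors without a round trip to the API. A small sketch against the fields above (`validate_transcript` is an illustrative helper, not part of any SDK):

```python
def validate_transcript(messages):
    """Check a transcript against the expected OpenAI-style format.

    Returns a list of problems; an empty list means the transcript is valid.
    """
    if not isinstance(messages, list):
        return ["Transcript must be a JSON array of message objects."]
    problems = []
    for i, msg in enumerate(messages):
        # All four fields are required on every message
        for field in ("role", "content", "start_time", "end_time"):
            if field not in msg:
                problems.append(f"Message {i}: missing field '{field}'")
        if msg.get("role") not in ("user", "assistant"):
            problems.append(f"Message {i}: role must be 'user' or 'assistant'")
        start, end = msg.get("start_time"), msg.get("end_time")
        if isinstance(start, (int, float)) and isinstance(end, (int, float)) and end < start:
            problems.append(f"Message {i}: end_time precedes start_time")
    return problems
```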

Example Request (cURL)

curl -X POST "https://api.evalion.ai/api/v1/observability/ingest?project_id=123e4567-e89b-12d3-a456-426614174000&agent_id=987e6543-e21b-12d3-a456-426614174000" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -F "timestamp=1698624000" \
  -F "customer_phone=+1234567890" \
  -F "transcript_file=@/path/to/transcript.json" \
  -F "external_conversation_id=call-12345"

Example Request (Python)

import requests
import json
from datetime import datetime

url = "https://api.evalion.ai/api/v1/observability/ingest"

headers = {
    "Authorization": "Bearer YOUR_API_TOKEN"
}

params = {
    "project_id": "123e4567-e89b-12d3-a456-426614174000",
    "agent_id": "987e6543-e21b-12d3-a456-426614174000"
}

# Create transcript data
transcript_data = [
    {
        "role": "user",
        "content": "Hello, I need help with my order.",
        "start_time": 0.0,
        "end_time": 2.5
    },
    {
        "role": "assistant",
        "content": "Of course! I'd be happy to help. What's your order number?",
        "start_time": 3.0,
        "end_time": 6.5
    }
]

# Save to a temporary file
with open("transcript.json", "w") as f:
    json.dump(transcript_data, f)

data = {
    "timestamp": int(datetime.now().timestamp()),
    "customer_phone": "+1234567890",
    "external_conversation_id": "call-12345"
}

# Open the transcript inside a context manager so the handle is
# closed after the request is sent
with open("transcript.json", "rb") as transcript:
    files = {"transcript_file": transcript}
    response = requests.post(url, headers=headers, params=params, files=files, data=data)

if response.status_code == 201:
    conversation = response.json()
    print(f"Conversation created: {conversation['id']}")
else:
    print(f"Error: {response.status_code} - {response.text}")

Best Practices

1. Error Handling

Always handle errors gracefully and implement retry logic with exponential backoff:

import time
import requests

def ingest_conversation_with_retry(url, headers, params, files, data, max_retries=3):
    for attempt in range(max_retries):
        # Rewind file handles so a retry re-sends the full file,
        # not whatever remains after a partial previous attempt
        for f in files.values():
            if hasattr(f, "seek"):
                f.seek(0)
        try:
            response = requests.post(url, headers=headers, params=params,
                                     files=files, data=data)
        except requests.exceptions.RequestException:
            # Network failure: retry with exponential backoff
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
                continue
            raise

        if response.status_code == 201:
            return response.json()
        if response.status_code >= 500 and attempt < max_retries - 1:
            # Server error: retry with exponential backoff
            time.sleep(2 ** attempt)
            continue
        # Client error (4xx), or retries exhausted: raise immediately
        response.raise_for_status()

    raise Exception("Max retries exceeded")

2. Batch Processing

If you're ingesting many conversations, process them in batches with rate limiting:

import time

def ingest_conversations_batch(conversations, rate_limit_per_second=10):
    results = []
    for i, conv in enumerate(conversations):
        # ingest_conversation wraps the POST request shown in the
        # examples above (supply your own implementation)
        result = ingest_conversation(conv)
        results.append(result)

        # Pause after each batch of requests to stay under the rate limit
        if (i + 1) % rate_limit_per_second == 0:
            time.sleep(1)

    return results

3. Metadata Usage

Use the extra_metadata field to store additional context:

{
  "call_type": "inbound",
  "department": "sales",
  "campaign_id": "summer-2024",
  "customer_sentiment": "positive",
  "resolution_status": "resolved"
}
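Because `extra_metadata` is sent as a form field, it must be a JSON string rather than a native object. A minimal sketch of building it in Python (the metadata keys are the illustrative ones from the example above):

```python
import json

extra_metadata = {
    "call_type": "inbound",
    "department": "sales",
    "campaign_id": "summer-2024",
}

# The form field expects a JSON *string*, not a dict, so serialize it first
form_data = {
    "timestamp": 1698624000,
    "customer_phone": "+1234567890",
    "extra_metadata": json.dumps(extra_metadata),
}
```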

This metadata can help with:

  • Filtering conversations in the platform
  • Building custom reports
  • Debugging issues
  • Tracking business metrics

4. External Conversation ID

Always provide an external_conversation_id to:

  • Link Evalion data back to your system
  • Avoid duplicate ingestion
  • Simplify troubleshooting
  • Enable cross-system analytics
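One way to avoid duplicate ingestion is to track which external conversation IDs have already been sent. A minimal in-memory sketch (`ingest_once` and `ingest_fn` are illustrative names; production code would persist the ID set in a database):

```python
# In-memory guard; replace with a database table in production
ingested_ids = set()

def ingest_once(call_id, ingest_fn):
    """Call ingest_fn only if this conversation hasn't been sent before."""
    if call_id in ingested_ids:
        return None  # already ingested, skip
    result = ingest_fn(call_id)
    # Record the ID only after a successful ingest
    ingested_ids.add(call_id)
    return result
```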

5. Metric Configuration

For production use:

  • Configure observability metrics at the agent level in the platform
  • Review and update metrics regularly based on your quality goals

Integration Examples

Webhook Integration

If your phone system supports webhooks, you can automatically ingest conversations:

from fastapi import FastAPI
import requests

app = FastAPI()

# Placeholders: supply your own credentials and IDs
EVALION_API_TOKEN = "YOUR_API_TOKEN"
PROJECT_ID = "123e4567-e89b-12d3-a456-426614174000"
AGENT_ID = "987e6543-e21b-12d3-a456-426614174000"

@app.post("/webhook/call-completed")
async def call_completed_webhook(
    call_id: str,
    timestamp: int,
    customer_phone: str,
    audio_url: str
):
    # Download the audio from your system
    audio_response = requests.get(audio_url)
    audio_response.raise_for_status()

    # Ingest to Evalion; include a filename so the audio format is recognizable
    evalion_response = requests.post(
        "https://api.evalion.ai/api/v1/observability/ingest",
        headers={"Authorization": f"Bearer {EVALION_API_TOKEN}"},
        params={
            "project_id": PROJECT_ID,
            "agent_id": AGENT_ID
        },
        files={"audio_file": ("recording.mp3", audio_response.content)},
        data={
            "timestamp": timestamp,
            "customer_phone": customer_phone,
            "external_conversation_id": call_id
        }
    )
    evalion_response.raise_for_status()

    return {"status": "ingested", "evalion_id": evalion_response.json()["id"]}

Scheduled Batch Processing

For systems without real-time webhooks:

import schedule
import time
from datetime import datetime, timedelta

def fetch_and_ingest_recent_calls():
    # Fetch calls from the last hour from your system
    # (your_system, already_ingested, ingest_conversation, and
    # mark_as_ingested are placeholders for your own integration code)
    calls = your_system.get_calls(
        start_time=datetime.now() - timedelta(hours=1),
        end_time=datetime.now()
    )

    for call in calls:
        # Skip conversations that were already sent to Evalion
        if not already_ingested(call.id):
            ingest_conversation(call)
            mark_as_ingested(call.id)

# Run every hour
schedule.every().hour.do(fetch_and_ingest_recent_calls)

while True:
    schedule.run_pending()
    time.sleep(60)

Monitoring and Troubleshooting

Check Ingestion Status

After ingesting, you can retrieve the conversation to check evaluation status:

curl -X GET "https://api.evalion.ai/api/v1/observability/conversations/{conversation_id}" \
  -H "Authorization: Bearer YOUR_API_TOKEN"

Common Issues

| Issue                        | Solution                                                                                                      |
|------------------------------|---------------------------------------------------------------------------------------------------------------|
| Evaluations not appearing    | Wait a few minutes for async processing. Check that metrics are configured for the agent.                     |
| "No metrics available" error | Configure observability metrics for the agent in the platform or provide metrics in the request.              |
| Audio upload fails           | Ensure audio is in a supported format (MP3, OGG, or WAV) and under size limits. Check network connectivity.   |
| Transcript format error      | Validate JSON format and ensure all required fields (role, content, start_time, end_time) are present.        |
| Permission denied            | Verify your API token has MEMBER or higher permissions on the project.                                        |