Webhook API

API reference for passthrough mode webhooks

Overview

In passthrough mode, chans sends transcribed speech to your webhook and expects a text response.

Request Format

```
POST /your-webhook-url
```

chans sends a POST request with the following JSON body:

```json
{
  "type": "transcription_complete",
  "session_id": "room-abc123",
  "user_id": "end-user-123",
  "timestamp": "2024-01-15T10:30:00.000000",
  "data": {
    "transcript": "What's my order status?",
    "context": null
  }
}
```

Fields

| Field | Type | Description |
| --- | --- | --- |
| `type` | string | Always `"transcription_complete"` for passthrough calls |
| `session_id` | string | Unique session/room identifier |
| `user_id` | string \| null | End user identifier (from the client connection) |
| `timestamp` | string | ISO 8601 timestamp |
| `data.transcript` | string | Transcribed user speech |
| `data.context` | string \| null | Optional RAG context (if enabled) |
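
For illustration, the payload above can be parsed into a typed structure before you hand it to your LLM. The field names follow the table; `TranscriptionEvent` and `from_payload` are hypothetical names for this sketch, not part of any chans SDK:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TranscriptionEvent:
    # Mirrors the fields documented in the table above.
    type: str
    session_id: str
    user_id: Optional[str]
    timestamp: str
    transcript: str
    context: Optional[str]

    @classmethod
    def from_payload(cls, payload: dict) -> "TranscriptionEvent":
        data = payload.get("data", {})
        return cls(
            type=payload["type"],
            session_id=payload["session_id"],
            user_id=payload.get("user_id"),       # may be null
            timestamp=payload["timestamp"],
            transcript=data["transcript"],
            context=data.get("context"),           # null unless RAG is enabled
        )

event = TranscriptionEvent.from_payload({
    "type": "transcription_complete",
    "session_id": "room-abc123",
    "user_id": "end-user-123",
    "timestamp": "2024-01-15T10:30:00.000000",
    "data": {"transcript": "What's my order status?", "context": None},
})
```

Using `.get()` for `user_id` and `context` keeps the parser tolerant of the null values the schema allows.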

Response Format

Your webhook must respond with JSON:

```json
{
  "response": "Your order #1234 is out for delivery!"
}
```

Response Fields

| Field | Type | Description |
| --- | --- | --- |
| `response` | string | Text to be spoken back to the user |

Timeout

Your webhook has 30 seconds to respond by default. This is configurable per-agent.

If your webhook times out, the user will hear nothing for that turn, and the agent will continue listening.
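
One way to stay inside the budget is to cap the LLM call yourself and return a short spoken fallback rather than letting the webhook time out. A minimal sketch, assuming the 30-second default and an async `llm_call` callable (hypothetical, standing in for your LLM client):

```python
import asyncio

FALLBACK = "Sorry, I didn't catch that. Could you say that again?"

async def answer_within(llm_call, transcript: str, budget: float = 25.0) -> str:
    # Budget is kept below chans's 30-second default so the HTTP
    # response itself still arrives in time.
    try:
        return await asyncio.wait_for(llm_call(transcript), timeout=budget)
    except asyncio.TimeoutError:
        # A fallback beats a timeout: the user hears something
        # instead of silence for the turn.
        return FALLBACK

# Demo: a slow backend is cut off, a fast one passes through.
async def slow_llm(transcript):
    await asyncio.sleep(10)
    return "too late"

async def fast_llm(transcript):
    return f"echo: {transcript}"

slow_result = asyncio.run(answer_within(slow_llm, "hi", budget=0.05))
fast_result = asyncio.run(answer_within(fast_llm, "hi"))
```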

Authentication

If you configure an API key in your agent settings, chans includes it in the request:

```
Authorization: Bearer your-api-key
```
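
If you enable this, your webhook should verify the header before doing any work. A minimal sketch using a constant-time comparison, where `API_KEY` stands for whatever key you configured in agent settings:

```python
import hmac

API_KEY = "your-api-key"  # the key configured in your agent settings

def is_authorized(auth_header) -> bool:
    # Expects "Bearer <key>"; anything else is rejected.
    if not auth_header or not auth_header.startswith("Bearer "):
        return False
    presented = auth_header[len("Bearer "):]
    # hmac.compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(presented, API_KEY)
```

In FastAPI you would read the header via `request.headers.get("authorization")` and return a 401 when `is_authorized` is false.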

Error Handling

If your webhook returns a non-2xx status code:

  1. The error is logged
  2. The user hears no response for that turn
  3. The agent continues listening for the next utterance
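
Because a non-2xx leaves the user in silence, it is often better to catch failures inside your handler and return a 200 with a spoken fallback. A sketch, assuming a `generate` callable (hypothetical) that wraps your LLM:

```python
def safe_response(generate, transcript: str) -> dict:
    # Wrap LLM generation so failures still produce a 200 with spoken text.
    try:
        return {"response": generate(transcript)}
    except Exception:
        # A non-2xx would mean silence for the turn; a spoken fallback
        # usually degrades more gracefully.
        return {"response": "Sorry, something went wrong. Please try again."}

# Demo: a healthy backend vs. one that raises.
def failing_llm(transcript):
    raise RuntimeError("LLM backend unavailable")

ok = safe_response(str.upper, "hello")
fallback = safe_response(failing_llm, "hello")
```

Reserve real error responses for cases you want logged on the chans side, such as rejected authentication.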

Example Implementation

Node.js / Express

```javascript
// Requires JSON body parsing: app.use(express.json())
app.post('/webhook', async (req, res) => {
  const { data, user_id, session_id } = req.body
  const { transcript } = data

  // Call your LLM
  const response = await yourLLM.chat(transcript, {
    userId: user_id,
    sessionId: session_id
  })

  res.json({ response: response.text })
})
```

Python / FastAPI

```python
@app.post("/webhook")
async def webhook(payload: dict):
    transcript = payload["data"]["transcript"]
    user_id = payload.get("user_id")

    # Call your LLM
    response = await your_llm.chat(transcript, user_id=user_id)

    return {"response": response}
```

Event Types

While the main passthrough call uses transcription_complete, chans can also send async events if you configure an events URL:

| Event Type | Description |
| --- | --- |
| `transcription_complete` | User speech transcribed (sync, expects a response) |
| `llm_response` | Agent response generated |
| `session_start` | Voice session started |
| `session_end` | Voice session ended |
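
If you point the events URL at the same handler, you can route on `type`; only `transcription_complete` needs a reply. A sketch, with `generate_reply` as a hypothetical stand-in for your LLM call:

```python
def generate_reply(transcript: str) -> str:
    # Hypothetical stand-in for your LLM call.
    return f"You said: {transcript}"

def handle_event(payload: dict):
    # Route chans events; only transcription_complete expects spoken text.
    event_type = payload.get("type")
    if event_type == "transcription_complete":
        return {"response": generate_reply(payload["data"]["transcript"])}
    # llm_response / session_start / session_end are async notifications:
    # acknowledge them without producing a response body.
    return None

reply = handle_event({
    "type": "transcription_complete",
    "data": {"transcript": "hello"},
})
ignored = handle_event({"type": "session_end"})
```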